Ubuntu 11.10 on Sun X4540 ("Thumper")
I installed Ubuntu 11.10 Server x64 edition using the Java ILOM client - note that you must use a 32-bit Java client to attach a CD image. I set up an MD RAID1 across the bootable devices of controllers 0 and 1 (the first disk on each).
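Once the installer finishes it's worth confirming the mirror is healthy before carrying on. A minimal check, assuming the installer named the array /dev/md0 (yours may differ):
cat /proc/mdstat
sudo mdadm --detail /dev/md0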
Post-install I encountered a blank, black screen at the GRUB 2 stage (i.e. after the kernel selection screen). To fix this, edit the boot parameters on the kernel selection screen:
set gfxpayload=text
...
linux [...] rootdelay=90
To make these changes permanent after booting:
sudo vi /etc/default/grub
# set the following two lines in that file:
GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=90"
GRUB_GFXMODE=text
sudo update-grub
(reboot to check everything worked correctly)
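After rebooting, a quick sanity check that the kernel actually picked up the new parameter:
cat /proc/cmdline
(the output should include rootdelay=90)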
Installing native ZFS and setting up a ZFS pool
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs
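Before building a pool it's worth checking that the ZFS kernel module built and loads cleanly - a quick sanity check (exact output will vary):
sudo modprobe zfs
lsmod | grep zfs
sudo zpool status
(at this point zpool status should simply report that no pools are available)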
I created a small Perl script to set up a zpool over the remaining 46 disks. The scheme I use is 4 hot spares (the first disk on each of the 4 remaining controllers) plus 7 raidz vdevs, each built from one disk per controller (6 disks per vdev).
Usage:
Warning! This will destroy any data you have on the disks in your system. Only use this if you *really* know what you're doing.
sudo perl [script.pl] --create
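The script also takes --dump and --dry-run, which are useful for checking the detected disk layout and previewing the zpool create command before running it for real:
sudo perl [script.pl] --dump
sudo perl [script.pl] --dry-run --create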
The source code for the script:
#!/usr/bin/perl

use strict;
use warnings;
use Getopt::Long;

my $usage = "$0 [--dry-run --create --destroy --dump]";

GetOptions(
    'dry-run' => \(my $dry_run),
    'create'  => \(my $opt_create),
    'destroy' => \(my $opt_destroy),
    'dump'    => \(my $opt_dump),
    'help'    => \(my $opt_help),
) or die "$usage\n";
die "$usage\n" if $opt_help;

# Build a controller x target map of attached SCSI disks from dmesg.
my @DISKS;
open(my $fh, "<", "/var/log/dmesg") or die "Error opening dmesg: $!";
while(<$fh>)
{
    next if $_ !~ /Attached SCSI disk/;
    s/^\[[^\]]+\]\s*//;    # strip the dmesg timestamp
    die "Unrecognised dmesg line: $_" if $_ !~ /sd\s+(\d+):0:(\d+):0:\s+\[(\w+)\]/;
    my( $c, $t, $dev ) = ($1, $2, $3);
    $DISKS[$c][$t] = "/dev/$dev";
}
close($fh);

# SCSI host numbers don't necessarily start at 0, so shift until they do.
shift @DISKS while !defined $DISKS[0];

if( $opt_dump )
{
    foreach my $i (0..$#DISKS)
    {
        print "Controller $i:\n";
        foreach my $j (0..$#{$DISKS[$i]})
        {
            next if !defined $DISKS[$i][$j];
            print "\t$j\t$DISKS[$i][$j]\n";
        }
    }
}

# The first disk on controllers 0 and 1 holds the system MD RAID1 - leave them out.
$DISKS[0][0] = undef;
$DISKS[1][0] = undef;

# The first disk on each of the remaining controllers becomes a hot spare.
my @spares;
foreach my $c (2..$#DISKS)
{
    defined $DISKS[$c][0]
        or die "Missing disk at $c:0\n";
    push @spares, $DISKS[$c][0];
}

# Each raidz vdev takes one disk from every controller (targets 1 and up).
my @pools;
foreach my $i (1..$#{$DISKS[0]})
{
    foreach my $j (0..$#DISKS)
    {
        $pools[$i - 1][$j] = $DISKS[$j][$i]
            or die "Missing disk at $j:$i\n";
    }
}
for(@pools)
{
    $_ = "raidz @$_";
}

if( $opt_destroy )
{
    cmd("zpool destroy zdata");
}

if( $opt_create )
{
    cmd("zpool create -f zdata @pools spare @spares");
}

# Print each command, and run it unless --dry-run was given.
sub cmd
{
    my( $cmd ) = @_;
    print "$cmd\n";
    system($cmd) if !$dry_run;
}
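Once the pool is created, the resulting layout (7 raidz vdevs plus the 4 spares) can be confirmed with:
sudo zpool status zdata
sudo zfs list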