Dell R210 ii Server Part 2 – Initial Installation

The previous post covered updating all of the firmware and some of the hardware configuration.

I already mentioned that I installed 4 x 300GB 10K SAS drives.

I performed some initial testing of XCP-NG and Xen Orchestra by mounting the installation ISO under the console's Virtual Media manager and booting the installer from there.

Once XCP-NG was installed, I connected to the server via SSH and pulled a Xen Orchestra installer from GitHub, which set up a ‘built from sources’ appliance that does not require a license.
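
For reference, that step looks roughly like the sketch below. The post does not name the installer script, so the repository and file names (ronivay's XenOrchestraInstallerUpdater, a common way to get a from-sources build) are an assumption; check the project's README for the current instructions.

```bash
# Assumed approach: ronivay's XenOrchestraInstallerUpdater script
# (typically run inside a dedicated VM rather than on the XCP-NG host itself)
ssh root@<xcp-ng-host-or-vm-ip>

git clone https://github.com/ronivay/XenOrchestraInstallerUpdater.git
cd XenOrchestraInstallerUpdater
cp sample.xo-install.cfg xo-install.cfg   # adjust paths/ports if needed
./xo-install.sh                           # choose 'Install' from the menu
```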

I then created a local directory for ISO storage, downloaded Ubuntu and Debian ISOs into it, and created a zpool on 3 of the 300GB disks.
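
Roughly, that looks like the following; the directory path, SR name, ISO URLs and pool layout are illustrative assumptions rather than the exact values used.

```bash
# Create a local directory and register it as an ISO storage repository
mkdir -p /var/opt/iso
xe sr-create name-label="Local ISOs" type=iso content-type=iso \
    device-config:location=/var/opt/iso device-config:legacy_mode=true

# Download the installer ISOs into it (substitute the releases you want)
cd /var/opt/iso
wget https://releases.ubuntu.com/<version>/<ubuntu-iso>
wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/<debian-iso>

# Temporary test pool on 3 of the 300GB SAS disks (layout and device names
# are assumptions; on XCP-NG the zfs packages need installing first, e.g.
# yum install zfs && modprobe zfs, and by-id paths are safer than sdX names)
zpool create -f tank /dev/sdb /dev/sdc /dev/sdd
```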

I set up a couple of VMs, one with each of the Linux distros, and then started hearing a strange beeping sound from the R210 ii.

I checked dmesg and it turned out that one of my hard disks (sdc) had some issues. No worries, I have several more, so I would just swap it out.
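
A quick way to confirm that kind of fault (assuming smartmontools is available) is something like:

```bash
# Look for I/O or SCSI errors against the suspect device
dmesg | grep -iE 'sdc|i/o error'

# Cross-check with SMART data - reallocated or pending sectors are a bad sign
smartctl -a /dev/sdc
```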

After twice swapping what I thought was disk 3 and still seeing the same issue on the same disk, I did some deeper investigation.

Well, it turns out that disk sdc is not the one attached to cable number 3 but the one on cable number 1; so much for the cable numbering!

I worked out which disk it was from the WWN of the drive (even though this was not 100% identical to what was actually printed on the disk's label!).
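
Something along these lines makes the WWN and serial number of each Linux device name visible, so it can be matched against the label on the physical disk:

```bash
# Show which /dev/sdX each WWN maps to
ls -l /dev/disk/by-id/ | grep -i wwn

# Or list model, serial and WWN side by side
lsblk -o NAME,SIZE,MODEL,SERIAL,WWN
```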

Third time was a charm: after replacing this disk, the problem beeping finally went away.

The current installation is only temporary, as I have also ordered 4 x SDSA5AK 64GB half-slim SSDs from CEX for £6.00 each (the normal price elsewhere is anything up to £90.00!).

I also ordered a 1-in-5-out SATA power breakout cable from eBay for £3.95.

I already had a spare mini SAS 36-pin (SFF-8087) to 4 x SAS 29-pin (SFF-8482) cable that I could plug into the H200 RAID card.

The intention is to use the 64GB SSDs as boot devices; the XCP-NG installer can use mdadm to mirror the OS and only uses around 18GB of the boot disk(s), so the remainder is available for other uses.
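
After an install like that, the mirror and the unused space on each SSD can be checked with something like the following (the md device and disk names below are assumptions and will vary):

```bash
# Confirm the installer's software RAID mirror is healthy
cat /proc/mdstat
mdadm --detail /dev/md127        # the md device name may differ

# See how much of each boot SSD is still unpartitioned
lsblk /dev/sda /dev/sdb
```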

I will create a raid1 zpool for VM OS disks using the remainder of the boot SSDs plus the other two 64GB SSDs, and the 4 x 300GB SAS drives will be set up as a raid1/raid10 zpool for any additional storage requirements.
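
A sketch of that intended layout, with made-up pool, device and partition names that would need to match the real disks:

```bash
# Mirrored pool for VM OS disks: spare partitions on the two boot SSDs
# plus the other two 64GB SSDs (all names are placeholders)
zpool create vmpool \
    mirror /dev/sda4 /dev/sdb4 \
    mirror /dev/sdc /dev/sdd

# Striped mirrors (raid10-style) across the 4 x 300GB SAS drives
zpool create datapool \
    mirror /dev/sde /dev/sdf \
    mirror /dev/sdg /dev/sdh
```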

I am not a fan of raid5 or raidz unless a large number of disks is involved (30+, in sets of 10), as there is too much risk of more than one drive failing, and if that happens the entire raid set is lost (this has happened to me more than once).

I prefer to make use of 1-to-1 mirroring, where the chance of both disks in the same mirrored pair failing is low and up to half of the disks in the array can fail without any data loss (as long as no two failures land in the same pair).

With 4 disks in a raid5 configuration, after one disk has failed there is no longer any redundancy, and a second disk failing leads to total data loss.

With 4 disks in a raid1 or raid10 configuration, after one disk has failed half of the storage is no longer protected, but the other half still is. A second disk failure has a 2/3 chance of being in the other pair and only a 1/3 chance of being the second disk of the already degraded pair.

Raid1 or raid10 does not give the maximum usable capacity, but it does give the best redundancy with a small number of disks.

ZFS stripes data across all of the mirrored pairs in a pool, so a pool of mirrors is effectively raid10. It is also copy-on-write, which avoids creating hot spots for writes, and it stores a checksum for each block of data in the block above it, so a mirrored pool can self-heal in the case of data corruption or ‘bit flip’ events.
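
That self-healing is exercised by regular scrubs, which re-read every block, verify it against its checksum and repair it from the mirror copy if it fails. Using the placeholder pool name from the sketch above:

```bash
# Walk the whole pool, verifying checksums and repairing from the mirror
zpool scrub datapool

# Check scrub progress and any repaired or unrecoverable errors
zpool status -v datapool
```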

I don’t envisage any of my VMs requiring large amounts of storage, so roughly 110GB for OS disks and 600GB for data should be more than enough; the CPU and memory will probably start to creak long before I use up all of the storage.
