This is a multi-part build log for Project Obsidian: a low power Ubuntu 16.04 LTS NAS & container server.
You’re currently viewing part 3. Head over to the introduction for context and contents.
Despite some effort on my part, it hasn’t been possible to obtain the 6/8TB disks I’m aiming for just yet. I would have continued (and still will) to work on that, but I noticed my 16TB MDADM RAID array had been flaking out on me a little over the last few days, even going as far as no longer showing up in the system until it was rebooted. (There’s nothing wrong with the disks; it’s the server.)
So in an effort to avoid any potential data loss, I’m going to make do with what I have now: moving seven 4TB disks from my current AMD FX-6300 storage server into the Obsidian build, with a whole lot of extra data migration as a result.
I’m still aiming for the larger capacity disks, and having now decided on ZFS for my system, swapping out the 4TB disks for larger ones will be a piece of cake.
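As a rough sketch of why that swap is easy under ZFS: disks can be replaced one at a time, and the pool grows automatically once every disk in a vdev is larger. The pool name (`tank`) and device paths below are placeholders, not this build’s actual names:

```shell
# Sketch only: "tank" and the disk paths are hypothetical placeholders.
# Requires root and real devices; not something to run as-is.

# Let the pool grow automatically once all disks in a vdev are larger
sudo zpool set autoexpand=on tank

# Replace one 4TB disk with a larger one, then watch the resilver
sudo zpool replace tank /dev/disk/by-id/old-4tb-disk /dev/disk/by-id/new-8tb-disk
sudo zpool status tank
```

Repeat the replace for each disk in turn; once the last one has resilvered, the extra capacity appears without rebuilding the pool.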
Luckily, I caught the tail-end of this HotUKDeals find and was able to fetch two MyCloud 4TB external drives for £82! With those two drives plus one spare 4TB I had lying around, I set up a temporary three-disk MDADM RAID5 and proceeded to rsync all data from the 16TB RAID6 to the 8TB RAID5. A nice, simple command on Linux systems that guarantees both files and metadata (permissions, ownership, etc.) are copied is:
sudo rsync -avP /source/path/ /destination/path/
-a stands for Archive, which recurses into directories and preserves file permissions, ownership, timestamps and symlinks
-v is for Verbose, as I like to see in detail what it does
-P combines --partial (keep partially transferred files so an interrupted run can resume) and --progress, giving me a vague indication of what’s happening by streaming a list of files through the console as it copies them across.
This took the better part of a day to complete. At that point I left the new RAID5 in place for a couple of days, having mounted it in place of the old RAID6 through fstab (a seamless change on reboot), and haven’t noticed any issues.
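The fstab swap might look something like this; the UUIDs and mount point here are placeholders, not this system’s actual values:

```
# /etc/fstab — sketch only; UUIDs and mount point are hypothetical.
# Comment out the old RAID6 entry and point the same mount point at the
# RAID5, so everything referencing the old path carries on working:

# UUID=old-raid6-uuid  /mnt/storage  ext4  defaults,nofail  0  2
UUID=new-raid5-uuid    /mnt/storage  ext4  defaults,nofail  0  2
```

The `nofail` option is a handy safety net here: if an array fails to assemble, the system still boots rather than dropping to an emergency shell.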
So with the temporary RAID5 in place and the data migrated, I shut it all down and began stripping down the storage server. There’s no build video for day 2; it was all a little manic.
In the above image I’ve mounted three 4TB WD Red NAS hard drives in the two 5.25″ bays, later joined by the 120GB system SSD. After first destroying the 16TB RAID6 from within Ubuntu, I powered the server down and began disconnecting the drives in the bottom Cooler Master 915R. The beauty of a case like this is being able to mount the drives separately from the main system and easily remove the whole chassis in situations such as this.
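For the curious, destroying an mdadm array from within Ubuntu goes roughly like this. The device names below are placeholders for this system’s actual devices, and the commands are destructive, so this is a sketch only, to be run after the data is verified safe on the new array:

```shell
# Sketch: /dev/md0 and /dev/sdb1 are hypothetical device names.
# Destructive — only once the data is confirmed intact elsewhere.

sudo umount /dev/md0                      # unmount the filesystem first
sudo mdadm --stop /dev/md0                # stop the array
sudo mdadm --zero-superblock /dev/sdb1    # wipe RAID metadata from a member
# ...repeat --zero-superblock for each remaining member disk...
# Finally, remove the array's line from /etc/mdadm/mdadm.conf and the
# old entry from /etc/fstab so nothing tries to assemble it on boot.
```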
Once the disks were disconnected and the 915R uncoupled from the 925 and moved out of the way, all surplus cables were removed, leaving the 925 but a husk of the mammoth system it was before:
And no, still not cable managed. Yet.
With the storage server back up and running and everything looking good, I proceeded to transport the 915R and its disks downstairs to a waiting 915F housing the compute module.
After a lot of dusting (it’s impossible to get in there when they’re stacked), stacking the storage module on top of the compute module and connecting a whole heap of wires, it was ready to boot:
At this point you may have noticed that none of this is very black, and you’d be right. I haven’t yet wired up the all-black power cables, and given the rather quick turnaround on moving the disks, I simply reused the far-too-long SATA cables I already had. As and when the parts come in, I’ll publish some updated pictures.
The system is up and stable. I’m still not pleased about having four disks on a PCIe card and three on the motherboard, but until I can find a four-channel, 8-port SAS/SATA card that won’t cost more than the rest of the system combined (disks excluded), there’s little other choice.
So that’s all for this update. In the next I’ll cover some Ubuntu configuration and RAID setup.
There are no sponsors just yet.
Interested in helping out? Sponsors get a mention in every post and frequent shout-outs on social media. For this build I’m currently looking for high capacity drives (6-8TB), PCIe SATA/SAS solutions and cooling options aimed towards near silence.
Feel free to get in touch to discuss this or any other topics you have in mind!