In a desultory fashion, I have been converting a trash-picked classic all-in-one Mac into a whimsical NAS device (see YouTube video 1, video 2 and video 3). This device is ultimately intended to replace a dedicated Linux PC that has been serving that role (through various hardware upgrades) for about seven years now.
This project has been somewhat on hold because I hadn’t bought new hard drives for it yet. Part of the reason I was in a paralysis of indecision here is that I hadn’t settled on the right “brain” for this box. Ideally, I wanted a small, low-power fanless ARM-based board (running a reasonably normal Linux distribution) with multiple SATA ports to connect directly to the hard drives. Unfortunately, this doesn’t really exist. The next best option would be a similar board with USB 3.0 ports; again, there is no good candidate for that role either. Of course, I have a lot of assorted potential “brains” of different capabilities lying about (NextThing CHIP, Raspberry Pi, Beaglebone Black, etc), but I didn’t have a way to decide which one would be best until I bit the bullet and bought hard drives. The other weekend, I finally scared the moths out of my wallet and bought a couple of Western Digital 8TB My Book external USB 3.0 drives (WDBBGB0080HBK-NESN). Interestingly, it’s cheaper to buy these packaged external drives than it is to buy the same HDD bare (about $180 each at the time of writing, vs $240 for the drive by itself). I figured that whatever route I went, I could either use the drives as-is over USB, or crack them open and extract the media if I wound up using a direct SATA connection.
For infrastructure reasons, my fastest setup path was to use a Windows PC to consolidate the 2TB and 3TB drives in my existing NAS onto these 8TB spinners. The volume size, and the sizes of the files that need to reside on this drive, precluded the use of FAT32. Since I knew that Linux NTFS support is based on reverse-engineering, while exFAT support was based on actual source code access, I decided to format the new drives as exFAT, and transferred all the data onto those exFAT volumes. This turned out to be a strategic catastrophe, as you will learn later, but after two days of copying files over 100BASE-T I finally shut down the old server (my office is eerily quiet now) and started eagerly on setting up my first brain.
My first candidate brain was an old Raspberry Pi Model B (700MHz single-core ARM, 100Mbps Ethernet, 2 USB 2.0 ports via an onboard hub). I hadn’t even powered this board up in years, but it’s still supported by the latest Raspbian distro (2017-11-29-raspbian-stretch) and it’s not horribly difficult to set up. The hardware is very suboptimal – besides the slow CPU, there’s really only one USB port on the SoC, whose bandwidth is shared across three devices (two USB ports plus Ethernet) by an onboard hub chip. The BCM2835 SoC’s USB support is also heavily dependent on software rather than handling things in hardware, which is another strike (the Beaglebone would have been a much better choice in this regard). However, my needs aren’t very demanding so I figured it was worth trying out and benchmarking, at least.
Setup isn’t too complex. I want to be able to access the USB drives via SMB, I want to be able to SSH into the box, and I also need to VNC into an X desktop on this machine (it needs to run a couple of graphical Linux apps).
Because the RPi autodetects composite vs HDMI video at boot, and I want more X11 resolution than TV mode, I need to edit /boot/config.txt and uncomment/add the following (mode 16 is 1024×768@60Hz):
hdmi_group=2
hdmi_mode=16
Setting up VNC is easy enough through Raspbian’s GUI. Depending on the VNC client you’re using (I use UltraVNC on Windows 10) you may need to tinker with authentication settings.
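If you’d rather do it headless over SSH, the same thing can be enabled from the command line (this assumes the RealVNC server that ships with Raspbian stretch):
sudo raspi-config
# then: Interfacing Options -> VNC -> Yes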
Configuring samba and the hard drive is also simple. To add exFAT support and mount the drive:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install exfat-fuse
sudo mkdir -p -m 1777 /share/8tb_1
sudo blkid
(the last command will print the UUID of the attached USB drive)
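For an exFAT-formatted drive, the relevant blkid line will look something like the following (the device name, label and UUID here are invented for illustration; exFAT UUIDs are short XXXX-XXXX values):
/dev/sda1: LABEL="My Book" UUID="ABCD-1234" TYPE="exfat"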
Then edit /etc/fstab to add (where xxxx is the UUID provided by blkid above):
UUID=xxxx /share/8tb_1 exfat defaults,auto,umask=000,users,rw 0 0
sudo mount -a will mount the above drive, and now you’re good to edit /etc/samba/smb.conf and (besides setting basic computer/workgroup information as normal) add a share stanza like:
[share]
   comment = 8TB drive 1
   path = /share/8tb_1
   browseable = yes
   writeable = yes
   guest only = no
   create mask = 0777
   directory mask = 0777
   public = yes
   guest ok = yes
Yes, this is on a private network. No, these aren’t good defaults, though they do make life a bunch easier if you, like me, are connecting to these shares from Windows, Linux, Android and MacOS. This isn’t going to be an article about samba security.
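With the stanza in place, a quick syntax check and a daemon restart makes the share visible (service names as on stock Raspbian stretch):
testparm -s
sudo systemctl restart smbd nmbd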
I also added a samba sharepoint for a scratch directory on the 8GB class 4 SD card I was using, mainly for benchmarking reasons. Speaking of benchmarks, it’s time to benchmark. I used a roughly 120MByte file for testing up (to RPi) and down (from RPi) speeds. The other end of this test was a Windows 10 PC connected via 100BASE-T Ethernet to the same, uncongested network as the RPi, and the file was being transferred to/from that PC’s internal SATA drive. From testing USB 3.0 transfers, I know that the internal drive of that PC can sustain at least 50MBps file transfers in both directions with no difficulty.
Benchmark? Super meh. Download (from SD card to computer) – 4.5MBps. Upload (from computer to SD card) – around 7.25MBps, but also very sawtooth. Download from exFAT USB HDD to computer, 5.2MBps; upload, 6.43MBps. The SD sawtooth likely reflects buffering and block erase times in the flash filesystem and the controller inside the SD card itself. The SDHC spec stipulates a minimum of 4MBps sustained write for class 4 cards, so these results aren’t entirely sensible, but they’re interesting. There are an awful lot of factors in this result – the samba protocol itself, the CPU speed of the RPi, the buffering settings of the (ext4) filesystem in use on the card, the performance of the underlying Linux block driver, and the vagaries of both the card’s onboard controller and the NAND flash comprising the storage array itself. It is crucial to reiterate that my benchmark file for these tests was about 120MBytes in size; this is highly relevant to the Robinson Crusoe moment you’re going to read about later. However, it’s important to be aware that 5MBps is enough to support playing back a 1080p MP4, which is all I need. The real annoyances will be bulk moving and copying of files onto and off the storage array.
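Incidentally, to separate samba and network overhead from raw disk throughput, a quick local test on the Pi itself is worthwhile; something like the following, using the mountpoint created above (the test file is throwaway):
# sequential write of ~1GB, forcing data to the platters before dd exits
dd if=/dev/zero of=/share/8tb_1/ddtest bs=1M count=1024 conv=fsync
# drop the page cache so the read test isn't served from RAM
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
# sequential read of the same file
dd if=/share/8tb_1/ddtest of=/dev/null bs=1M
rm /share/8tb_1/ddtest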
Hmm. My next thought was to use one of my spare ThinkPad X230 laptops. This machine has two USB 3.0 ports and one USB 2.0 port, and is assuredly frisky enough to provide a very good experience – it also has 802.11ac WiFi and wouldn’t even need an Ethernet connection (though I’d still add one). Unfortunately, as you can see, it won’t fit in the Mac chassis. Even if I removed the screen half of the clamshell, this machine is just too large. I did briefly think about using my old PowerPC Mac Mini, but put it towards the bottom of the consideration pile because of its high power consumption, only two USB 2.0 ports, need for an old PATA boot device, and physical size. It would have outperformed the RPi, though. What I really would have liked to use, if I had one handy, is something like an HP Stream 200 mini all-in-one, but I didn’t buy one when the impulse took me a couple of years ago, and they are discontinued now. Besides, I really wanted to use something I had on hand, if possible. And luckily, I had something eminently suitable in my junkpile AND I’m an inveterate Amazon shopper…
Brain 2 – is this sounding like Young Frankenstein yet? – is an old Compaq Mini 110 (now HP) netbook. This is close to the bottom, but not the utter bottom end of netbooks, from the age when those were a thing. It has a single-core hyperthreaded 32-bit Atom N270 CPU, 1GB RAM (on a removable SODIMM, but not expandable), internal Broadcom b/g/n WiFi, three USB 2.0 ports, a webcam and a VGA port. The internal HDD interface is SATA. I bought this machine, with charger, dead battery, and what turned out to be a dead hard drive also, on eBay for $11 including shipping, some time ago. As it turns out, I had some kind of Amazon fugue state situation going on where I ordered a tiny 16GB SATA drive for it, installed it, and completely forgot about it. So when I pulled this machine out of my “misc. laptops” drawer, I was intending to boot it off a USB device and see if I could make it useful that way – I was super happy to find that it already had a usable internal drive and I immediately slapped Debian 9 on it after first measuring to see if it fit in the Mac chassis – which it does nicely, as you can see. Due to the small drive, I configured it with no swap, on the assumption that since the RPi is fine with no swap and 512MB RAM, x86 Debian should be happy enough with a 32-bit distro and 1GB RAM.
Getting the OS configured to my requirements was a bit more involved than on the Raspberry Pi, mainly because I haven’t used LXDE in a while and don’t remember anything about it. I wanted the box to autologin to an X11 desktop and start a VNC server sharing that desktop (NOT a separate X session – the machine doesn’t have enough RAM). To do this, one must:
Edit /etc/lightdm/lightdm.conf, look for the seat defaults section ([Seat:*] on Debian 9, not the [LightDM] section) and uncomment:
autologin-user = (name of user here)
Install TigerVNC server using apt-get install tigervnc-scraping-server
Use vncpasswd to generate a ~/.vnc/passwd file
Add the TigerVNC server to the startup items as x0tigervncserver PasswordFile=/home/username/.vnc/passwd (you can either do this manually or by editing ~/.config/lxsession/LXDE/autostart; see the sketch below). This point took me an hour to figure out, because even though the autostart file is user-specific, and resides in the user’s home directory, the entries in it are not launched through a shell – so ~ is never expanded, and PasswordFile=~/.vnc/passwd will silently break. The server will be listening, but it will reject all passwords; very puzzling until you figure out what’s going on.
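As a concrete sketch of that last step (username is whatever account auto-logs in; if your panel and desktop vanish after creating a per-user autostart file, copy the entries from /etc/xdg/lxsession/LXDE/autostart into it as well, since on some setups the user file replaces the system one rather than supplementing it):
mkdir -p ~/.config/lxsession/LXDE
echo 'x0tigervncserver PasswordFile=/home/username/.vnc/passwd' >> ~/.config/lxsession/LXDE/autostart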
Note that most of the guides to installing VNC servers on Debian wind up installing a secondary session running on host:5901. This is fine, except that the machine I’m using gets into an ENOMEM state rapidly when trying to run two sessions at once. Maybe that’s fixable by adding swap, but a superior option is just to have a single X session shared by the scraping server at host:5900.
I also set up samba using essentially the same configuration data as for the RPi above; nothing exciting or different there.
Benchmarkity time. 11.2MBps down from the SSD, 11.3MBps up. Downstream speed from the USB HDD was essentially the same as from the internal SSD. Yay? Well… no. Not yay. I had observed that copy operations to the USB drive took a really long time to get going; as much as 30 seconds before a 120MB file started transferring. After running these tests, I wanted to try a longer sustained write, so I selected a file about 2GB in size. This test succeeded to the internal drive, but consistently failed to the USB drive, with error 0x8007003B. I didn’t try this copy operation from other OSes, so I don’t know if this was Windows-specific. Googling that error leads to various advice like limiting the maximum SMB protocol version, which I tinkered with – but it didn’t help. I went back to the Raspberry Pi and found that the symptom also replicates on that platform, so it appeared to be something inherent to the drive, or to the Linux samba stack in general.
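For the record, the protocol-capping advice amounts to something like this in the [global] section of smb.conf (it made no difference here):
[global]
   server max protocol = SMB2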
Then, on a hunch, I tried using a drive formatted NTFS. Not only did this work fine without any errors, it also didn’t show the weird hang on starting a copy operation. Again, I verified this on both the x86 and the RPi. So this issue appears to be with exFAT support – either the filesystem driver is buggy, or whatever it does when starting a long copy operation (allocating space, maybe) is causing samba to timeout internally. In fact, it might even be a bug that’s specific to large files >2GB on a 32-bit operating system. Anyway, given that NTFS is also not really fully trustable in Linux (IMHO; if this is your sacred cow, please give its ears a good skritch and tell it to ignore the bad man), this means I either need to acquire at least one more 8TB drive, format it as ext4 and transfer data across, or use the old drives from the former NAS as a similar staging area.
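When that staging shuffle happens, the reformat itself is the easy part; roughly the following, assuming the new drive enumerates as /dev/sdb (check with lsblk or blkid first, since a mistake here is destructive, and an 8TB volume needs a GPT partition table):
sudo parted /dev/sdb mklabel gpt
sudo parted -a optimal /dev/sdb mkpart primary ext4 0% 100%
sudo mkfs.ext4 -L 8tb_1 /dev/sdb1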
So I’m stopped again until I have a weekend to do another bulk data transfer – but at least I know exactly what I’m dealing with, and have a demonstrably workable plan now. Either of the computers above will do for the application, but my preference will be to use the netbook if possible. There are two issues I’ll have to solve for that. First, since the machine will be locked away inside the Mac housing, I’ll need to figure out how to get it to power up automatically. Possibly, a cap across the power switch (it’s actually a momentary switch) will provide enough of a hold-down to trigger the powerup logic; if not, I can wire an externally accessible power button across the switch. Secondly, I have to tweak power management so that the machine will stay powered up while the lid is closed – at the moment, Debian goes to sleep every 15-20 seconds while the lid is closed, and flicking the power switch only wakes it up for another 15-20 seconds (a likely fix is sketched below). Neither of these seems like a major issue, but if either one proves insurmountable, the Raspberry Pi is an acceptable, if less desirable, option.
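The lid-closed sleep should mostly come down to systemd-logind’s lid handling. A sketch, assuming stock Debian 9: uncomment and set these in /etc/systemd/logind.conf
HandleLidSwitch=ignore
HandleLidSwitchDocked=ignore
and then apply with:
sudo systemctl restart systemd-logind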