Adventures in Home Computing
In July, my server at home died. It was a very old FreeBSD system with a Pentium 4 CPU, half a gig of RAM, and the last CRT in the house. I had been using it mainly as a file server. I spun up a VM on my cloud provider, restored a few critical files to it, and then did nothing about the server for months. When I finally returned to it, I learned two things: there was no easy (cheap) way to revive the hardware, and my storage archive was inaccessible.
The server had been equipped with a hard drive toaster, an external docking station that let me plug in bare, internal SATA hard drives. I was using this for my storage archive. I had amassed more than 50 TB of media files on more than 20 hard drives. The problem is that I was using FreeBSD’s native UFS filesystem, which Windows cannot mount and Linux supports only partially (read-only, at best). In practice, I could only access the files from a working FreeBSD system.
I screwed around with a NUC for a bit, installing Rocky Linux on it. After I realized my storage problem, I installed FreeBSD on it. I was able to access the files on the disks, but where should I put them to make them more accessible? I am currently moving the archive to external 2.5-inch USB hard drives. They come formatted for NTFS, so they can be attached and mounted directly on a wide range of systems.
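A minimal sketch of that workflow on FreeBSD, assuming the toaster disk shows up as /dev/da0 and the USB drive as /dev/da1 (device nodes and partition numbers are guesses; yours will vary):

```shell
# Mount the archived UFS disk read-only
# (check the actual partition layout with: gpart show da0)
mkdir -p /mnt/ufs /mnt/ntfs
mount -t ufs -o ro /dev/da0p2 /mnt/ufs

# NTFS write support on FreeBSD comes from the fusefs-ntfs package,
# not the base system
pkg install -y fusefs-ntfs
kldload fusefs
ntfs-3g /dev/da1s1 /mnt/ntfs

# Copy the archive across, preserving timestamps
rsync -av /mnt/ufs/media/ /mnt/ntfs/media/
```

The read-only mount is deliberate: there is no reason to risk writes to an archive disk you are draining.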
That’s fine for my archive, but what about files I need to access readily? I finally invested in something I have been wanting for years: a NAS appliance. I had been looking at Synology, but they had recently pulled a fast one, locking their customers out of commodity hard drives and forcing them to use Synology’s own private-label drives. They have mostly walked that back, but only after they showed their true colors. So I bought a UGreen NASync DXP6800 Pro, a 6-bay NAS appliance. I filled it with 20 TB Western Digital “Red” NAS drives. In a RAID 5 configuration, this yields 100 TB of redundant storage.
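The capacity arithmetic is simple: RAID 5 spends one drive's worth of space on parity, so usable capacity is (N − 1) × drive size:

```shell
# RAID 5 usable capacity: one drive's worth of space goes to parity
drives=6
size_tb=20
usable=$(( (drives - 1) * size_tb ))
echo "${usable} TB usable"   # prints: 100 TB usable
```

The array survives one drive failure; two simultaneous failures lose the array, which is one reason the archive copies on the USB drives still matter.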
The NAS has a few bonus features, and one of them is that it can host VMs. I didn’t take it seriously at first, but I’ve started to appreciate it. Among other things, it means I have an easy place to experiment with alternate operating systems. I expanded the RAM in the NAS for this purpose. I have a Rocky Linux VM running on it, and I have built VMs I can start up on-demand running FreeBSD and Mint. I intend to build some Ubuntu and Debian VMs, so I can explore and learn about the differences.
Losing my server has made me nervous about data backups. One thing the UGreen NAS doesn’t do well is backups. Yes, it comes with a backup function, but it doesn’t work very well. It is glommed onto their general-purpose app, it has a few show-stoppers, and it lacks the functionality of a good backup agent. I couldn’t even back up my whole Windows workstation. After a little research and a couple of trials, I bought a license for Acronis True Image. My Windows workstation and my laptop are now being backed up to my NAS, quietly and reliably.
Getting the backups to work led me to discover bad spots on the hard drive in my workstation. It is equipped with an SSD for the C: drive and a spinning disk for my workspace, where I dump and edit photos, video, and audio files. Unbeknownst to me, the spinning disk had been developing bad spots. So I invested in a high-performance 4 TB M.2 NVMe SSD. The thing is fast: 50 times as fast, if my little performance test is correct. And of course it is silent.
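A quick way to check a spinning disk for developing bad spots is smartmontools (the device name here is an assumption):

```shell
# Install smartmontools first (pkg install smartmontools on FreeBSD,
# dnf install smartmontools on Rocky Linux)

# Full SMART report for the disk
smartctl -a /dev/sda

# The attributes that flag developing bad spots
smartctl -A /dev/sda | grep -Ei 'reallocated|pending|uncorrect'
```

Nonzero raw values for reallocated or pending sectors usually mean the drive is quietly remapping failing sectors, and it is time to replace it.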
All of this has renewed attention to my cloud servers, most of which are getting quite old. My “new” web server is seven years old. WordPress has started complaining that the version of PHP it’s running is no longer supported. Building new servers is easy, but migrating applications from one server to another, especially countless web sites, is time consuming. Case in point, in seven years I haven’t actually migrated all of the sites from my “old” web server to my “new” one.
So, I have finally invested time to accomplish the challenging task of automating web site migrations. Using a set of Python and bash scripts on my web servers, primary DNS server, and SQL server, I now have a solution to quickly and easily move web sites from one web server to another. At this point, I am proceeding cautiously, for fear of bugs. One of the problems I am facing has to do with certificates.
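My scripts are specific to my setup, but the core of a single-site move looks roughly like this (host names and paths are placeholders, and the databases already live on the separate SQL server, so they stay put):

```shell
# Copy the document root, preserving ownership and permissions
rsync -a /var/www/example.com/ newserver:/var/www/example.com/

# Copy the virtual-host config (Apache-style path assumed)
scp /etc/httpd/conf.d/example.com.conf newserver:/etc/httpd/conf.d/

# On the new server: sanity-check the config and reload
ssh newserver 'apachectl configtest && systemctl reload httpd'

# Finally, point the site's A record at the new server on the
# primary DNS server and reload the zone (tooling varies; mine is scripted)
```

The DNS step is where the caution comes in: until the record flips, the old server keeps receiving traffic, so both copies have to stay consistent during the cutover.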
Most of my web sites are secured with TLS, which requires certificates for each site. I am using Certbot to automate the acquisition and renewal of certificates issued by Let’s Encrypt. Let’s Encrypt does a number of things to verify authority over the site being protected, and Certbot keeps some data files on my web server related to the certificates and the account that was created for my server. Migrating the certificates themselves is easy, but without the data files, will Let’s Encrypt renew them? No. And if the certificates were issued to a different account (for a different server), will Let’s Encrypt renew them? It turns out, also no. So my choices are to delete and recreate certificates as I migrate sites, or reconfigure the new server to have the same account as the old server. I’m going to try the latter.
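Certbot keeps all of that state (the ACME account keys, the per-certificate renewal configs, and the live/ symlinks into archive/) under /etc/letsencrypt, so the approach I'm trying amounts to moving that tree intact, symlinks and all:

```shell
# Archive the whole Certbot state directory; tar preserves the
# live/ -> archive/ symlinks that certbot depends on
tar -czf letsencrypt.tar.gz -C /etc letsencrypt

# Copy it to the new server and unpack in place
scp letsencrypt.tar.gz newserver:/tmp/
ssh newserver 'tar -xzf /tmp/letsencrypt.tar.gz -C /etc'

# Dry-run a renewal on the new server to confirm the account carried over
ssh newserver 'certbot renew --dry-run'
```

The dry run exercises the full renewal path against Let’s Encrypt’s staging environment without burning rate limits, so it is a safe way to find out whether the account actually transferred.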
On top of all this, my employer is pushing migrations to Microsoft Azure and a movement toward DevOps. I am actively learning new technologies and methodologies. I think I may develop a couple of cloud applications on Azure. I will use the opportunity to gain first-hand experience with CI/CD methodologies. And now is a good time to learn Python properly.
Why Rocky Linux? For two decades before my current job, my preferred Unix was FreeBSD. When I started with my current job, I was required to use Red Hat Enterprise Linux (RHEL) and Oracle Linux (also based on RHEL). It became the Linux flavor I was comfortable with, and I had never gotten deep into Ubuntu or Debian-based Linuxes. For my personal needs, I started using cloud VMs running CentOS. Then a few years ago, after Red Hat had bought CentOS, they folded it into their development cycle and it became non-viable as a production operating system. I went looking for an alternative and landed on Rocky Linux. I haven’t been disappointed yet. It is a clean, no-nonsense RHEL rebuild, releasing just a couple of weeks after each Red Hat release.