Home Cloud

Introduction

This article describes the high-level layout of my computing infrastructure at home, facetiously called my “home cloud”. I have been operating compute resources beyond a desktop for a long time (covering Windows Servers, SunRay setups, fiber-channel arrays, etc.), but over the years I slowly drifted to the current setup, which builds largely on inexpensive commodity hardware and openly available software… kinda like the large public cloud providers.

Talking about the public cloud: why not go that route? For one, I am deeply distrustful of most PaaS and SaaS providers when it comes to respecting my privacy. Unlike commercial customers, I cannot negotiate a reasonable service and confidentiality agreement, leaving me with the generally very unfavorable standard Terms of Service that Google and others provide. In addition, there have been plenty of incidents where such large providers have not even honored their contractual obligations towards their commercial customers. While Amazon and other IaaS providers may pry slightly less into my privacy and personal life, they are extremely expensive to operate (both administratively and financially) and not really suitable for running a meaningful set of services for home use.

Given that there will likely always be at least a small storage server at home to facilitate rudimentary file sharing, basic local backup, and some other minimal services, it was quite natural to see this setup evolve over the years into a “mini cloud” at home that provides most basic services. Leveraging virtualization and other technologies, I can provide “right-sized” fractional compute and storage resources to me and my family. The rest of this article talks about the technologies and products that I found useful in creating this environment.

Data Storage

FreeNAS is an open source operating system based on FreeBSD. It leverages a number of highly useful BSD features and packages them up into a nice Network Attached Storage (NAS) server. FreeNAS comes with a web-based GUI, making it very easy for BSD novices to configure and manage the environment. At its core, FreeNAS unlocks FreeBSD’s support for ZFS and enables users to manage even relatively complex sets of storage disks without becoming a disk management specialist.

In my setup, I run FreeNAS on a 32 GB system built around a simple server-class Intel Atom board. The boot device can actually be a USB drive, since all configuration parameters can be maintained as part of the ZFS storage volumes, making updates and moves quite easy. I have four small 1 TB laptop drives in an IcyDock enclosure that make up my main array (ZPool, or simply pool, in ZFS parlance). The pool uses one of ZFS’s redundant RAID modes, giving me 2 TB of effective storage. A second pair of slower 3.5” 2 TB drives provides a mirrored 2 TB volume that I use for “slow storage” such as backups. The server board has four Gigabit network ports, one of which faces the internal network. A second port connects into a physically separate subnet that is exclusively reserved for iSCSI traffic.

ZFS comes with a number of great additional features that are perfect for managing storage. Because it implements copy-on-write (COW), it is trivial to enable simple snapshots that can be preserved for a defined time. This is very useful for restoring old versions of data in case of data corruption, client-side ransomware, etc. In addition, the snapshots (or more precisely: the deltas) can easily be sent via SSH to a remote ZFS system, making offsite disaster recovery to another home location quite possible and surprisingly reliable.
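
To illustrate the mechanism, here is a minimal sketch of such an incremental replication job (FreeNAS has built-in replication tasks that do this for you; the pool/dataset names and the remote host below are purely hypothetical):

    #!/usr/bin/env python3
    # Sketch: incremental ZFS replication over SSH. Assumes a local
    # dataset "tank/data" and a remote host "offsite" with a receiving
    # dataset "backup/data" -- all hypothetical names.
    import datetime
    import subprocess

    DATASET = "tank/data"
    REMOTE = "offsite"
    REMOTE_DATASET = "backup/data"

    def replicate(prev_snap):
        new_snap = datetime.datetime.now().strftime("auto-%Y%m%d-%H%M")
        # Create the new snapshot...
        subprocess.run(["zfs", "snapshot", f"{DATASET}@{new_snap}"], check=True)
        # ...then pipe only the delta since the previous snapshot into
        # "zfs receive" on the remote side via SSH.
        send = subprocess.Popen(
            ["zfs", "send", "-i", f"{DATASET}@{prev_snap}",
             f"{DATASET}@{new_snap}"],
            stdout=subprocess.PIPE,
        )
        recv = subprocess.run(
            ["ssh", REMOTE, "zfs", "receive", "-F", REMOTE_DATASET],
            stdin=send.stdout,
        )
        send.stdout.close()
        send.wait()
        return recv.returncode == 0

Because only the delta travels over the wire, even a modest residential uplink can keep a remote copy current.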

These ZFS pools can then host different datasets (filesystems) and volumes (raw storage for iSCSI), which can in turn be shared out for different purposes and through a variety of protocols. I am using CIFS (Windows and generic shares), NFS (Linux server storage), and AFP (MacOS, especially for Time Machine backups). One note: it proved to be most beneficial to create a dedicated AFP share for each Time Machine backup location, as sketched below. You can also share via WebDAV, FTP, SFTP, and others, depending on your needs.
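
Creating one dataset per Time Machine client also lets you set a quota per machine, so no single Mac can eat the whole pool. A sketch with hypothetical names (in FreeNAS this is all done through the GUI):

    # Sketch: one ZFS dataset per Time Machine client, each with a quota.
    # Pool and dataset names are hypothetical.
    import subprocess

    for mac, quota in [("macbook-anna", "300G"), ("imac-family", "500G")]:
        subprocess.run(
            ["zfs", "create", "-p",
             "-o", f"quota={quota}",      # cap each backup share
             "-o", "compression=lz4",     # cheap space savings
             f"tank/timemachine/{mac}"],
            check=True,
        )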

FreeNAS also offers “Plugins”, i.e. pre-packaged add-ons that are well isolated from the core operating system (through jails, a kernel-enforced isolation mechanism conceptually similar to Solaris zones or Docker containers).

One note of caution: while I was really not enthused to invest in ECC memory, fully supported NICs, and a relatively decent server board, it ultimately turned out to be a hard requirement. I ran FreeNAS for more than a year on commodity desktop hardware with few issues. However, I did notice the occasional kernel panic and random reboot. This is bad for any computer, but for a ZFS-based system with a huge in-memory filesystem cache (ARC), any in-memory bit-flip can be a significant reliability issue. As such, I finally upgraded the FreeNAS box to a more appropriate server board with 32 GB of ECC RAM.

Virtualization

Compute capabilities are provided through a XenServer pool with two Shuttle-PC-based machines. Both have 16 GB of RAM (which is getting short these days) and a minimal local boot disk. Both boxes have at least two NICs (one for the internal network, one for the iSCSI network). Running XenServer is really not very difficult: using the provided XenCenter (or alternatively the freemium option of XOA – Xen Orchestra), it is straightforward to create the XenServer pool, configure the networks, attach the iSCSI storage and CIFS ISO image storage from FreeNAS, and finally create VMs.

Running XenServer in a pool makes it a lot easier to upgrade the servers and also manage the overall system: since VMs can be configured to scale their memory between a lower and an upper bound, it is possible to optimize RAM usage for normal operations and leverage the entire available RAM space on both servers. For upgrades, however, I have to move all production systems to the server currently not being upgraded – this is where the lower memory bound comes in handy, since I can squeeze the guest systems onto a single XenServer for a short time without too many issues. This obviously implies that all guests use iSCSI volumes for their disk services to make the transition between XenServers possible; a sketch of such an evacuation follows below.
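
For illustration, this is roughly what evacuating one pool member looks like through the XenAPI Python bindings (host names and credentials are hypothetical; XenCenter’s “Maintenance Mode” achieves the same thing):

    # Sketch: live-migrate all running VMs off one pool member before an
    # upgrade, using the XenAPI Python bindings (pip install XenAPI).
    import XenAPI

    session = XenAPI.Session("https://xen1.home.lan")   # hypothetical host
    session.xenapi.login_with_password("root", "secret")
    try:
        hosts = {session.xenapi.host.get_hostname(h): h
                 for h in session.xenapi.host.get_all()}
        src, dst = hosts["xen1"], hosts["xen2"]
        for vm in session.xenapi.host.get_resident_VMs(src):
            rec = session.xenapi.VM.get_record(vm)
            # Skip the control domain and anything not running.
            if rec["is_control_domain"] or rec["power_state"] != "Running":
                continue
            session.xenapi.VM.pool_migrate(vm, dst, {"live": "true"})
    finally:
        session.xenapi.session.logout()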

Core / Print

Core services I host myself include DNS resolution and NTP. DNS runs on its own server to provide native name resolution without forwarding to external DNS servers: this makes it a little harder for the ISP to track my DNS queries, and it enables effective internal DNS naming conventions that are not resolvable from the outside. I am managing this server and the DNS zone files (like all other Linux-based servers) through Webmin, which is proven, effective, and quite easy to use. While I was considering a secondary DNS server at some point, it really does not make a lot of sense in a very small virtualized environment. NTP is useful to make sure that all local resources stay within a relatively small time-drift window.
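
As a quick sanity check, something like the following confirms that internal-only names resolve against the internal server (the server IP and zone name are assumptions; this uses the dnspython package):

    # Sketch: query the internal DNS server directly for an internal-only
    # name. Server IP and zone are hypothetical.
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["192.168.1.53"]       # internal DNS VM (assumption)
    for record in resolver.resolve("freenas.home.lan", "A"):
        print(record.address)

The same query pointed at an external resolver should fail, which is exactly the point.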

Print services are mostly not necessary; however, a CUPS server does come in handy if you need to present legacy (i.e. non-AirPrint-capable) printers to iOS or other mobile devices. Setting up the printer on the CUPS server and having it advertised through mDNS is really simple and allows even more arcane printers to be used for an extended period of time. One caveat: as soon as you start to segment your network, it becomes incredibly hard to have the mDNS/Bonjour service multicast into all segments; Bonjour really only works well within a single broadcast domain. This was ultimately one of the drivers for me to migrate my traditional network to a VLAN-enabled design, which now allows me to route all packets to the XenServer guest providing print services while limiting access for all others, effectively enabling appropriate network segmentation.
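
To show what the advertisement amounts to, here is a toy version using the python-zeroconf package (printer name, queue path, and address are hypothetical; in practice CUPS and Avahi handle this for you):

    # Sketch: advertise an IPP print queue via mDNS/Bonjour, roughly what
    # CUPS/Avahi do for a shared printer. All names/addresses hypothetical.
    import socket
    from zeroconf import Zeroconf, ServiceInfo

    info = ServiceInfo(
        "_ipp._tcp.local.",
        "Legacy Laser._ipp._tcp.local.",
        addresses=[socket.inet_aton("192.168.20.5")],  # CUPS guest (assumption)
        port=631,
        properties={"rp": "printers/legacy_laser", "ty": "Legacy Laser"},
    )
    zc = Zeroconf()
    zc.register_service(info)  # announcement lives as long as the process
    try:
        input("Advertising... press Enter to stop.\n")
    finally:
        zc.unregister_service(info)
        zc.close()

Note that these announcements are multicast, which is why they do not cross segment boundaries without help.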

PKI

While the core services are only internally facing (at this time), it makes a lot of sense to operate a very simple PKI: some services actually require HTTP/S, and it becomes quite annoying to trust a multitude of self-signed certificates all the time. This can also lead to end-user fatigue and ultimately a less secure environment. Having a single root CA makes it easy to add this trust anchor to all end devices and – as such – enable secure connectivity.

While a solution like Dogtag would be really cool to have, it is also a lot of work to properly manage. Given the number of certificates that I need (10–15 right now), and considering that I am not really too worried about revocation, a simple offline PKI management tool like XCA is completely sufficient. It allows me to create templates for HTTP/S server and (eventually) user and device certificates, and keeps all data in a small, easy-to-back-up database file.
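
For the curious: the root certificate XCA generates is conceptually nothing more than a self-signed certificate with the CA flag set. A sketch with Python’s cryptography package (names and lifetimes are illustrative):

    # Sketch: a self-signed root CA, conceptually what XCA produces.
    # Uses the "cryptography" package; names/lifetimes are illustrative.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home Root CA")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                       critical=True)
        .sign(key, hashes.SHA256())
    )
    with open("home-root-ca.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

This PEM file is the trust anchor that gets imported into every end device.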

Owncloud

One of my most critical systems is Owncloud: it allows me to maintain a “real” home directory that is synchronized across different (and changing) clients. Using WebDAV and the synchronization mechanism, I can select the appropriate documents for a particular endpoint (personal laptop, family desktop, phone or tablet, etc.), knowing that changes will eventually ripple through my entire ecosystem. At the same time, I am not depending on the goodwill (or lack thereof) of cloud storage providers such as Google to honor their pledge not to sift through my data.

I initially used this only for myself, but eventually found that all family members can benefit from it. My current standard setup is to use the default desktop storage location (for MacOS: /Users/username/Documents) and point the Owncloud desktop client right there. This ensures that all truly critical documents get replicated to the FreeNAS server (which hosts the Owncloud guest’s storage and the NFS location with all its data), and then further on to all other endpoints. Even if a user is traveling or working outside of the house, they still have native access to their files.

Since Owncloud uses WebDAV for its replication, it is really easy to add other services (such as OmniFocus) into this environment; a small example follows below. Similarly, Owncloud can make calendar and contact services available via CalDAV and CardDAV, respectively. Finally, it is now also possible to use online editing solutions such as the Collabora CODE environment to enable collaborative editing as well. Owncloud (and similar projects such as NextCloud) also enables developers to add their own extensions, making the platform highly customizable. Existing extensions include multi-factor authentication, making it potentially a system that I may eventually expose beyond the firewall.
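
Because it is all WebDAV underneath, any HTTP client can talk to it. A quick sketch with Python’s requests (hostname, credentials, and paths are hypothetical; /remote.php/webdav/ is Owncloud’s standard WebDAV endpoint):

    # Sketch: upload and fetch a file via Owncloud's WebDAV endpoint.
    # Hostname, credentials, and file paths are hypothetical.
    import requests

    BASE = "https://owncloud.home.lan/remote.php/webdav"
    AUTH = ("alice", "app-password")

    # Upload (WebDAV PUT)...
    with open("notes.txt", "rb") as f:
        r = requests.put(f"{BASE}/Documents/notes.txt", data=f, auth=AUTH)
        r.raise_for_status()

    # ...and download it again (plain GET).
    r = requests.get(f"{BASE}/Documents/notes.txt", auth=AUTH)
    r.raise_for_status()
    print(r.text)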

Zoneminder

Keeping an eye on the house is important, especially when you are traveling or spending time away from your home. I had been looking for a reasonable solution for quite some time, but the standard home-surveillance packages are either annoyingly low-tech or really expensive. After finding the open source Zoneminder package, I made a plan to use very cheap cameras and start managing my own home security environment.

Zoneminder runs as a guest on the virtualization hardware, but has access to a large amount of storage on the NAS box. This allows for weeks of video retention and also makes it possible to leverage the remote replication for disaster recovery/backup purposes. The cameras I picked are very simple Foscam units (yeah, I know – Chinese crap), but they support 720p, low-light recording, and – very importantly – Wifi connectivity. The only cabling required is a simple USB plug for power. Integration with Zoneminder is almost trivial, but fine-tuning the alerting is quite painful (read: you will have a lot of false positives through light/shadow changes, etc.).
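
Zoneminder also exposes a JSON API, which is handy for quick scripted checks. A heavily hedged sketch (hostname is hypothetical, and the authentication flow differs between Zoneminder versions, so this assumes API access is allowed from a trusted admin host):

    # Sketch: list configured Zoneminder monitors (cameras) via its API.
    # Hypothetical hostname; assumes API access from a trusted host.
    import requests

    r = requests.get("http://zoneminder.home.lan/zm/api/monitors.json")
    r.raise_for_status()
    for m in r.json()["monitors"]:
        mon = m["Monitor"]
        print(mon["Id"], mon["Name"], mon["Function"])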

Centralized Security

I have gone through a number of security solutions to provide proper protection of my resources. As a security professional, I do understand (at least some of) the risks that are prevalent in today’s environments. While many open source solutions (such as pfSense) provide adequate protection, they are often complex to set up and even more complex to maintain. The Sophos UTM solution (in the form of the free Home virtual appliance or as a box) is sufficient and cost-effective, so that’s what I got. However, there are plenty of other similar solutions (Fortinet comes to mind quickly) that may do the same thing. At its heart, the UTM is the edge firewall and core router, enabling all networking functions. I am using the Web GUI, which is reasonably easy to understand.

VPN

Being able to access my internal resources is critical to me: for one, the Owncloud service only runs internally, but administrative tasks (updates while traveling, remote desktop support for the family, etc.) also require direct access. As such, a simple yet effective VPN solution is very desirable for my setup.

To get started with this, I first need a stable way to address the firewall. This can obviously not be done through a residential ISP, since their willingness to assign static IPv4 addresses is rather limited, and I am really not interested in subscribing to overpriced SMB network packages that I ultimately do not need anyway. I played around with IPv6 routing through the Hurricane Electric infrastructure, but this is not really convenient or (for the most part) sufficient these days. As such, a simple subscription to a dynamic DNS service now provides me with the necessary stable DNS name: updates are performed on an hourly basis by the UTM and pushed out with a 60-second TTL to the DNS infrastructure. This is sufficient for my purposes.
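
The update protocol itself is trivial: most dynamic DNS providers still speak the original dyndns2 scheme, which is just an authenticated HTTP GET. A sketch (provider URL, hostname, and credentials are placeholders; the UTM does this for me automatically):

    # Sketch: a dyndns2-style dynamic DNS update -- one authenticated
    # HTTP GET. Provider URL, hostname, and credentials are placeholders.
    import requests

    r = requests.get(
        "https://members.example-dyndns.org/nic/update",
        params={"hostname": "myhome.example-dyndns.org",
                "myip": "203.0.113.17"},
        auth=("username", "password"),
        timeout=10,
    )
    # Typical responses: "good <ip>" (updated) or "nochg <ip>" (no change).
    print(r.status_code, r.text.strip())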

With an externally resolvable address it is now possible to enable common VPN technologies such as L2TP over IPSec or IPSec/IKEv2. The former has so far been the easiest to enable, since most endpoints (for me: iOS and MacOS) support it out of the box and are easy to configure – I really do not like adding special-sauce clients unless absolutely necessary. IPSec/IKEv2 is currently an area of interest, mainly since iOS supports Always-On VPN for that protocol alone.

Remote Connectivity

To get proper use out of my NAS replication capabilities, I decided to set up a really old laptop with a 2 TB drive as a destination for data replication from the main NAS. This machine receives the ZFS snapshots roughly every 6 hours, so even if my house burns to the ground, I have the most critical data replicated off-site on a reasonably safe system (again, without any of the public storage providers possibly being able to read my data – yay!).

This setup obviously requires a fairly stable connection between the weekend cottage and our home. After spending a lot of time with pfSense and various (moderately effective) VPN clients, this use case was one of the drivers that got me into the Sophos UTM solution. Sophos allows running the UTM in “Remote Ethernet Device (RED)” mode. Through their own discovery and rendezvous service it is possible to quickly find a remote RED system and then leverage Sophos’s internal IPSec system to create the connectivity. The biggest utility of this is ease of use, making it possible to create a permanent network route to a remote site in less than 10 minutes.

The stability of this connection is obviously mostly dependent on the availability of the residential ISPs on both sides. So far I see 99.0% availability or better on most days. This is sufficient to operate the ZFS replication, utilize the Owncloud WebDAV, and keep low-quality camera feeds running at all times. One thing to remember is that upload bandwidth is clearly limited for most residential cable setups. This is annoying, but (so far) not really a show stopper.

Protection and Monitoring

Another useful set of services the UTM provides is the insight I get into my own core network activity. Seeing the various external access attempts (very commonly via TELNET or UPnP/SSDP… really?) is kinda fun, but it is also quite interesting to see the many port scan attempts, especially by nasty companies such as Facebook.

The UTM also provides some very useful monitoring and traffic control tools: internal clients (identified by authenticated user, IP address, or hostname) can be monitored for the kinds of connections that they are making with the outside world. This is quite useful for web applications and sites that could be interesting to children, but not necessarily appropriate. Recording the metadata (including the volume and timing of, say, video traffic) is helpful for having meaningful discussions about responsible use of network services. A web application filter (backed by a frequently updated service database) enables me to further restrict use of inappropriate sites or services. Using the internal PKI to terminate client TLS traffic at the UTM, I can also inspect most (non-pinned) traffic as it enters or leaves the home enclave.

Beyond this, the UTM also provides some basic IDS/IPS capabilities, which are not that useful to me since I have basically no ports open to the outside world. While the UTM does not support PCAP-ing traffic, I think it is possible to create a TAP via bridging. This could then be fed into a network monitoring tool such as Security Onion, but I have not really gone down that path… yet.

Network Considerations

My core network design is as simple as possible. This is partly a consequence of how it evolved, and partly a desire to not spend a lot of money on special networking gear. The “core” switch that carries most user- and internet-facing traffic is a small 8-port NetGear box that – most importantly – supports 802.1q for virtualized network segmentation. Using VLANs (i.e. 802.1q-tagged packets) allows me to properly segment my network and protect the critical services appropriately; a small example of what tagging looks like from a guest’s perspective follows below. The core server network allows full connectivity between the different servers (both virtual hosts and guests) and the FreeNAS box, enabling a core service fabric. All end users are attached via wifi networks. Given that Gigabit networking is now a commodity, every system fully supports 1000 Mbps traffic, and the NetGear box is also capable of full-duplex connections to the core servers.
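
As an illustration, this is what bringing up an 802.1q-tagged subinterface looks like on a Linux guest, using the pyroute2 package (interface name, VLAN ID, and address are hypothetical; this is the programmatic equivalent of “ip link add … type vlan id 30”):

    # Sketch: create an 802.1q-tagged VLAN subinterface on a Linux guest
    # with pyroute2 (run as root). Names/IDs/addresses are hypothetical.
    from pyroute2 import IPRoute

    ip = IPRoute()
    parent = ip.link_lookup(ifname="eth0")[0]
    ip.link("add", ifname="eth0.30", kind="vlan", link=parent, vlan_id=30)
    vlan = ip.link_lookup(ifname="eth0.30")[0]
    ip.addr("add", index=vlan, address="192.168.30.10", prefixlen=24)
    ip.link("set", index=vlan, state="up")
    ip.close()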

The Wifi AP is also a Sophos box, since it integrates easily with the UTM and allows me to define up to 8 independent networks over a single AP. Combined with the VLAN setup, this allows me to operate wifi networks for my regular users, an admin network that allows full access to the core server network, an IoT network for home automation and security cameras, and finally a guest wifi network with a HotSpot portal. The latter eliminates the need to share your home wifi password with every guest who may come in for a few days and then leave, significantly improving the security of your shared secret. Ultimately, I will likely convert this to proper EAP-based authentication, but that is still some time off.
