Building a Resilient Homelab Storage Solution with TrueNAS

Welcome, fellow homelab enthusiasts! If you're like me, you've probably spent countless hours building, tweaking, and perfecting your home infrastructure. But there's one aspect of the homelab that's absolutely critical: storage. In this post, I'm going to walk you through my personal storage solution, a setup that's powerful, resilient, and surprisingly accessible.
My goal is to show you that you don't need a massive budget or a data center to build a rock-solid storage foundation for your homelab. We'll explore how I use TrueNAS SCALE virtualized on Proxmox, with a few tricks up my sleeve to ensure my data is safe and sound.
The Stakes: This Is Not a Drill
Before we dive deep into the nitty-gritty of my setup, let's get one thing straight: this is not just a weekend proof-of-concept. This is a production system, and my real, irreplaceable data is on the line. My friends and family rely on the services I host—for free, of course!—and those services store critical data on my TrueNAS servers.
The stakes are high. You don't want to be in a situation where you have to explain to your partner why the family photos are suddenly gone, or why their Time Machine backup has vanished into thin air. We've all seen how data footprints are exploding; modern phones record 4K videos where a 30-second clip can run into hundreds of megabytes. Cloud storage gets expensive, fast, and the more your data grows, the more you pay.
The Philosophy: The 3-2-1 Backup Strategy and Why Redundancy is Not a Backup
Before we dive into the technical details, let's talk about the "why." For any serious data hoarder, the 3-2-1 backup strategy is non-negotiable. It's simple:
3 copies of your data.
2 different types of media.
1 copy off-site.
This strategy ensures that you're protected from a wide range of failure scenarios, from a single drive dying to a catastrophic event at your primary location.
It's also crucial to understand a fundamental concept: RAID arrays are fantastic for high availability, but they won't save you from file corruption, accidental deletion, or a ransomware attack. A true backup is a separate, versioned copy of your data that you can restore in case of disaster.
The Architecture: A Tale of Two Sites
My setup spans two physical locations, providing true off-site backup capabilities. Both sites run Proxmox as the hypervisor, with TrueNAS SCALE running in a virtual machine.
Primary Site: Darkshield
At my primary site, a server named "Darkshield" hosts the main TrueNAS instance. The magic here is PCIe passthrough. I've passed an entire Intel SATA controller directly to the TrueNAS VM. This gives TrueNAS raw, direct access to the drives, which is essential for ZFS to work its magic.
- Learn More: Proxmox PCIe Passthrough
ZFS Configuration on Darkshield
Here's how my storage is organized on the primary TrueNAS instance:
Pool 1 (RAIDZ1): This is my main storage pool, a 4-drive RAIDZ1 array. RAIDZ1 is similar to RAID 5; it can tolerate the failure of a single drive without data loss. This gives me a great balance of performance, storage capacity, and redundancy.
Pool 2 (Single 8TB HDD): This drive is dedicated to local backups of my most critical datasets. It's a low-RPM (5400) drive with Smart Spindown enabled, which means it's power-efficient and quiet. Having a local backup means I can restore data much faster than pulling it from the off-site location over the network.
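As a quick capacity sanity check: RAIDZ1 devotes roughly one drive's worth of space to parity, so a 4-drive vdev yields about three drives of usable space before ZFS overhead. A minimal sketch, assuming hypothetical 8 TB drives (the drive size is illustrative, not my actual hardware):

```python
def raidz_usable_tb(drives: int, drive_tb: float, parity: int = 1) -> float:
    """Rough usable capacity of a RAIDZ vdev: parity consumes ~`parity` drives.
    Ignores ZFS metadata and slop-space overhead, so treat this as an upper bound."""
    if drives <= parity:
        raise ValueError("need more drives than parity disks")
    return (drives - parity) * drive_tb

# Example: 4 x 8 TB in RAIDZ1 -> ~24 TB usable before overhead
print(raidz_usable_tb(4, 8.0))  # -> 24.0
```

The same function shows why wider vdevs are more space-efficient: at 8 drives, only one-eighth of raw capacity goes to parity, at the cost of longer resilver times.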
The Importance of Rapid Recovery
It's one thing to have an off-site backup; it's another thing entirely to restore from it. An off-site backup is your ultimate safety net against a disaster like a fire or theft, but it's often not a practical solution for a quick recovery. The bottleneck is almost always the internet connection.
Let's do some quick math. Imagine a worst-case scenario where you lose 20TB of data on your primary storage. You have a full backup at your off-site location, but you need to pull it back over a residential internet connection. While gigabit connections are becoming more common, sustained speeds can vary significantly due to network congestion and ISP throttling. For a realistic estimate, let's consider a common average sustained speed.
Data to Restore: 20 TB
Average Internet Speed: 300 Mbps (approximately 37.5 MB/s)
Calculation: 20 TB is 20,480,000 MB, so 20,480,000 MB / 37.5 MB/s = 546,133 seconds
Time to Restore: approximately 6.3 days of continuous downloading
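The arithmetic above is easy to script if you want to plug in your own numbers. A minimal sketch using the same figures (20 TB of data, 300 Mbps sustained):

```python
data_mb = 20_480_000          # 20 TB expressed in MB, as above
speed_mb_s = 300 / 8          # 300 Mbps is approximately 37.5 MB/s
seconds = data_mb / speed_mb_s
days = seconds / 86_400       # 86,400 seconds per day
print(f"{seconds:,.0f} s ≈ {days:.1f} days")  # -> 546,133 s ≈ 6.3 days
```

Swap in your own data size and measured sustained throughput to estimate your recovery window.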
This is why having a local backup is a game-changer. I can restore terabytes of data over my local gigabit network in a matter of hours, not weeks.
Secondary Site: StarkAI
My secondary site features a server named "StarkAI," which also runs a TrueNAS VM. Its primary purpose is to receive backups from Darkshield. The two sites are connected via an encrypted WireGuard VPN tunnel, ensuring that all data transferred between them is secure.
Nightly rsync jobs automatically copy critical datasets from Darkshield to StarkAI, giving me a complete, off-site backup.
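A nightly job of this shape boils down to a scheduled rsync push over the tunnel. A minimal sketch of how such a job could be assembled; the dataset paths and VPN hostname here are illustrative placeholders, not my actual configuration:

```python
import subprocess

def build_rsync_cmd(src: str, dest_host: str, dest_path: str) -> list[str]:
    """Assemble an rsync push command for a backup over the VPN.
    -a preserves permissions and timestamps, -z compresses over the wire,
    --delete mirrors deletions so the replica matches the source."""
    return ["rsync", "-az", "--delete", src, f"{dest_host}:{dest_path}"]

cmd = build_rsync_cmd("/mnt/tank/critical/", "starkai.vpn", "/mnt/backup/critical/")
print(" ".join(cmd))
# Uncomment to actually run it (requires rsync and SSH access to the remote):
# subprocess.run(cmd, check=True)
```

Note that `--delete` makes the replica a mirror, not a versioned archive, which is why snapshots on the receiving side still matter.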
ZFS for Humans
If you're new to ZFS, it can seem intimidating. But the core concepts are quite straightforward:
ZFS: It's a combined file system and logical volume manager. Think of it as a super-powered file system that handles everything from data integrity to snapshots and RAID-like functionality.
vdevs (Virtual Devices): These are the building blocks of a ZFS pool. A vdev can be a single drive, a mirror (like RAID 1), or a RAIDZ array.
Cache (L2ARC): An optional, fast SSD that ZFS can use to cache frequently read data, speeding up read performance.
Log (SLOG): An optional dedicated fast device (like an NVMe SSD) that holds the ZFS Intent Log (ZIL), speeding up synchronous writes.
The beauty of ZFS is its resilience. As a copy-on-write file system, it never overwrites data in place, and every block is checksummed, so silent corruption is detected (and, with redundancy, repaired) automatically. It also has built-in tools for creating snapshots and replicating data.
Service Integration: The "Why" of the Homelab
So, what do I do with all this storage? The possibilities are endless! I create different datasets in TrueNAS and expose them to my other services via SMB or NFS:
Immich: My self-hosted photo management solution.
The *Arr Stack: For all my media management needs.
Nextcloud: My personal cloud for files, contacts, and calendars.
Container Configs: A central location to store configurations for my Docker and Kubernetes containers.
Proxmox Backup Server: I even expose a dataset as a storage target for Proxmox Backup Server, so I can back up my VMs and containers.
Time Machine: My wife's MacBook backs up seamlessly to a dedicated dataset.
The best part? I don't have to worry about managing backups for each individual service. I just create a dataset, expose it, and TrueNAS handles the rest.
Maintenance and Accountability
With great power comes great responsibility. Running your own storage solution means you're in charge of keeping it healthy. I have regular scrubbing tasks scheduled on both TrueNAS instances to check for data integrity. And I take frequent snapshots, which are read-only copies of my datasets that I can roll back to in an instant.
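Snapshots are cheap, but they accumulate, so every periodic snapshot task pairs a schedule with a retention window. The pruning decision amounts to something like this toy sketch (the dates and 14-day window are made up for illustration):

```python
from datetime import date, timedelta

def snapshots_to_prune(snapshot_dates: list[date], keep_days: int, today: date) -> list[date]:
    """Return snapshot dates that have aged out of the retention window."""
    cutoff = today - timedelta(days=keep_days)
    return [d for d in snapshot_dates if d < cutoff]

today = date(2024, 6, 30)
snaps = [today - timedelta(days=n) for n in range(0, 20, 5)]  # 0, 5, 10, 15 days old
old = snapshots_to_prune(snaps, keep_days=14, today=today)
print(old)  # only the 15-day-old snapshot falls outside the 14-day window
```

TrueNAS runs this kind of logic for you on its own periodic snapshot tasks; the sketch just shows why a retention policy is part of every schedule.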
This setup has evolved beyond a simple proof of concept. It's now a "production" system for my digital life. If it goes down, I risk losing critical data. It's a sobering thought, but it's also a powerful motivator to do things right.
Conclusion: Why Build When You Can Buy?
So, why go to all this trouble when you could just buy a pre-built NAS from a company like Synology or QNAP? For me, it comes down to two things: control and hardware utilization.
A custom-built server allows me to run a hypervisor like Proxmox, which means I can run my storage solution alongside other VMs and containers. A pre-built NAS is often a closed box with limited (often inferior) hardware and software capabilities.
Building your own storage solution is a journey, not a destination. It's a chance to learn, to experiment, and to create something that's uniquely yours. If you're looking for a project that will challenge you and reward you in equal measure, I can't recommend it enough.



