
When you run a 16-server infrastructure, the scariest prospect is data loss. RAID survives disk failures, but it can't stop ransomware or accidental deletion — you also need a dedicated backup server that periodically replicates your data. Here's how we built an NFS cold backup server around a low-power CPU and two IronWolf 12TB drives.

At a glance: 24TB total raw capacity · 12TB usable (RAID 1) · ~15W average power draw · automatic backup daily at 2 AM

1. Why a Cold Backup Server

The core backup principle is the 3-2-1 strategy: 3 copies of data, on 2 different media types, with 1 copy offsite. An NFS cold backup server handles the critical "separate copy on separate media" requirement. Combined with a 3-tier storage strategy, it forms a comprehensive data protection system.

| Backup Method | Pros | Cons | Cost (12TB) |
| --- | --- | --- | --- |
| Commercial NAS | GUI management, app ecosystem | Expensive, vendor lock-in | $600–$1,100 |
| Cloud backup | Offsite, unlimited scaling | Monthly fees, recovery time | $25–$75/month |
| Self-built NFS | Full customization, low cost | Manual setup required | $300–$450 (one-time) |

Why NFS

For Linux-to-Linux file sharing, NFS has the lowest protocol overhead. It's faster than SMB/CIFS and simpler to configure than iSCSI. When backup and production servers share the same LAN, NFS is the optimal choice.

2. Hardware Configuration

| Component | Model | Notes |
| --- | --- | --- |
| CPU | Low-power (N100-class) | High performance unnecessary for backup |
| RAM | 32GB DDR4 | Generous for NFS caching |
| HDD | Seagate IronWolf 12TB x 2 | NAS/server-grade, designed for 24/7 |
| NIC | 2.5GbE x 2 ports | Business + backup traffic separation |
| RAID | Software RAID 1 (mdadm) | Data preserved if 1 disk fails |
| Power | Idle ~10W / backup ~20W | ~$2/month electricity |

Why IronWolf

IronWolf is Seagate's NAS/server HDD line. Firmware optimized for 24/7 operation, built-in vibration sensors, 180TB/year workload rating — significantly more durable than desktop drives in server environments. The 12TB model hits the price-per-TB sweet spot.

3. NFS Server Setup

RAID 1 Configuration

# Verify disks
lsblk
# sda: 12TB (IronWolf #1)
# sdb: 12TB (IronWolf #2)

# Create RAID 1 (mirror)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Create filesystem
sudo mkfs.ext4 /dev/md0

# Mount
sudo mkdir -p /backup
sudo mount /dev/md0 /backup

# Persist in /etc/fstab
echo '/dev/md0 /backup ext4 defaults 0 2' | sudo tee -a /etc/fstab

# Save RAID config so the array assembles at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u   # Debian/Ubuntu: bake the config into the initramfs
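
A mirror only protects you if you notice when it degrades. mdadm ships a monitor mode that can email alerts; a sketch, assuming a working local MTA and a placeholder address:

```shell
# Optional: have mdadm email alerts when the array degrades.
# The MAILADDR address is a placeholder; delivery requires a local MTA (e.g. postfix).
echo 'MAILADDR admin@example.com' | sudo tee -a /etc/mdadm/mdadm.conf

# The mdmonitor service (installed with mdadm) picks this up.
# --test sends one test alert per array so you can verify mail delivery:
sudo mdadm --monitor --scan --oneshot --test
```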

NFS Server Installation

# Install NFS server
sudo apt install nfs-kernel-server

# Create share directories
sudo mkdir -p /backup/server-{a,b,c}

# Configure /etc/exports — allow access only from the server subnet
# (add a matching line for server-b and server-c)
echo '/backup/server-a 10.x.10.0/24(rw,sync,no_subtree_check,root_squash)' | sudo tee -a /etc/exports

# Apply and start
sudo exportfs -ra
sudo systemctl enable --now nfs-kernel-server

# Verify shares
showmount -e localhost

4. Client Auto-Mount

# On each production server (client)
sudo apt install nfs-common

sudo mkdir -p /mnt/backup
sudo mount -t nfs 10.x.10.20:/backup/server-a /mnt/backup

# Verify
df -h /mnt/backup

# Persist in /etc/fstab
echo '10.x.10.20:/backup/server-a /mnt/backup nfs defaults,_netdev,soft,timeo=150 0 0' \
  | sudo tee -a /etc/fstab

# _netdev: mount after network is ready
# soft: timeout on NFS server down (hard = wait forever)
# timeo=150: 15-second timeout

soft vs hard mount

soft returns an error after timeout if the NFS server is unresponsive. hard waits indefinitely until the server responds. For backup purposes, soft is preferred — it prevents the production server from hanging if the backup server goes down.
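
If a client only touches the share during backups, systemd's automount is an alternative to a static mount: the share mounts on first access and unmounts when idle, so a down backup server isn't noticed until backup time. A sketch of the fstab line, using the standard `x-systemd.*` mount options (timeout value is an example):

```shell
# /etc/fstab — mount on first access, auto-unmount after 10 minutes idle
10.x.10.20:/backup/server-a /mnt/backup nfs noauto,x-systemd.automount,x-systemd.idle-timeout=600,soft,timeo=150 0 0
```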

5. rsync + cron Automated Backup

rsync performs incremental backups — only changed files are transferred. After the initial full copy, subsequent runs complete quickly.

Backup Script

#!/bin/bash
# /usr/local/bin/backup.sh

BACKUP_DIR="/mnt/backup"
LOG_FILE="/var/log/backup.log"

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE"; }

# Abort if the NFS share is not mounted — otherwise rsync would
# silently fill the local disk under the mount point
if ! mountpoint -q "$BACKUP_DIR"; then
    log "ERROR: NFS not mounted"
    exit 1
fi

# Back up critical directories (incremental; --delete mirrors source deletions)
rsync -avz --delete \
    --exclude='.cache' \
    --exclude='node_modules' \
    --exclude='__pycache__' \
    /home/ "$BACKUP_DIR/home/" >> "$LOG_FILE" 2>&1

rsync -avz --delete /etc/ "$BACKUP_DIR/etc/" >> "$LOG_FILE" 2>&1
rsync -avz --delete /var/lib/docker/volumes/ "$BACKUP_DIR/docker-volumes/" >> "$LOG_FILE" 2>&1

# Database dump + backup (-s: only ship the dump if it is non-empty)
pg_dumpall -U postgres > /tmp/db_backup.sql 2>/dev/null
if [ -s /tmp/db_backup.sql ]; then
    rsync -avz /tmp/db_backup.sql "$BACKUP_DIR/db/" >> "$LOG_FILE" 2>&1
    rm /tmp/db_backup.sql
fi

log "Backup completed"

cron Schedule

sudo chmod +x /usr/local/bin/backup.sh

# Add to crontab (daily at 2 AM)
sudo crontab -e
# Add: 0 2 * * * /usr/local/bin/backup.sh
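
Left alone, /var/log/backup.log grows forever. A logrotate fragment keeps it bounded; the retention here is an example:

```shell
# /etc/logrotate.d/backup — rotate weekly, keep 8 compressed rotations
/var/log/backup.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```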

rsync --delete Warning

--delete removes files from the backup that were deleted on the source. If you accidentally delete a file, the next backup run will remove it from the backup too. Consider adding daily snapshot directories or using the --backup option.

6. Dual NIC Traffic Isolation

Large backup transfers can saturate the business network. By equipping the backup server with two NICs, we dedicate one to business traffic and the other exclusively to backup.

| NIC | Purpose | Subnet | VLAN |
| --- | --- | --- | --- |
| eth0 (2.5GbE) | Business network | 10.x.10.0/24 | VLAN 10 |
| eth1 (2.5GbE) | Backup only | 10.x.50.0/24 | VLAN 50 |

# Backup server netplan (/etc/netplan/01-config.yaml)
network:
  version: 2
  ethernets:
    eth0:
      addresses: [10.x.10.20/24]
      routes:
        - to: default
          via: 10.x.10.1
    eth1:
      addresses: [10.x.50.20/24]
      # No default gateway on the backup NIC

# Mount NFS via backup-dedicated IP
sudo mount -t nfs 10.x.50.20:/backup/server-a /mnt/backup

Traffic Isolation Effect

Backup traffic flows over a dedicated NIC and subnet, so even multi-TB backups running at 2 AM leave the business network (VLAN 10) unaffected.

7. Backup Monitoring

HDD Health (SMART)

sudo apt install smartmontools
sudo smartctl -a /dev/sda

# Key attributes:
# Reallocated_Sector_Ct: reallocated sectors (must be 0)
# Current_Pending_Sector: pending bad sectors (must be 0)
# Temperature_Celsius: drive temp (under 40°C recommended)

# RAID status
cat /proc/mdstat
# md0 : active raid1 sda[0] sdb[1]
# [UU] ← both healthy
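
Rather than running smartctl by hand, the smartd daemon (part of smartmontools) can test and alert automatically. A sketch of /etc/smartd.conf; the email address is a placeholder and delivery needs a local MTA:

```shell
# /etc/smartd.conf — monitor both drives, email on failure,
# and run a short self-test daily at 03:00 (smartd schedule regex)
/dev/sda -a -m admin@example.com -s (S/../.././03)
/dev/sdb -a -m admin@example.com -s (S/../.././03)

# Then reload the daemon:
# sudo systemctl restart smartd
```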

Backup Verification Script

#!/bin/bash
# /usr/local/bin/backup-check.sh
# Runs on the backup server (adjust LOG if backup.sh logs elsewhere)

LOG="/var/log/backup.log"
TODAY=$(date '+%Y-%m-%d')

# Check today's backup completion
if grep -q "$TODAY.*Backup completed" "$LOG"; then
    echo "OK: Backup completed"
else
    echo "WARNING: Today's backup not completed!"
fi

# Disk usage check (warn above 85%)
USAGE=$(df /backup | tail -1 | awk '{print $5}' | tr -d '%')
if [ "$USAGE" -gt 85 ]; then
    echo "WARNING: Backup disk at ${USAGE}%"
fi

# RAID status check ([UU] = both mirror members healthy)
if ! grep -q "\[UU\]" /proc/mdstat; then
    echo "CRITICAL: RAID disk failure detected!"
fi
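
To actually see these warnings, the check script can run from cron, which mails any output. A sketch of root crontab entries — the MAILTO address is a placeholder, and filtering the routine "OK" line means mail arrives only when something is wrong:

```shell
# Example root crontab entries; MAILTO address is a placeholder.
MAILTO=admin@example.com
# Run after the 2 AM backups have had time to finish;
# suppress the routine OK line so only warnings are mailed
30 8 * * * /usr/local/bin/backup-check.sh | grep -v '^OK:'
```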

Unverified Backups Are Worthless

"We had backups but couldn't restore" is the worst-case scenario. Run quarterly restore tests — even pulling a few files from backup and comparing them to the originals is enough to validate your recovery process.
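
A minimal restore test can be scripted: copy a sample directory back out of the backup and compare it byte-for-byte against the source. The paths below are examples:

```shell
#!/bin/bash
# Hypothetical restore spot-check — SAMPLE and BACKUP_COPY are example paths.
SAMPLE="/etc"
BACKUP_COPY="/mnt/backup/etc"
RESTORE_DIR=$(mktemp -d)

# Pull files back out of the backup, as a real restore would
cp -a "$BACKUP_COPY/." "$RESTORE_DIR/"

# diff -r exits non-zero if any file differs or is missing
if diff -r "$SAMPLE" "$RESTORE_DIR" > /dev/null; then
    echo "Restore test OK: backup matches source"
else
    echo "Restore test FAILED: investigate before you need this backup"
fi
rm -rf "$RESTORE_DIR"
```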

Summary

NFS Cold Backup Server Checklist

  • Low-power CPU + 32GB RAM + IronWolf 12TB x 2 in RAID 1
  • mdadm software RAID 1 — data preserved on single disk failure
  • NFS server with /etc/exports configured for server subnet only
  • Client auto-mount via /etc/fstab with soft option
  • rsync + cron for daily automated incremental backups
  • Dual NIC to isolate backup traffic from business network
  • SMART monitoring with smartctl for proactive disk replacement
  • Quarterly restore tests to validate actual recovery capability

Data backup is like insurance — it feels like an expense until disaster strikes, then it becomes the most valuable investment you've ever made. Two IronWolf 12TB drives and a low-power CPU give you a 12TB backup system that runs on roughly $2/month in electricity.

Based on real-world experience operating an NFS backup server in an office environment. IP addresses and directory paths are examples and must be adapted to your environment. RAID does not replace backups — both are necessary independently. Non-commercial sharing is welcome. For commercial use, please contact us.

Need a Data Backup System?

Treeru handles NFS/NAS backup server builds, automated backup configuration, and disaster recovery planning. Protect your critical data.

Get Backup Consultation