Complete Proxmox Backup Server Setup Guide: From Zero to Production-Ready Backups
For the best experience, ensure you have completed the “First Boot” guide.
Step 1: System Optimizations
Before starting, apply a few system optimizations that are generally accepted as good practice for PBS.
Reduce swap usage for better performance (PBS should use RAM):
echo "vm.swappiness=10" >> /etc/sysctl.conf
Disable IPv6 if your network doesn’t use it:
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
Apply the settings immediately:
sysctl -p
Step 2: Storage Setup
Understanding Your Storage Options
PBS requires storage for two purposes:
Operating System and PBS software (~32GB minimum)
Backup datastore (where your actual backups are stored)
Option 1: Single Disk System
Using one SSD for both OS and backups.
Minimum Recommended Size: 256GB SSD (absolute minimum 128GB)
Pros:
Simpler setup
Less hardware required
Lower cost
Good for small deployments (5-10 VMs)
Cons:
OS disk failure means losing backups too
OS operations compete with backup I/O
Can’t easily expand storage
Harder to migrate to new hardware
Option 2: Dual Disk System
Separate disks for OS and backups (recommended).
Minimum Recommended Sizes:
OS Disk: 64-128GB SSD
Backup Disk: 500GB+ (depends on backup needs)
Pros:
OS can be reinstalled without affecting backups
Better performance (isolated I/O)
Easy to add/upgrade backup disk
Can move backup disk to new server
Follows backup best practices
Cons:
Requires additional hardware
Slightly more complex setup
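To weigh these options against your needs, it helps to sketch the expected datastore usage. The figures below (guest count, used space per guest, deduplication factor) are hypothetical examples; PBS deduplication and compression commonly shrink raw data by roughly 2-3x across similar guests:

```shell
# Back-of-envelope datastore sizing; all figures are hypothetical examples
GUESTS=10        # number of VMs/containers to back up
USED_GB=30       # average used (not provisioned) disk space per guest
DEDUP=3          # assumed dedup + compression factor across similar guests

echo "Estimated steady-state usage: ~$(( GUESTS * USED_GB / DEDUP )) GB"
```

Add headroom on top of this estimate for retention churn and growth before choosing a disk size.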
Single Disk Setups
For a single-disk setup, simply create a directory and add it as a datastore.
mkdir -p /mnt/datastore
Then add it as a datastore:
proxmox-backup-manager datastore create main /mnt/datastore
Alternatively, you could create a dedicated partition, but that adds complexity without major advantages.
Dedicated Backup Disk
Identify Your Disks
Whether you are using a single disk or two, it is important to understand how your disks are set up. The best way is to list the block devices on your host:
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT,MODEL
In my case, I have a dedicated OS disk plus a dedicated backup disk, and my output looks like:
NAME SIZE TYPE FSTYPE MOUNTPOINT MODEL
nvme1n1 476.9G disk TWSC TSC3AN512E6-F2T60S
├─nvme1n1p1 1007K part
├─nvme1n1p2 1G part vfat /boot/efi
└─nvme1n1p3 475.9G part LVM2_member
├─pbs-swap 8G lvm swap [SWAP]
└─pbs-root 451.9G lvm ext4 /
nvme0n1 3.7T disk TEAM TM8FP4004T
From this I can see that my operating system is on nvme1n1. There are a few indicators: the /boot/efi mount point is on partition 2 (p2) of nvme1n1, and the LVM physical volume is on partition 3 (p3).
The disk I will be using is the 3.7T nvme0n1.
File System
This guide will use ext4:
Mature, stable, well-tested
Good performance for PBS workloads
Simple recovery if issues occur
No additional RAM requirements
The first step is to wipe the existing filesystem signatures from the disk. Warning: this is destructive.
I have intentionally omitted the device name to prevent accidental copy-pasting of my device. Make sure to fill in your own device:
wipefs -a /dev/<YOUR DEVICE>
Create a single partition using the entire disk:
fdisk /dev/<YOUR DEVICE>
Then in fdisk:
Press ‘g’ - create new GPT partition table
Press ‘n’ - new partition
Press Enter to create partition 1
Press Enter for the default first sector
Press Enter for the default last sector (uses whole disk)
Press ‘w’ - write and exit
You can verify that the new partition was created with:
lsblk | grep <YOUR DEVICE>
Format the partition (make sure to reference the partition, like nvme0n1p1, not the whole disk). I add -m 0 to set the reserved space to 0, as it is not needed here:
mkfs.ext4 -m 0 -T largefile -E lazy_itable_init=0,lazy_journal_init=0 /dev/<YOUR DEVICE><YOUR PARTITION>
-m 0 = no reserved space
-T largefile = optimize for fewer, larger files (PBS chunks)
-E lazy_itable_init=0 = initialize the inode tables now (better performance later)
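If you want to see what these flags do before touching a real disk, you can run the same mkfs.ext4 invocation against a throwaway disk image (the file path here is arbitrary) and inspect the result with tune2fs:

```shell
# Practice run on a disk image instead of a real device
truncate -s 128M /tmp/pbs-mkfs-test.img
mkfs.ext4 -q -F -m 0 -T largefile -E lazy_itable_init=0,lazy_journal_init=0 /tmp/pbs-mkfs-test.img

# Reserved block count should be 0 because of -m 0
tune2fs -l /tmp/pbs-mkfs-test.img | grep "Reserved block count"
rm /tmp/pbs-mkfs-test.img
```

The -F flag is only needed because the target is a regular file rather than a block device; omit it when formatting the real partition.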
Create the mount point and mount the partition:
mkdir -p /mnt/datastore
mount /dev/<YOUR DEVICE><YOUR PARTITION> /mnt/datastore
Get the UUID of the partition:
blkid -o value -s UUID /dev/<YOUR DEVICE><YOUR PARTITION>
Add a permanent mount entry to fstab (replace UUID_HERE with the output from above):
echo "UUID=UUID_HERE /mnt/datastore ext4 defaults,noatime 0 2" >> /etc/fstab
Test the mount configuration. First, unmount the drive:
umount /mnt/datastore
Then mount it via the fstab entry:
mount -a
Verify the mount:
df -h /mnt/datastore
After mounting your storage, create the PBS datastore:
proxmox-backup-manager datastore create main /mnt/datastore
Step 3: Configure PBS
It is always a good idea to optimize when jobs run; here is the schedule I am following:
Optimized Job Schedule:
21:30 - Prune (frees space before backup)
22:00 - Backup (all nodes at once; a fast network can handle it)
01:00 - Verification (after backups complete)
02:00 - GC on Sundays only (weekly is sufficient)
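The schedule values used in this guide follow PBS's systemd-style calendar event syntax. A few illustrative examples (not all used below):

```
21:30        every day at 21:30
sun 02:00    every Sunday at 02:00
daily        every day at 00:00
*:0/30       every 30 minutes
```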
Homelab-Optimized Retention:
3 most recent (quick recovery)
7 daily (one week)
4 weekly (one month)
2 monthly (two months)
0 yearly (unnecessary for homelab)
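With these keep values, you can compute a rough upper bound on how many snapshots are retained per guest. The categories overlap (a daily backup may also count as the weekly), so the real number is usually lower:

```shell
# Upper bound on retained snapshots per guest for this retention policy
KEEP_LAST=3; KEEP_DAILY=7; KEEP_WEEKLY=4; KEEP_MONTHLY=2
echo "At most $(( KEEP_LAST + KEEP_DAILY + KEEP_WEEKLY + KEEP_MONTHLY )) snapshots per guest"
```

Multiply this by your guest count and per-backup chunk growth to sanity-check datastore capacity.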
Configure Pruning
On PBS server - Set retention policy and schedule for automatic pruning:
proxmox-backup-manager prune-job create daily-prune \
--store main \
--schedule "21:30" \
--keep-last 3 \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 2
Homelab-optimized retention:
--keep-last 3: Keep the 3 most recent (for quick "oops" recovery)
--keep-daily 7: Keep one per day for a week (the most common recovery window)
--keep-weekly 4: Keep weekly backups for a month (good balance)
--keep-monthly 2: Two monthly backups (sufficient for a homelab)
--keep-yearly: Omitted, as we will not keep yearly backups
--schedule "21:30": Run 30 minutes before backups to free space
Configure Garbage Collection
On PBS server - Schedule garbage collection to reclaim space weekly:
proxmox-backup-manager datastore update main \
--gc-schedule "sun 02:00"
Configure Verification
proxmox-backup-manager verify-job create verify-main \
--store main \
--schedule "01:00" \
--ignore-verified true \
--outdated-after 30
verify-main: Job name
--store main: Datastore to verify
--schedule: Run daily at 01:00 (after backups complete)
--ignore-verified: Skip recently verified backups
--outdated-after 30: Re-verify backups older than 30 days
Create Backup User
On PBS server - Create a dedicated user for backup operations:
proxmox-backup-manager user create backup@pbs --password <YOUR PASSWORD>
On PBS server - Generate an API token that will be used later (copy this):
proxmox-backup-manager user generate-token backup@pbs api-token
On PBS server - Add the DatastoreBackup role to the user:
proxmox-backup-manager acl update /datastore/main DatastoreBackup \
--auth-id 'backup@pbs'
On PBS server - Grant backup permissions to the API token (write-only for security):
proxmox-backup-manager acl update /datastore/main DatastoreBackup \
--auth-id 'backup@pbs!api-token'
Note: The DatastoreBackup role allows creating backups but not deleting them - this protects against a compromised node deleting existing backups.
Note: You MUST add the DatastoreBackup role to both the user and the API token.
Step 4: Connect Proxmox VE
I am running a three-node cluster and setting my backups at the datacenter level. Another option for this type of setup is creating a namespace for each node and configuring backups individually per node. This is useful if you want to apply different backup schedules depending on the VMs a node hosts.
On PBS server - Display the PBS certificate fingerprint needed for PVE (copy this):
proxmox-backup-manager cert info | grep Fingerprint
On ANY single PVE node (e.g., PVE0) - This command adds PBS storage for the entire cluster:
pvesm add pbs pbs-backup \
--server 192.168.1.100 \
--username 'backup@pbs!api-token' \
--password 'SECRET-TOKEN-STRING' \
--datastore main \
--fingerprint 'XX:XX:XX:XX:...'
You only run this ONCE on ONE node - it automatically becomes available on all cluster nodes!
Parameters explained:
pbs-backup: Storage name in PVE (you can choose any name)
--server: PBS server IP address
--username: The PBS API token created earlier
--datastore: PBS datastore name (we created "main")
--fingerprint: Certificate fingerprint from PBS
Datacenter-Level Backup (Recommended)
On ANY single PVE node - Create one backup job for all VMs on all nodes:
pvesh create /cluster/backup \
-id datacenter-backup \
-schedule "22:00" \
-storage pbs-backup \
-mode snapshot \
-all 1 \
-enabled 1
This single job will:
Run every night at 22:00 (30 minutes after pruning at 21:30)
Backup ALL VMs and containers on ALL nodes in the cluster
Store everything in PBS’s main datastore
Show up in every node’s web UI under Datacenter → Backup
Parameters explained:
-id datacenter-backup: Job name (choose any name)
-schedule "22:00": When to run (optimized timing)
-storage pbs-backup: The PBS storage we added earlier
-mode snapshot: Backup mode (no VM downtime)
-all 1: Backup all VMs and containers on all nodes
-enabled 1: The job is active
Note: The backup feature in Proxmox is extremely versatile. You can set backups at the datacenter level, the node level, and even for specific VMs. Depending on the criticality of your VMs and your network/hardware limitations, it may make more sense to back up only critical VMs.

