ZFS Encrypted Backups

By hernil

This post assumes you use sanoid and/or syncoid in some way.

ZFS supports native encrypted datasets, which is neat. I don't really need or want to use them locally, though, as I don't consider the added risk of failure and the more complicated recovery process worth it just to safeguard against a potential family photo leak in the event of a break-in. There is one use case where the encryption comes in very handy though, and that is off-site backups to another ZFS target. See the brief discussion about it on the ZFS discourse that Jim Salter set up.

So, because I can take the hit storage-wise, I decided to set up the following pipeline:

dataset -> locally encrypted dataset -> remote encrypted dataset

This does result in double the space usage, and it's not much of a backup on its own since the copy lives on the same mirrored pool, but it lets me replicate the data off-site without ever exposing the encryption key to the remote host.

So without further ado, here are my pretty sparse notes on setting this up.

Encrypted dataset

First of all, we need an encrypted container dataset under which we can put the encrypted copy of the data.

 sudo zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase tank/backup/encrypted
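
Since the dataset uses keylocation=prompt with a passphrase, the key is not loaded automatically after a reboot or pool import; it has to be loaded before anything can be written under the container. A minimal sketch, using the dataset name from above:

 sudo zfs load-key tank/backup/encrypted
 sudo zfs mount tank/backup/encrypted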

Systemd service

syncoid-encrypted-photos.service looks like this:

[Unit]
Description=Runs syncoid 
Wants=syncoid-encrypted-photos.timer

[Service]
Type=oneshot
User=hernil
ExecStartPre=/usr/sbin/syncoid --source-bwlimit=80M --compress=none --no-sync-snap --no-privilege-elevation tank/photos tank/backup/encrypted/encrypted-photos
ExecStart=/usr/sbin/syncoid -r --no-sync-snap --sendoptions=w --target-bwlimit=4M --no-privilege-elevation --sshkey=/home/hernil/.ssh/id_ed25519 tank/backup/encrypted/encrypted-photos hernil@remote-host.com:tank/backup/hostname/encrypted-photos

[Install]
WantedBy=multi-user.target

Since our pipeline has two steps, we put each syncoid command in its own directive in the service declaration (ExecStartPre and ExecStart).

I put in the --source-bwlimit param since both datasets are on the same disk pool and I'm in no hurry, so there's no need to bottleneck other operations on the pool by going full throttle on that replication. There's also no need to compress a local transfer. Snapshots are managed by sanoid. And we followed the setup here to get this going without the need for privilege elevation.
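
For reference, that privilege delegation is done with zfs allow; the exact permission list depends on your setup, but for the user and datasets in this post it looks roughly like this (treat it as a sketch, not a canonical list):

 # local host: allow sending from the source and receiving into the encrypted container
 sudo zfs allow -u hernil send,snapshot,hold tank/photos
 sudo zfs allow -u hernil send,snapshot,hold,receive,create,mount tank/backup/encrypted
 # remote host: allow receiving into the backup target
 sudo zfs allow -u hernil receive,create,mount,rollback,destroy tank/backup/hostname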

Then we set --sendoptions=w (which maps to zfs send --raw) to send the data encrypted. This is what the docs have to say about the option:

For encrypted datasets, send data exactly as it exists on disk. This allows backups to be taken even if encryption keys are not currently loaded. The backup may then be received on an untrusted machine since that machine will not have the encryption keys to read the protected data or alter it without being detected. Upon being received, the dataset will have the same encryption keys as it did on the send side, although the keylocation property will be defaulted to prompt if not otherwise provided. For unencrypted datasets, this flag will be equivalent to -Lec. Note that if you do not use this flag for sending encrypted datasets, data will be sent unencrypted and may be re-encrypted with a different encryption key on the receiving system, which will disable the ability to do a raw send to that system for incrementals.
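
Under the hood, this is roughly equivalent to piping a raw zfs send over SSH yourself; a simplified sketch with a made-up snapshot name:

 # raw (-w) send of a hypothetical snapshot, received as-is (still encrypted) on the remote host
 zfs send -w tank/backup/encrypted/encrypted-photos@some-snapshot | ssh hernil@remote-host.com zfs receive tank/backup/hostname/encrypted-photos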

We tell syncoid which SSH key to use, and lastly, since our target has limited bandwidth and again we're not in a hurry, we cap the transfer with --target-bwlimit.
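
A quick sanity check on the remote host is to confirm that the received dataset is encrypted and that its key is not available there, something along the lines of:

 # on remote-host.com: expect keystatus=unavailable since the key never leaves the source
 zfs get encryption,keystatus,encryptionroot tank/backup/hostname/encrypted-photos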

Systemd timer

Then we have a syncoid-encrypted-photos.timer to run this on a schedule.

[Unit]
Description=Runs syncoid
Requires=syncoid-encrypted-photos.service

[Timer]
Unit=syncoid-encrypted-photos.service
OnCalendar=*-*-* *:50:00

[Install]
WantedBy=timers.target

Runs the replication every hour at ten to the hour.
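
Assuming both files live in /etc/systemd/system, the timer just needs to be enabled and started, for example:

 sudo systemctl daemon-reload
 sudo systemctl enable --now syncoid-encrypted-photos.timer
 # verify the schedule
 systemctl list-timers syncoid-encrypted-photos.timer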


Input or feedback to this content? Reply via email!