Discussion: zpool geli encryption question
void
2023-10-15 13:39:16 UTC
A machine periodically backs up volume-backed bhyve VMs like so:

# zfs send ssdzfs/fbsd140R | gzip -c > /vol-backups/$(date '+%Y.%m.%d_%H:%M').fbsd140R.gz
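(Strictly speaking, zfs send requires a snapshot, so presumably a snapshot of the volume is what's actually piped here. For completeness, restoring such an archive would be the reverse pipeline; a sketch, with the filename and target dataset name hypothetical:

# gunzip -c /vol-backups/2023.10.15_14:00.fbsd140R.gz | zfs receive ssdzfs/fbsd140R-restored
)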

This VM uses ZFS internally, with geli encryption of both the filesystem and swap.

The same backup routine applies to an OpenBSD VM, which uses its own form of filesystem encryption.

Both volumes are 64 GB in size. On the host, both volumes use lz4 compression.

Surprisingly (to me at least), the FreeBSD backup results in a smaller archive, while the OpenBSD one results in an archive slightly larger than its source.

I was expecting both archives to be slightly larger than their sources, because encrypted data is incompressible.
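(As a quick illustration: ciphertext is statistically random, and gzipping random data yields output slightly larger than the input, while highly redundant data collapses to almost nothing:

# dd if=/dev/random bs=1m count=64 | gzip -c | wc -c
# dd if=/dev/zero bs=1m count=64 | gzip -c | wc -c

The first prints a number slightly above 67108864 bytes; the second, a tiny fraction of that.)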

The FreeBSD archive is 19 GB; the OpenBSD one is 65 GB. Why is this?
Alan Somers
2023-10-15 14:17:57 UTC
Post by void
# zfs send ssdzfs/fbsd140R | gzip -c > /vol-backups/$(date '+%Y.%m.%d_%H:%M').fbsd140R.gz
This VM uses ZFS internally, with geli encryption of both the filesystem and swap.
The same backup routine applies to an OpenBSD VM, which uses its own form of filesystem encryption.
Both volumes are 64 GB in size. On the host, both volumes use lz4 compression.
Surprisingly (to me at least), the FreeBSD backup results in a smaller archive, while the OpenBSD one results in an archive slightly larger than its source.
I was expecting both archives to be slightly larger than their sources, because encrypted data is incompressible.
The FreeBSD archive is 19 GB; the OpenBSD one is 65 GB. Why is this?
How much of the FreeBSD VM's disk is actually in use? Maybe you are using TRIM with FreeBSD, which punches holes in the host's ZFS storage. That would explain why compression seems to save space, even though the data is encrypted.
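A quick way to check on the host (a sketch; the dataset name is taken from your original post) is to compare logical vs. physical usage and the compression ratio:

# zfs get used,logicalused,referenced,compressratio ssdzfs/fbsd140R

If referenced is far below the 64 GB volume size, most of the volume's blocks have been freed (or never written) on the host, and a compressratio near 1.00x would confirm that the ciphertext itself isn't compressing.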
-Alan


void
2023-10-15 16:39:22 UTC
Post by Alan Somers
How much of the FreeBSD VM's disk is actually in use?
(the example below is from another VM instance; same observation)

from the host:

NAME USED AVAIL REFER MOUNTPOINT
ssdzfs/fbsd140Rv1 97.5G 309G 21.3G -

within the booted VM:

NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
zroot 74.8G 9.97G 0B 96K 0B 9.97G
zroot/ROOT 74.8G 4.61G 0B 96K 0B 4.61G
zroot/ROOT/default 74.8G 4.61G 0B 4.61G 0B 0B
zroot/home 74.8G 59.6M 0B 59.6M 0B 0B
zroot/tmp 74.8G 120K 0B 120K 0B 0B
zroot/usr 74.8G 5.28G 0B 96K 0B 5.28G
zroot/usr/ports 74.8G 5.28G 0B 5.28G 0B 0B
zroot/usr/src 74.8G 96K 0B 96K 0B 0B
zroot/var 74.8G 1.17M 0B 96K 0B 1.08M
zroot/var/audit 74.8G 96K 0B 96K 0B 0B
zroot/var/crash 74.8G 96K 0B 96K 0B 0B
zroot/var/log 74.8G 564K 0B 564K 0B 0B
zroot/var/mail 74.8G 252K 0B 252K 0B 0B
zroot/var/tmp 74.8G 96K 0B 96K 0B 0B

gzipped archive:

-rw-r--r-- 1 root wheel 21G 15 Oct 16:39 2023.10.15_15:57.fbsd140Rv1.gz
Post by Alan Somers
Maybe you are using TRIM with FreeBSD, which punches holes in the host's ZFS
storage.
On the bhyve host (14.0-BETA3 #0 releng/14.0-n265111)

vfs.zfs.vdev.trim_min_active: 1
vfs.zfs.vdev.trim_max_active: 2
vfs.zfs.trim.queue_limit: 10
vfs.zfs.trim.txg_batch: 32
vfs.zfs.trim.metaslab_skip: 0
vfs.zfs.trim.extent_bytes_min: 32768
vfs.zfs.trim.extent_bytes_max: 134217728
vfs.zfs.l2arc.trim_ahead: 0
vfs.ffs.dotrimcons: 1

Does this mean TRIM is enabled and active on the host? I didn't set it; maybe it was set automatically because ZFS knows the hardware is an SSD?
Post by Alan Somers
That would explain why compression seems to save space, even
though the data is encrypted.
That's really smart.

TYVM for the explainer.
Alan Somers
2023-10-15 16:43:23 UTC
Post by void
Post by Alan Somers
How much of the FreeBSD VM's disk is actually in use?
(the example below is from another VM instance; same observation)
NAME USED AVAIL REFER MOUNTPOINT
ssdzfs/fbsd140Rv1 97.5G 309G 21.3G -
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
zroot 74.8G 9.97G 0B 96K 0B 9.97G
zroot/ROOT 74.8G 4.61G 0B 96K 0B 4.61G
zroot/ROOT/default 74.8G 4.61G 0B 4.61G 0B 0B
zroot/home 74.8G 59.6M 0B 59.6M 0B 0B
zroot/tmp 74.8G 120K 0B 120K 0B 0B
zroot/usr 74.8G 5.28G 0B 96K 0B 5.28G
zroot/usr/ports 74.8G 5.28G 0B 5.28G 0B 0B
zroot/usr/src 74.8G 96K 0B 96K 0B 0B
zroot/var 74.8G 1.17M 0B 96K 0B 1.08M
zroot/var/audit 74.8G 96K 0B 96K 0B 0B
zroot/var/crash 74.8G 96K 0B 96K 0B 0B
zroot/var/log 74.8G 564K 0B 564K 0B 0B
zroot/var/mail 74.8G 252K 0B 252K 0B 0B
zroot/var/tmp 74.8G 96K 0B 96K 0B 0B
-rw-r--r-- 1 root wheel 21G 15 Oct 16:39 2023.10.15_15:57.fbsd140Rv1.gz
Post by Alan Somers
Maybe you are using TRIM with FreeBSD, which punches holes in the host's ZFS
storage.
On the bhyve host (14.0-BETA3 #0 releng/14.0-n265111)
vfs.zfs.vdev.trim_min_active: 1
vfs.zfs.vdev.trim_max_active: 2
vfs.zfs.trim.queue_limit: 10
vfs.zfs.trim.txg_batch: 32
vfs.zfs.trim.metaslab_skip: 0
vfs.zfs.trim.extent_bytes_min: 32768
vfs.zfs.trim.extent_bytes_max: 134217728
vfs.zfs.l2arc.trim_ahead: 0
vfs.ffs.dotrimcons: 1
Does this mean TRIM is enabled and active on the host? I didn't set it; maybe it was set automatically because ZFS knows the hardware is an SSD?
Within the VM, run "zpool get autotrim zroot" to see whether it's set. You can also trim manually with "zpool trim zroot" if you don't want to use the autotrim setting.
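On the host side, the equivalent checks would be (a sketch; the pool name ssdzfs is inferred from the dataset paths above):

# zpool get autotrim ssdzfs
# zpool status -t ssdzfs

The -t flag shows per-vdev TRIM status, including when each device was last trimmed.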

Note that even without TRIM, it's possible that there are LBAs which the VM has simply never written to. That could also explain the low space usage on the host.
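Either way, zfs send only transmits allocated blocks, so the archive tracks what the volume actually references rather than its nominal size. You can see this on the host (dataset name from your earlier output):

# zfs get volsize,referenced,logicalreferenced ssdzfs/fbsd140Rv1

Your earlier listing showed the volume referencing 21.3G, which lines up neatly with the 21G gzipped archive.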
Post by void
Post by Alan Somers
That would explain why compression seems to save space, even
though the data is encrypted.
That's really smart.
TYVM for the explainer.