Daemon Security Blog RSS Feed

[12/29/2014] If you upgraded to FreeBSD 10.1 from 10.0 with ZFS, make sure you upgrade your zpools

Depending on how you perform upgrades (freebsd-update or building from source), you may be interested in the ZFS pool features that were added in the FreeBSD 10.1 release. The following feature flags were added with the latest stable release of FreeBSD:

spacemap_histogram
This feature allows ZFS to maintain more information about how free space is organized within the pool.

enabled_txg
Once this feature is enabled ZFS records the transaction group number in which new features are enabled.

hole_birth
This feature improves performance of incremental sends (zfs send -i) and receives for objects with many holes. The most common case of hole-filled objects is zvols.

extensible_dataset
This feature allows more flexible use of internal ZFS data structures, and exists for other features to depend on.

embedded_data
This feature improves the performance and compression ratio of highly-compressible blocks. Blocks whose contents can compress to 112 bytes or smaller can take advantage of this feature.

bookmarks
This feature enables use of the zfs bookmark subcommand.

filesystem_limits
This feature enables filesystem and snapshot limits.

You can validate whether your zpool can be upgraded by running "zpool status" and observing the following output:
status: Some supported features are not enabled on the pool. The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done, the pool may no longer be accessible by software that does not support the features. See zpool-features(7) for details.
You can view zpool-features(7) at the FreeBSD site: https://www.freebsd.org/cgi/man.cgi?query=zpool-features&sektion=7&manpath=FreeBSD+10.1-stable
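If you want to see which feature flags are still disabled on a particular pool before upgrading, the pool properties can be filtered for them (a quick sketch; substitute your own pool name for bootpool):
# zpool get all bootpool | grep feature@
Any feature reported as "disabled" will be switched to "enabled" when you run zpool upgrade.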
It is important to note that any software that does not support these features may have issues once you run the upgrade. If you are certain that you will not have any issues, you can upgrade your zpools by running the following commands (in this example, I am upgrading my bootpool):
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
(Updating the boot code was required before I could run zpool upgrade.)
# zpool upgrade bootpool
This system supports ZFS pool feature flags.

Enabled the following features on 'bootpool':
spacemap_histogram
enabled_txg
hole_birth
extensible_dataset
embedded_data
bookmarks
filesystem_limits

Bookmarks and filesystem_limits will be useful features for managing your ZFS datasets.
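As a rough sketch of what these two features enable (all dataset names here are illustrative), a bookmark can serve as the source of an incremental send even after the original snapshot is destroyed, and filesystem_limit caps how many child filesystems a dataset may contain:
# zfs bookmark tank/data@monday tank/data#monday
# zfs destroy tank/data@monday
# zfs send -i tank/data#monday tank/data@tuesday | zfs recv backup/data
# zfs set filesystem_limit=50 tank/projects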




[09/17/2014] ZFS Corruption: Postmortem

Are you performing backups of your filesystems? A recent blog post described the process of using remote snapshots with ZFS to ensure data is backed up. This post describes an incident where data was almost lost on a ZFS filesystem due to a corrupted pool.

A VM running within VirtualBox had only a single virtual disk with ZFS as its filesystem. One day, while the VM was in use, the power went off on the host operating system. ZFS, with its copy-on-write functionality, should be resistant to this type of sudden power loss, but in this case something happened to the virtual disk provided by VirtualBox. One thing that can cause serious issues for any filesystem is a hardware fault (though technically this was a software fault). When trying to boot up, FreeBSD was unable to mount root on ZFS. If this happens, you will be dropped to a boot prompt:
mountroot>
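For reference, the mountroot> prompt accepts a question mark to list the disk devices the kernel can see, and a filesystem specification to retry the mount by hand (the dataset name below is only illustrative):
mountroot> ?
mountroot> zfs:tank/root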
Being dropped to this prompt pointed to an issue with the ZFS pool, which prevented FreeBSD from mounting it on startup. The first step was to grab a FreeBSD 10 disc, boot it, select "Live CD", and log in as root with no password. Once you are logged into the Live CD, run the following command to list the available storage pools:
# zpool import
For a default ZFS install, this should show the "bootpool" in good status. Because encryption was used (Root on ZFS with GELI Encryption), the bootpool is required in order to mount the corrupted pool. Run the following commands to create a mount point for the bootpool and import it:
# mkdir -p /tmp/bootpool
# zpool import -fR /tmp/bootpool bootpool
Run the following to decrypt the GELI partition and allow access to the pool (in this example, the /dev/ada0p4 device):
# geli attach -k /tmp/bootpool/bootpool/boot/encryption.key /dev/ada0p4
Enter passphrase:
# zpool import
There may be some GEOM_ELI console messages that appear, but the damaged pool should be listed. However, the status for the pool may be DEGRADED or worse:
cannot import 'tank': I/O error
Destroy and re-create the pool from
a backup source.
At this point, try to import the corrupted pool (assume going forward that the pool is named tank):
# mkdir -p /tmp/tank
# zpool import -f -R /tmp/tank tank
If this operation fails, you may see the following error:
Solaris: WARNING: can't open objset for tank/tmp
cannot import 'tank': I/O error
Destroy and re-create the pool from
a backup source.
In searching for a solution, there is another interesting flag for importing that will try to fix errors in the dataset, discarding transactions as it rolls backward. (Note: This will only attempt to get back non-corrupted data. It is worthwhile only if you do not have a clean backup of your datasets.)
# zpool import -fFX -R /tmp/tank tank
With roughly 150GB of storage, the import with the "-X" flag took several hours to complete. However, once it runs its course, you should be able to scrub the pool and mount/get access to the non-corrupted data:
# zpool scrub tank
The pool may be ONLINE, but will display output similar to the following when files are corrupted:
errors: Permanent errors have been detected in the following files:
/important/1.txt
/important/will.txt
The best approach is to use one of the many examples for keeping copies of snapshots on a separate device, removable media, or offsite. This ensures that no matter what happens, you have a recent copy of your data that can be imported and restored.
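As one sketch of the removable-media approach (the device path and pool names are illustrative), a recursive snapshot can be streamed into a compressed file on a USB drive and later restored with zfs receive:
# zfs snapshot -r tank@offsite
# zfs send -R tank@offsite | gzip > /mnt/usb/tank-offsite.zfs.gz
# gunzip -c /mnt/usb/tank-offsite.zfs.gz | zfs recv -F newtank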




[08/05/2014] Simple ZFS Backup Script

ZFS is a powerful filesystem that helps to maintain integrity by avoiding data corruption. A useful feature of ZFS is its ability to clone filesystems. Creating snapshots allows for filesystems to be cloned and restored if anything happens to the original data. Going beyond this is the ability to maintain incremental changes between snapshots. There are a number of scripts available that set up a similar backup system, but the idea here is to maintain a current dataset, with the ability to restore from two previous backups.

The first step is to set up a backup system or backup drive to use for the ZFS snapshots. In this setup, there is a separate remote FreeBSD system where the snapshots will be stored. This remote system has an encrypted ZFS filesystem (AES XTS with geli on boot), which provides a secure backup of the data. The root account on the local system is set up with an SSH key, and this key is deployed to the remote system:
# ssh-copy-id root@remote
Note: ssh-copy-id is available by default in FreeBSD 10. For older versions of FreeBSD, the port can be installed from /usr/ports/security/ssh-copy-id
Now the root account can run the script without a password over a secure connection. If you do not wish to use the root account, the steps here can be applied to another user account on the remote system with permissions to perform ZFS operations. See the ZFS man page for more details about permissions.
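A sketch of what that delegation could look like with zfs allow (the user name "backup" and the dataset are only examples):
# zfs allow backup create,mount,receive,destroy,rename,snapshot zroot/homeback
The remote end of the send/receive pipeline below could then run as that user instead of root.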
In this test setup, the remote machine has the same size hard disk as the local system. The first operation is to snapshot the filesystem and use zfs send to replicate the snapshot over to the remote system.
# zfs snapshot -r tank/usr/home@today
# zfs send -R tank/usr/home@today | ssh 192.168.56.102 zfs recv -F zroot/homeback
Once the snapshot has been sent over to the remote system (this can take some time), validate that the snapshot is available on the remote system.
# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
zroot/homeback@today 49.3M - 155G -
Now there is a snapshot of the home filesystem on both the local system and the remote system. Any changes made after the snapshot is created are recorded by ZFS. This is why snapshots, though useful, must be carefully managed, as they can grow to be quite large. From the zfs man page: "As data within the active dataset changes, the snapshot consumes more data than would otherwise be shared with the active dataset."
# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
tank/usr/home@today 102.2M - 155G -
A way to work with changes to the active dataset is to create incremental snapshots. This is a feature in ZFS that allows the differences between snapshots to be stored and sent. In this test setup, the incremental differences are sent to the remote system after creating a new snapshot. The first step is to rename the last snapshot (today) to yesterday. (The script destroys the yesterday snapshot if it already exists on both systems.)
# ssh 192.168.56.102 zfs destroy zroot/homeback@yesterday
# zfs destroy tank/usr/home@yesterday
# zfs rename -r tank/usr/home@today @yesterday
# ssh 192.168.56.102 zfs rename -r zroot/homeback@today @yesterday
Now a snapshot is taken of the home filesystem for today and the incremental snapshot is sent to the remote system.
# zfs snapshot -r tank/usr/home@today
# zfs send -R -i tank/usr/home@yesterday tank/usr/home@today | ssh 192.168.56.102 zfs recv -F zroot/homeback
Now snapshots are available for the data on the local system, and if anything happens to the local system, the remote system has the ability to recover the data or rebuild the home dataset on the local system. There is a great example of how to extend this daily operation to store snapshots over the course of seven days. This is Example 15 from the ZFS man page: The following example shows how to maintain a history of snapshots with a consistent naming scheme. To keep a week's worth of snapshots, the user destroys the oldest snapshot, renames the remaining snapshots, and then creates a new snapshot, as follows:
# zfs destroy -r pool/users@7daysago
# zfs rename -r pool/users@6daysago @7daysago
# zfs rename -r pool/users@5daysago @6daysago
# zfs rename -r pool/users@4daysago @5daysago
# zfs rename -r pool/users@3daysago @4daysago
# zfs rename -r pool/users@2daysago @3daysago
# zfs rename -r pool/users@yesterday @2daysago
# zfs rename -r pool/users@today @yesterday
# zfs snapshot -r pool/users@today
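If the local dataset ever needs to be rebuilt from the remote copy, the same send/receive pipeline can be run in the opposite direction (a sketch only; adjust dataset names to match your layout):
# ssh 192.168.56.102 zfs send -R zroot/homeback@today | zfs recv -F tank/usr/home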
The simple script is available on GitHub: zfsbackup.
Once the variables are set for the local/remote systems (including the local and remote datasets), a cron job can be set up to run the backup daily or weekly to maintain snapshots.
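For example, a root crontab entry along these lines would run the backup nightly at 2 AM (the script path is an assumption about where you saved it):
# crontab -e
0 2 * * * /root/bin/zfsbackup.sh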




[03/27/2014] Full Featured FreeBSD Desktop in 5 minutes.

The FreeBSD community has been very excited about pkgng, the next generation package manager now installed by default with FreeBSD 10. pkgng allows for binary packages to be installed in a similar fashion to yum or apt-get on Linux. The most important feature of binary packages is the speed at which a system can be deployed. pkgng allows for the creation of custom repos that can be configured with pkg.conf files.
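As a rough sketch of a custom repository definition (the repository name and URL are purely illustrative, and newer pkg versions read these files from /usr/local/etc/pkg/repos/ rather than pkg.conf itself):
# cat /usr/local/etc/pkg/repos/myrepo.conf
myrepo: {
    url: "http://pkg.example.org/myrepo",
    enabled: yes
}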

It should be noted that PC-BSD provides the most complete desktop installation with numerous configuration tools and features for novice and advanced users, including remote backups with Life Preserver and simplified jails management with Warden. The idea here is to provide a minimal but functional desktop with FreeBSD 10 in just a couple commands, as sometimes it is necessary to have a running web browser to be able to configure a device or virtual machine (pfSense for example). These steps assume a basic install of FreeBSD 10 with the ports tree.

The first step is to update/install pkgng. The first time pkgng is run, it will need to be bootstrapped:
pkg update -f
Type "yes" and press Enter to setup pkgng which will also update pkgng to the latest version.
The following will install the minimum required Xorg server packages that will allow for the X server to start:
pkg install -y xorg-server xinit xf86-input-keyboard xf86-input-mouse
Depending on your preference, fluxbox and i3wm are minimalist window managers for FreeBSD. In this example, I installed i3 to be the window manager:
pkg install -y i3 i3lock i3status
Running the next set of commands (csh syntax) will set the default window manager for all users on the system (except root, as root should not be running X):
foreach dir (`ls /usr/home`)
echo "/usr/local/bin/i3" >> /usr/home/$dir/.xinitrc
chown $dir /usr/home/$dir/.xinitrc
end
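If your shell is sh rather than csh, an equivalent loop (a sketch that does the same thing) would be:
for dir in /usr/home/*; do
    echo "/usr/local/bin/i3" >> $dir/.xinitrc
    chown $(basename $dir) $dir/.xinitrc
done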
If this desktop is running as a guest OS in VirtualBox, run the following commands to install the VirtualBox drivers and start the vbox services on boot:
pkg install -y virtualbox-ose-additions
cat << EOF >> /etc/rc.conf
vboxguest_enable="YES"
vboxservice_enable="YES"
EOF
If you are not running within VirtualBox, you can install the basic failsafe drivers that include VESA to set up a normal VGA desktop:
pkg install -y xorg-drivers
At this point, I add the bare minimum of software I will need. LibreOffice could be omitted, but the following is a short list of useful software I add:
pkg install -y rxvt-unicode zsh sudo chromium tmux libreoffice gnupg pinentry-curses
When using chromium, the sem module must be loaded in the kernel. The linux compatibility module is also set to load at boot with the following commands:
echo 'sem_load="YES"' >> /boot/loader.conf
echo 'linux_load="YES"' >> /boot/loader.conf
hald and dbus are required for running X, so they must be enabled in rc.conf:
cat << EOF >> /etc/rc.conf
hald_enable="YES"
dbus_enable="YES"
EOF
There is a sysctl value that is required when running chromium. The following will add this value to sysctl.conf:
cat << EOF >> /etc/sysctl.conf
#Required for chrome
kern.ipc.shm_allow_removed=1
EOF
The last thing to do is reboot to load up all of the necessary modules. Running "startx" as any user will load up the i3 desktop.
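If you want to confirm the modules were actually loaded after the reboot, kldstat can be checked before starting X (a hedged check; the module names match the loader.conf entries above, assuming they are loaded as modules rather than built into the kernel):
# kldstat | grep -E 'sem|linux'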
I have created a script to build the desktop called The 5 Minute FreeBSD Desktop, which is hosted over on GitHub. Feel free to edit the script to add in whatever software you wish.

The next step is to repackage the FreeBSD install disk with this script to run on first boot.




[01/03/2014] Updates for the Daemon Security Blog and website

An archive has been created to save older blog postings in an effort to provide updated content for current initiatives. The Snorby installation script has been placed on GitHub, where development will be tracked. New blog postings will be coming in the next few weeks, including the steps for building Bro IDS on OpenBSD and other BSD configuration options.
