OpenBSD FAQ - Disk Setup

Disks and partitions

The details of setting up disks in OpenBSD vary between platforms, so you should read the installation instructions in the INSTALL.<arch> file for your platform to determine the specifics for your system.

Drive identification

OpenBSD handles mass storage with two drivers on most platforms, depending upon the command set the device speaks: wd(4) for drives using an IDE-like command set, and sd(4) for drives using a SCSI-like command set (which includes SATA, USB and SAS drives). The first drive of a particular type identified by OpenBSD will be drive 0, the second will be 1, and so on. So, the first IDE-like disk will be wd0 and the third SCSI-like disk will be sd2. If you have two SCSI-like drives and three IDE-like drives on a system, you will have sd0, sd1, wd0, wd1 and wd2 on that machine. The numbering is based on the order in which the drives are found during hardware discovery at boot. There are a few key points to keep in mind:


For historical reasons, the term "partition" is regularly used for two different things in OpenBSD: all OpenBSD platforms use disklabel(8) as the primary way to manage OpenBSD filesystem partitions, while some platforms also require fdisk(8) to manage MBR partitions. On the platforms that use them, one fdisk partition is used to hold all of the OpenBSD file systems. This partition is then sliced up into 16 disklabel partitions, labeled a through p. A few of these are special: the c partition always refers to the entire disk, the a partition of the boot disk is the root file system, and the b partition of the boot disk is normally swap.

Partition identification

An OpenBSD filesystem is identified by the disk it is on, plus the disklabel partition on that disk. So, file systems may be identified by identifiers like sd0a (the a partition of the first sd device), wd2h (the h partition of the third wd device), or sd1c (the entire second sd device). The corresponding device files would be /dev/sd0a for the block device and /dev/rsd0a for the raw (character) device, etc. Remembering whether a rarely used command needs a block or a character device is difficult. Therefore, many commands make use of the opendev(3) function, which automatically expands sd0 to /dev/rsd0c or wd0a to /dev/wd0a, as appropriate.

If you put data on wd2d, then later remove wd1 from the system and reboot, your data is now on wd1d, as your old wd2 is now wd1. However, a drive's identification won't change after boot, so if a USB drive is unplugged or fails, it won't change the identification of other drives until reboot.
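On a running system, you can see which disks the kernel attached, and in what order, with sysctl(8). This is only a sketch; the names and DUIDs will of course differ on your machine:

```shell
# List every disk the kernel attached, in discovery order.
# Output has the form: hw.disknames=wd0:<duid>,wd1:<duid>,sd0:<duid>,...
sysctl hw.disknames
```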

Disklabel Unique Identifiers

Disks can also be identified by Disklabel Unique Identifiers (DUIDs): 16-hex-digit random numbers generated when a disklabel is first created, managed by the diskmap(4) device. These DUIDs are persistent: if you identify your disks this way, drive f18e359c8fa2522b will always be f18e359c8fa2522b, no matter in what order or on what kind of interface it is attached. You can specify partitions on the disk by appending a period and the partition letter; for example, f18e359c8fa2522b.d is the d partition of the disk f18e359c8fa2522b and will always refer to the same chunk of storage.
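For example, an fstab(5) entry can use the DUID in place of a device name. The mount point and options below are hypothetical:

```text
# /etc/fstab entry by DUID: survives drives being renumbered or moved
# between interfaces. (Mount point and options are examples only.)
f18e359c8fa2522b.d /data ffs rw,nodev,nosuid 1 2
```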

Using OpenBSD's fdisk(8)

Be sure to check the fdisk(8) man page.

fdisk(8) is used on some platforms (i386, amd64, macppc, zaurus and armish) to create a partition recognized by the system boot ROM, into which the OpenBSD disklabel partitions can be placed. Unlike the fdisk-like programs on some other operating systems, OpenBSD's fdisk(8) assumes you know what you want to do.

Normally, only one OpenBSD fdisk partition will be placed on a disk. That partition will be subdivided by disklabel into OpenBSD filesystem partitions.

To just view your partition table using fdisk, use:

$ fdisk sd0
This will give output similar to the following:
Disk: sd0       geometry: 553/255/63 [8883945 Sectors]
Offset: 0       Signature: 0xAA55
         Starting       Ending       LBA Info:
 #: id    C   H  S -    C   H  S [       start:      size   ]
*0: A6    3   0  1 -  552 254 63 [       48195:     8835750 ] OpenBSD
 1: 12    0   1  1 -    2 254 63 [          63:       48132 ] Compaq Diag.
 2: 00    0   0  0 -    0   0  0 [           0:           0 ] unused
 3: 00    0   0  0 -    0   0  0 [           0:           0 ] unused
In this example, we are viewing the fdisk output of the first SCSI-like drive. We can see the OpenBSD partition (id A6) and its size. The * tells us that the OpenBSD partition is the bootable partition.

Edit the partition table with the -e flag:

# fdisk -e sd0
Enter 'help' for information
fdisk: 1>

fdisk tricks and tips

Using OpenBSD's disklabel(8)

What is disklabel(8)?

First, be sure to read the disklabel(8) man page.

The details of setting up disks in OpenBSD vary somewhat between platforms. On i386, amd64, macppc, zaurus and armish, disk setup is done in two stages: first, the OpenBSD slice of the hard disk is defined using fdisk(8), then that slice is subdivided into OpenBSD partitions using disklabel(8).

All OpenBSD platforms, however, use disklabel(8) as the primary way to manage OpenBSD partitions. Labels hold certain information about your disk, like your drive geometry and information about the filesystems on the disk. The disklabel is then used by the bootstrap program to access the drive and to know where filesystems are contained on the drive. You can read more in-depth information about disklabel in the disklabel(5) man page.

On some platforms, disklabel helps overcome architecture limitations on disk partitioning. For example, on i386 the MBR allows only four primary partitions. With disklabel(8), you use one of these primary partitions to store all of your OpenBSD partitions, leaving the other three available for other operating systems.

disklabel(8) during OpenBSD's install

By default, the installer will lay out the disklabel partitions automatically.

Disklabel basics

Miscellaneous disklabel tidbits

Recovering partitions after deleting the disklabel

If you have a damaged partition table, there are various things you can attempt to do to recover it.

A copy of the disklabel for each disk is saved in /var/backups as part of the daily system maintenance. Assuming you still have the /var partition, you can simply read the saved label and feed it back to disklabel(8).
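A sketch of restoring a label from that nightly backup; the backup file name shown here is an assumption, so check /var/backups for the actual name on your system:

```shell
# Restore a saved label (here for sd0) from the nightly backup copy.
# The exact file name under /var/backups may differ on your release.
disklabel -R sd0 /var/backups/disklabel.sd0.current
```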

In the event that you can no longer see that partition, there are two options: fix enough of the disk so you can see it again, or fix enough of the disk so that you can get your data off.

The first tool to reach for is scan_ffs(8), which will look through a disk, try to find partitions, and report what information it finds about them. You can use this information to recreate the disklabel. If you just want /var back, you can recreate the partition for /var, then recover the backed-up label and restore the rest from that.

disklabel(8) updates both the kernel's understanding of the disklabel and attempts to write the label to disk. Therefore, even if the area of the disk containing the disklabel is unwritable, you will be able to mount(8) the partitions until the next reboot.

How does OpenBSD/amd64 boot?

Details on the amd64 bootstrapping procedures are given in the boot_amd64(8) man page. There are four key pieces to the boot process:
  1. Master Boot Record (MBR) and GUID Partition Table (GPT): The fdisk(8) man page contains detailed explanations.
  2. Partition Boot Record (PBR): The first-stage boot loader biosboot(8) occupies the first 512 bytes of the OpenBSD partition of the disk and is therefore called the PBR. It is installed by installboot(8).
  3. Second Stage Boot Loader /boot: The boot(8) program is loaded by the PBR and has the task of accessing the OpenBSD file system through the machine's BIOS. It locates and loads the kernel.
  4. Kernel: /bsd: The goal of the boot process is to have the OpenBSD kernel loaded into RAM and properly running. Once the kernel has loaded, OpenBSD accesses the hardware directly, no longer through the BIOS.
So, the very start of the boot process could look like this:
Using drive 0, partition 3.                      <- MBR
Loading....                                      <- PBR
probing: pc0 com0 com1 apm mem[636k 190M a20=on] <- /boot
disk: fd0 hd0+
>> OpenBSD/i386 BOOT 3.26
booting hd0a:/bsd 4464500+838332 [58+204240+181750]=0x56cfd0
entry point at 0x100120

[ using 386464 bytes of bsd ELF symbol table ]
Copyright (c) 1982, 1986, 1989, 1991, 1993       <- Kernel
        The Regents of the University of California.  All rights reserved.

Soft updates

Soft updates are based on an idea proposed by Greg Ganger and Yale Patt and developed for FreeBSD by Kirk McKusick. Soft updates impose a partial ordering on buffer cache operations, which removes the requirement that the FFS code write directory entries synchronously. The result is a large increase in disk-writing performance.

Soft updates are enabled with a mount-time option. When mounting a partition with the mount(8) utility, you can specify that you wish to have soft updates enabled on that partition. Below is a sample fstab(5) entry for a partition, sd0a, to be mounted with soft updates:

/dev/sd0a / ffs rw,softdep 1 1
Note to sparc users: Do not enable soft updates on sun4 or sun4c machines. These architectures support only a very limited amount of kernel memory and cannot use this feature. However, sun4m machines are fine.

Duplicating your root partition: /altroot

OpenBSD provides an /altroot facility in the daily(8) scripts. If the environment variable ROOTBACKUP=1 is set in either /etc/daily.local or root's crontab(5), and a partition is listed in fstab(5) with the mount point /altroot and the mount options "xx", the entire contents of the root partition will be duplicated to the /altroot partition every night.

Assuming you want to back up your root partition to the partition specified by the DUID bfb4775bb8397569.a, add the following to /etc/fstab

bfb4775bb8397569.a /altroot ffs xx 0 0
and set the appropriate environment variable in /etc/daily.local:
# echo ROOTBACKUP=1 >>/etc/daily.local
As the /altroot process captures your /etc directory, any configuration changes there are backed up daily. This is a "disk image" copy done with dd(1), not a file-by-file copy, so your /altroot partition should be at least the same size as your root partition. Generally, you will want the /altroot partition on a different disk that has been configured to be fully bootable should the primary disk fail.

Can I access data on filesystems other than FFS?

Yes. Start with the mount(8) manual which contains examples explaining how to mount some of the most commonly used filesystems. A partial list of supported filesystems and related commands can be obtained with
$ man -k -s 8 mount
Note that support may be limited to read-only operation.

Mounting disk images in OpenBSD

To mount a disk image in OpenBSD you must configure a vnd(4) device. For example, if you have an ISO image located at /tmp/ISO.image, you would take the following steps to mount the image.
# vnconfig vnd0 /tmp/ISO.image
# mount -t cd9660 /dev/vnd0c /mnt
Since this is an ISO 9660 image, as used by CDs and DVDs, you must specify a type of cd9660 when mounting it.

To unmount the image and unconfigure the vnd(4) device, do:

# umount /mnt
# vnconfig -u vnd0
For more information, refer to vnconfig(8) and mount(8).

Why does df(1) tell me I have over 100% of my disk used?

People are sometimes surprised to find they have negative available disk space, or more than 100% of a filesystem in use, as shown by df(1).

When a filesystem is created with newfs(8), some of the available space is held in reserve from normal users. This provides a margin of error when you accidentally fill the disk and helps keep disk fragmentation to a minimum. The default is 5% of the disk capacity, so if the root user has been carelessly filling the disk, you may see up to 105% of the available capacity in use.
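The arithmetic behind this can be sketched with hypothetical numbers (the block counts below are made up, and df(1)'s exact accounting differs slightly):

```shell
# Toy model of df's percentage: "capacity" excludes the 5% reserve,
# so a disk completely filled by root reports roughly 105% in use.
total=1000000                          # total blocks in the filesystem
reserved=$((total * 5 / 100))          # the 5% minfree reserve
used=$total                            # root has filled the whole disk
avail=$((total - reserved - used))     # goes negative once the reserve is used
pct=$((used * 100 / (total - reserved)))
echo "avail=${avail} capacity=${pct}%"
```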

If the 5% value is not appropriate for you, you can change it with the tunefs(8) command.

How do I use softraid?

The softraid(4) subsystem works by emulating a scsibus(4) with sd(4) devices, made by combining a number of OpenBSD disklabel(8) partitions into a virtual disk with the desired RAID level. Note that only RAID0, RAID1, RAID5 and crypto are fully supported at the moment. This virtual disk is treated like any other disk: first partitioned with fdisk(8) (on fdisk platforms), then given disklabel partitions as usual.

Some words on RAID in general:

Installing to a mirror

The tools to assemble your softraid system are in the basic OpenBSD install (for adding softraid devices after install), but they are also available on the CD-ROM and bsd.rd for installing your system to a softraid setup. This section covers installing OpenBSD to a mirrored pair of hard drives, and assumes familiarity with the installation process and ramdisk kernel. Disk setup may vary from platform to platform, and booting from softraid devices isn't supported on all of them. It's currently only possible to boot from RAID1, RAID5 and crypto volumes on i386, amd64 and sparc64.

The installation process will be a little different than the standard OpenBSD install, as you will want to drop to the shell and create your softraid(4) drive before doing the install. Once the softraid(4) disk is created, you will perform the install relatively normally, placing the partitions you wish to be RAIDed on the newly configured drive. If it sounds confusing at first, don't worry. All the steps will be explained in detail.

The install kernel only has the /dev entries for one wd(4) device and one sd(4) device on boot, so you will need to manually create more disk devices if your desired softraid setup requires them. This process is normally done automatically by the installer, but you haven't yet run the installer, and you will be adding a disk that didn't exist at boot. For example, if we needed to support a second wd(4) device for a mirrored setup, you could do the following from the shell prompt:

Welcome to the OpenBSD/amd64 X.X installation program.
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell? s
# cd /dev
# sh MAKEDEV wd1
You now have full support for the wd0 and wd1 devices.

Next, we'll initialize the disks with fdisk(8) and create the softraid partition with disklabel(8). An "a" partition will be made on both of the drives for the new RAID device.

# fdisk -iy wd0
Writing MBR at offset 0.
# fdisk -iy wd1
Writing MBR at offset 0.
# disklabel -E wd0
Label editor (enter '?' for help at any prompt)
> a a
offset: [2104515]
size: [39825135] *
FS type: [4.2BSD] RAID
> w
> q
No label changes.
You'll notice that we initialized both disks but only created a partition layout on the first drive. That's because you can copy the first drive's layout directly to the second with the disklabel(8) command.
# disklabel wd0 > layout
# disklabel -R wd1 layout
# rm layout
The "layout" file in this example can be named anything.

Next, create the mirror with the bioctl(8) command.

# bioctl -c 1 -l wd0a,wd1a softraid0
Note that if you are creating multiple RAID devices, either on one disk or on multiple devices, you're always going to be using the softraid0 virtual disk interface driver. You won't be using "softraid1" or others. The "softraid0" there is a virtual RAID controller, and you can hang many virtual disks off this controller.

The new pseudo-disk device will show up as sd0 here, assuming there are no other sd(4) devices on your system. This device will now show on the system console and dmesg as a newly installed device:

scsibus1 at softraid0: 1 targets
sd0 at scsibus2 targ 0 lun 0: <OPENBSD, SR RAID 1, 005> SCSI2 0/direct fixed
sd0: 10244MB, 512 bytes/sec, 20980362 sec total
This shows that we now have a new SCSI bus and a new disk, sd0. This volume will be automatically detected and assembled from this point onward when the system boots.

Because the new device probably has a lot of garbage where you expect a master boot record and disklabel, zeroing the first chunk of it is highly recommended. Be very careful with this command; issuing it on the wrong device could lead to a very bad day. This assumes that the new softraid device was created as sd0.

# dd if=/dev/zero of=/dev/rsd0c bs=1m count=1
You are now ready to install OpenBSD on your system. Perform the install as normal by invoking "install" or "exit" at the boot media console. Create all the partitions on your new softraid disk (sd0 in our example here) that should be there, rather than on wd0 or wd1 (the non-RAID disks).

Now you can reboot your system and, if you have done things properly, it will automatically assemble your RAID set and mount the appropriate partitions.

To check on the status of your mirror, issue the following command:

# bioctl sd0
A nightly cron job to check the status might also be a good idea.
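Such a job could look like the following root crontab(5) entry; the schedule is arbitrary, and cron mails any output to root:

```text
# Hypothetical root crontab entry: report the mirror status at 01:30
# every night.
30 1 * * * /sbin/bioctl sd0
```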

Full disk encryption

Much like RAID, full disk encryption in OpenBSD is handled by the softraid(4) subsystem and bioctl(8) command. This section covers installing OpenBSD to a single encrypted disk, and is a very similar process to the previous one.

Select (S)hell at the initial prompt.

Welcome to the OpenBSD/amd64 X.X installation program.
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell? s
From here, you'll be given a shell within the live environment to manipulate the disks. For this example, we will install to the wd0 SATA drive, erasing all of its previous contents. You may want to write random data to the drive first with something like the following:
# dd if=/dev/random of=/dev/rwd0c bs=1m
This can be a very time-consuming process, depending on the speed of your CPU and disk, as well as the size of the disk. If you don't write random data to the whole device, it may be possible for an adversary to deduce how much space is actually being used.

Next, we'll initialize the disk with fdisk(8) and create the softraid partition with disklabel(8).

# fdisk -iy wd0
Writing MBR at offset 0.
# disklabel -E wd0
Label editor (enter '?' for help at any prompt)
> a a
offset: [2104515]
size: [39825135] *
FS type: [4.2BSD] RAID
> w
> q
No label changes.
We'll use the entire disk, but note that the encrypted device can be split up into multiple mount points as if it were a regular hard drive. Now it's time to build the encrypted device on our "a" partition.
# bioctl -c C -l wd0a softraid0
New passphrase:
Re-type passphrase:
sd0 at scsibus2 targ 1 lun 0: <OPENBSD, SR CRYPTO, 005> SCSI2 0/direct fixed
sd0: 19445MB, 512 bytes/sector, 39824607 sectors
softraid0: CRYPTO volume attached as sd0
All data written to sd0 will now be encrypted (with AES in XTS mode) by default.

As in the previous example, we'll overwrite the first megabyte of our new pseudo-device.

# dd if=/dev/zero of=/dev/rsd0c bs=1m count=1
Type exit to return to the main installer, then choose this new device as the one for your installation.
Available disks are: wd0 sd0.
Which disk is the root disk? ('?' for details) [wd0] sd0
You will be prompted for the passphrase on startup, but all other operations should be handled transparently.

Encrypting external disks

As we just illustrated, cryptographic softraid(4) volumes are set up rather simply. This section explains how you might do the same for an external USB flash drive, though it can be applied to any disk device. If you already read the section on full disk encryption, this should be very familiar. A quick example run-through of the steps follows, with sd0 being the USB drive.
# dd if=/dev/random of=/dev/rsd0c bs=1m
# fdisk -iy sd0
# disklabel -E sd0 (create an "a" partition, see above for more info)
# bioctl -c C -l sd0a softraid0
New passphrase:
Re-type passphrase:
softraid0: CRYPTO volume attached as sd1
# dd if=/dev/zero of=/dev/rsd1c bs=1m count=1
# disklabel -E sd1 (create an "i" partition, see above for more info)
# newfs sd1i
# mkdir -p /mnt/secretstuff
# mount /dev/sd1i /mnt/secretstuff
# mv planstotakeovertheworld.txt /mnt/secretstuff/
# umount /mnt/secretstuff
# bioctl -d sd1
Next time you need to access the drive, simply use bioctl(8) to attach it and then repeat the last four commands as needed.

The man page for this looks a little scary, as the -d command is described as "deleting" the volume. In the case of crypto volumes, however, it just detaches the encrypted volume so it can't be accessed until it is attached again with the passphrase.

Many other options are available with softraid, and new features are being added and improvements made, so do consult the aforementioned man pages for detailed information.

I forgot my passphrase!

Sorry. This is real encryption: there is no back door or magic unlocking tool. If you lose your passphrase, the data on your softraid crypto volume will be unusable.

Disaster recovery

This is the section you want to skip over, but don't. This is the reason for RAID: if disks never failed, you wouldn't add the complexity of RAID to your system! Unfortunately, failures are very difficult to list comprehensively, so there is a strong probability that the event you experience won't be described exactly here. But if you take the time to understand these strategies, and the WHY behind them, you can hopefully use them to recover from whatever situations come your way.

Keep in mind, failures are often not simple. The author of this article had a drive in a hardware RAID setup develop a short across the power feed. In addition to the drive itself, this required replacing the power supply, the RAID enclosure, a power supply on a second computer used to verify that the drive was actually dead, and restoring the data from backup, as he hadn't properly configured the replacement enclosure.

The steps needed for system recovery can be performed in single user mode, or from the install kernel (bsd.rd).

If you plan on practicing softraid recovery (and we highly suggest you do so!), you may find it helpful to zero a drive you remove from the array before you attempt to return it to the array. Not only does this more accurately simulate replacing the drive with a new one, it will avoid the confusion that can result when the system detects the remains of a softraid array.

Recovery from a failure will often be a two-stage event -- the first stage is bringing the system back up to a running state, the second stage is to rebuild the failed array. The two stages may be separated by some time if you don't have a replacement drive handy.

Recovery from drive failure: secondary

This is relatively easy. You may have to remove the failed disk to get the system back up.

When you are ready to repair the system, replace the failed drive, recreate the RAID and any other disklabel partitions, then rebuild the mirror. Assuming your RAID volume is sd0 and the replacement chunk is the partition wd1m, the following process should work:

# bioctl -R /dev/wd1m sd0
# reboot

Recovery from drive failure: primary

Many PC-like computers cannot boot from a secondary drive while a failed primary drive is still attached, unless the failed drive is so dead it isn't detected at all. Many cannot boot from a drive that isn't the "primary", even if there is no other drive present.

In general, if your primary drive fails, you will have to remove it, and in many cases "promote" your secondary drive to the primary position before the system will boot. This may involve re-jumpering the disk, plugging it into another port, or some other variation. Of course, the secondary disk must not only hold your RAID partition, it must also be functionally bootable.

Once you have the system back up on the secondary disk and a new disk in place, you rebuild as above.

Recovery from "shuffling" your disks

What if you have four disks in your system, say, sd0, sd1, sd2, and sd3, and for reasons of hardware replacement or upgrade, you end up with the drives out of the machine, and lose track of which was which?

Fortunately, softraid handles this very well: it considers the disks "roaming" and will successfully reassemble your arrays. However, the boot disk in the machine has to be bootable, and if you made changes to the root partition just before shuffling the disks, you will want to be sure you didn't boot from your altroot partition by mistake.

Softraid notes

Complications when other sd(4) disks exist

Softraid disks are assembled after all other IDE, SATA, SAS and SCSI disks are attached. As a result, if the number of sd(4) devices changes (either by adding or removing devices -- or if a device fails), the identifier of the softraid disk will change. For this reason, it's important to use DUIDs (Disklabel Unique Identifiers) rather than drive names in your fstab(5) file.

Three disk RAID1?

Softraid supports RAID1 with more than two "chunks," and the man page examples show a three-disk RAID1 configuration. RAID1 simply duplicates the data across all the chunks of storage: two chunks give full redundancy, and three give additional fault tolerance. The advantage of RAID1 with three (or more) disks/chunks is that, in the event of one disk failure, you still have complete redundancy. Think of it as a hot spare that doesn't need time to rebuild!
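Creating such a mirror is the same bioctl(8) invocation as the two-disk case, just with a third chunk in the list. The partition names here are assumptions; each must be a disklabel partition of type RAID:

```shell
# Three-way RAID1: all writes go to all three chunks.
# wd0a, wd1a and wd2a are example RAID-type disklabel partitions.
bioctl -c 1 -l wd0a,wd1a,wd2a softraid0
```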