bhyve is FreeBSD's hypervisor. The native setup guide is here. However, there is a higher-level wrapper called vm-bhyve; I'm using that.

I'm following the installation guide in the README. In summary:

pkg install vm-bhyve
pkg install grub2-bhyve # for linux guests
pkg install bhyve-firmware # for UEFI support
zfs create zroot/vm

# update /etc/rc.conf
sysrc vm_enable="YES"
sysrc vm_dir="zfs:zroot/vm"

vm init
cp /usr/local/share/examples/vm-bhyve/* /zroot/vm/.templates/

# This creates network interface `vm-public`
vm switch create public
vm switch add public em0

vm iso http://repo1.sea.innoscale.net/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso
vm iso http://releases.ubuntu.com/18.04.4/ubuntu-18.04.4-live-server-amd64.iso

A few notes:

I created a ZFS dataset zroot/vm for storing the VMs, and set vm_dir to zfs:zroot/vm.

Following the guide, I named the vm switch public, and I can see a network interface vm-public was created for me:

vm-public: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether 06:b3:5f:a8:b6:2c
nd6 options=1<PERFORMNUD>
groups: bridge vm-switch viid-4c918@
id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
member: em0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 1 priority 128 path cost 200000

Choice of Linux distro and ISO

I initially wanted to use CentOS. However, CentOS defaults to a graphical installation, which I wasn't able to get working (bhyve seems to support VNC for the UEFI framebuffer, but I didn't get that working either).
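For reference, vm-bhyve does expose bhyve's UEFI framebuffer over VNC through a few guest config options (names per the vm-bhyve documentation; this is a sketch I haven't verified end to end):

```shell
# fragment of a vm-bhyve guest .conf -- untested sketch
loader="uefi"
graphics="yes"            # enable the UEFI framebuffer, served over VNC
graphics_listen="0.0.0.0" # address the VNC server binds to
graphics_port="5900"      # connect a VNC client to <host>:5900
graphics_wait="yes"       # pause boot until a VNC client attaches
```

In principle this should make a graphical installer like CentOS's reachable from any VNC client.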

I had better luck with Ubuntu Server. It offers a text-mode installation, and I was able to start and install it:

vm iso http://releases.ubuntu.com/18.04.4/ubuntu-18.04.4-live-server-amd64.iso
vm create -t ubuntu ub
vm install ub ubuntu-18.04.4-live-server-amd64.iso

vm console ub # attaches to the guest OS console
# ... in a few moments the Ubuntu ISO started and installation proceeded normally ...

The security update at the end of the Ubuntu installation failed, so I chose "cancel update and reboot".

Fixing Grub

After reboot, Ubuntu Server dropped into the grub prompt. This SO post was helpful: it means grub can't find the root partition. So at the grub prompt:

set prefix=(hd0,gpt2)/boot/grub
set root=(hd0,gpt2)
insmod linux
insmod normal
normal

This started Ubuntu normally. Then, to fix grub, issue:

sudo update-grub

However, after reboot it got stuck at the grub prompt again...

[WORKING SOLUTION] It was because Ubuntu Server installs /boot to the 2nd partition, while by default bhyve's grub looks for it in the 1st partition. See here.

The fix is to add:

grub_run_partition="2"

into the VM's conf (/zroot/vm/ub/ub.conf).

Now the VM can start normally.

Here's the VM config that works for Ubuntu Server 18.04.4:

loader="uefi"
# ubuntu server installs boot to 2nd partition
grub_run_partition="2"
cpu=1
memory=512M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"

Cloud images

Trying out cloud images. The Ubuntu Server minimal image is only 100+ MB:

pkg install qemu-utils  # needed for using cloud images
vm img https://cloud-images.ubuntu.com/minimal/releases/bionic/release-20200318/ubuntu-18.04-minimal-cloudimg-amd64.img

Cloud images don't allow password login. Luckily vm-bhyve also supports cloud-init, so you can inject an SSH public key:

pkg install cdrkit-genisoimage  # required by cloud init
vm create -t ubuntu_server -i ubuntu-18.04-minimal-cloudimg-amd64.img -C -k ~/.ssh/id_rsa.pub ub2

Cloud init

Cloud-init has two parts:

  1. the cloud-init service running inside the guest machine, which reads data sources (e.g., a mounted ISO "seed.iso") for the init config.
  2. the preparation of the "seed.iso" to "inject" the init config. This is done on the host, before booting the guest VM for the first time.
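For the NoCloud data source that seed.iso serves, the ISO typically carries a meta-data file alongside user-data. A minimal sketch (the instance-id and hostname values are made-up examples):

```yaml
# .cloud-init/meta-data -- example values, adjust to your VM
instance-id: ub2-001
local-hostname: ub2
```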

At the moment, vm-bhyve doesn't support all of cloud-init's functions. Its support basically hooks into the vm create command:

  1. it reads the cloud-init related configs it understands (very limited)
  2. dumps them into the <vm>/.cloud-init folder
  3. creates the seed.iso by invoking genisoimage -output ./seed.iso -volid cidata -joliet -rock .cloud-init/*

And cloud-init will be triggered the first time the VM is booted.

So to circumvent vm-bhyve's limitation, I:

  1. create a new VM
  2. create a .cloud-init folder with proper config files as I desire
  3. create the seed.iso manually
  4. boot VM to trigger cloud-init
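Put together, the manual steps above look roughly like this (the VM name ub3 is hypothetical, and paths assume my vm_dir of zfs:zroot/vm):

```shell
# sketch of manual cloud-init seeding for a hypothetical VM "ub3"
cd /zroot/vm/ub3
mkdir -p .cloud-init
# ... write .cloud-init/user-data and .cloud-init/meta-data by hand ...
genisoimage -output ./seed.iso -volid cidata -joliet -rock .cloud-init/*
vm start ub3   # cloud-init runs on this first boot
```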

The cloud-init docs are hard to read, but can be found here. Specifically:

  • user-data spec
  • network config: https://cloudinit.readthedocs.io/en/latest/topics/network-config.html

To allow user password login, use user-data content like:

#cloud-config
groups:
  - ubuntu: [root, sys]
  - cloud-users

users:
  - default
  - name: <username>
    groups: users
    lock_passwd: false
    passwd: <password hash>

The passwd hash is generated on another Linux machine, using mkpasswd --method=SHA-512 --rounds=4096. See here.
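If mkpasswd isn't at hand, a SHA-512 crypt hash can also be generated with OpenSSL (the -6 flag needs OpenSSL 1.1.1 or newer; the salt and password here are throwaway examples):

```shell
# prints a $6$... (SHA-512 crypt) hash suitable for the passwd field
openssl passwd -6 -salt examplesalt 'changeme'
```

Without -salt, openssl picks a random salt, so the output differs on every run; that's fine for cloud-init, since the salt is embedded in the hash.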

A few tips:

  • on FreeBSD, the content of the seed.iso can be easily inspected with tar xzf seed.iso (careful: this dumps the content into the current directory, so you might want to do this in a new folder)
  • the cloud-init operations can be seen in the boot messages on the guest console when the VM is booted for the first time. Subsequent boots don't seem to trigger cloud-init.

Some Networking Learning

ifconfig bridge create [bridge-name]
ifconfig tap create [tap-name]

# assign address to interface
ifconfig <interface> <addr>

# link networks using a bridge/switch:
ifconfig <bridge_interface> addm <interface_to_link_1> addm <interface_to_link_2> ...

# e.g., connecting tap0, tap1 to the internet connected em0:
ifconfig bridge0 addm em0 addm tap0 addm tap1

  • A bridge basically connects multiple networks at the MAC layer, i.e., it forwards traffic between the media of the connected interfaces.