As you might be aware, I am working on a minimal BSD/UNIX system called smolBSD. It is based on the NetBSD operating system, and its main target is virtual machines, more specifically microvms. This system is capable of fully booting the OS in less than one second, using a specially trimmed kernel along with small, specialized root filesystems.
I was stunned to learn (but am I wrong?) that this work does not seem to have an equivalent, not even in the Linux world. FreeBSD's Firecracker support does boot a kernel in about the same time my hacked NetBSD kernel does, but there is no work on a slim, task-dedicated root filesystem.
2025/01 update: https://github.com/NetBSDfr/smolBSD/tree/main/k8s
I had to do it.
So here’s how to run a NetBSD micro-vm as… a Kubernetes pod.
First thing is to modify the start script from the previous article in order to add Docker-style networking, i.e. port forwarding from the host to the micro-vm. This is done using the hostfwd flag in qemu's -netdev parameter:
#!/bin/sh
# usage: startnb.sh kernel [root image] [extra disk image]
kernel=$1
img=${2:-"root.img"}
# optional second drive
[ -n "$3" ] && drive2="-drive file=${3},if=virtio"
qemu-system-x86_64 -enable-kvm -m 256 \
    -kernel $kernel -append "console=com root=ld0a" \
    -serial mon:stdio -display none \
    -drive file=${img},if=virtio $drive2 \
    -netdev user,id=net0,hostfwd=tcp::8080-:80 -device virtio-net,netdev=net0
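With that hostfwd rule, whatever listens on port 80 inside the micro-vm becomes reachable on the host's port 8080. Assuming the kernel and root image are named netbsd-SMOL and nginx.img (adjust to your own build), this gives:

$ ./startnb.sh netbsd-SMOL nginx.img
$ curl -I http://localhost:8080/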
In the previous experiment we mapped the kernel and the root image from the host using Docker's -v parameter, and while it is possible to map files from the host using a Kubernetes volume, we will bundle these NetBSD files into the Docker image to make things easier.
Please refer to the mksmolnb documentation to learn how to produce a minimal nginx micro-vm.
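Once an image bundling the kernel, the root filesystem and startnb.sh is pushed to a registry, a plain pod specification is enough to get the micro-vm scheduled. The following is only a minimal sketch: the image name is a placeholder, and privileged mode is the bluntest way of exposing /dev/kvm to QEMU inside the pod (a device plugin would be cleaner):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: smolbsd
spec:
  containers:
  - name: smolbsd
    image: registry.example.com/smolbsd-nginx:latest  # placeholder image name
    ports:
    - containerPort: 8080                             # qemu hostfwd port
    securityContext:
      privileged: true                                # exposes /dev/kvm to QEMU
EOF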
I have had this little toy project for quite a while now, and I have been toying with the idea of handling a fleet of NetBSD micro-vms with Kubernetes since I started my new job, in which I take care of a k8s cluster.
I came to realize that starting a smolBSD micro-vm with Docker was not so difficult after all. Using mksmolnb's startnb.sh, I came up with this very simple Dockerfile:
FROM alpine:latest
# qemu plus the tools needed to set up bridged networking in the container
RUN apk add --quiet --no-cache qemu-system-x86_64 iproute2 bridge-utils
COPY startnb.sh ./
# helper scripts QEMU invokes when creating/destroying the tap interface
COPY qemu/qemu-ifup qemu/qemu-ifdown /etc/
# kernel and images are mapped from the host at run time, see below
CMD /startnb.sh /netbsd-SMOL ${IMG} ${DISK}
qemu-ifup being a simple copy of Debian's /etc/qemu-ifup.
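Building and running it then looks something like this; only a sketch with placeholder paths, assuming the kernel and root image sit in the current directory. QEMU needs /dev/kvm, and the bridged setup additionally wants the NET_ADMIN capability and /dev/net/tun:

$ docker build -t smolbsd .
$ docker run --rm -it --device /dev/kvm --device /dev/net/tun \
    --cap-add NET_ADMIN \
    -v $PWD/netbsd-SMOL:/netbsd-SMOL -v $PWD/nginx.img:/nginx.img \
    -e IMG=/nginx.img smolbsd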
I keep reading overcomplicated QEMU/KVM command lines, when really, to start a virtual machine with a VirtIO disk and a bridged VirtIO NIC, only this command is needed:
$ sudo qemu-system-x86_64 -enable-kvm -m 1024 -cpu host -daemonize \
-drive file=mydisk.img,if=virtio \
-net nic,model=virtio -net tap,ifname=tap0
- drive type is virtio
- nic model is virtio and the interface is of tap type; this will summon /etc/qemu-ifup to attach the interface to your bridge.
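That script is trivial; here is a minimal sketch of what Debian's version boils down to, assuming your bridge is named br0 (Debian's actual script derives the bridge from the default route, and QEMU passes the tap interface name as the first argument):

#!/bin/sh
# minimal qemu-ifup: bring the tap interface up and attach it to the bridge
ip link set "$1" up
brctl addif br0 "$1"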
Depending on your QEMU installation, either this will fire up a window showing your virtual machine booting, or start a VNC server on port 5900.
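In the latter case, any VNC client will do; port 5900 is VNC display 0, so for instance:

$ vncviewer localhost:0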
For a while now, I have noticed that the latest QEMU builds shipped with KVM blow up in mid-air when the VM starts. Being lazy, I kept using the QEMU from version 68, which did work. But eventually it really started to itch. With heavy use of gdb, I figured out that QEMU coredumps when SDL output is enabled at the same time as support for a virtual network card.
On a hunch, I tried switching to a virtual card model different from the rtl8139 emulated by default, since the latest QEMU/KVM versions now support far more of them than before, and bingo, no more Segmentation Fault. So the startup of my NetBSD VMs now looks like this: