
Getting Started with Unikraft: A Development Tool for Unikernel Applications


What is Unikraft

Unikraft is a development kit for running applications as unikernels. It is released as open source under the Linux Foundation as part of the Xen Project.

https://unikraft.org/

https://github.com/unikraft/unikraft

Features

As stated on the project's top page, it has the following features:

  • Easily create high-performance and lightweight unikernel environments.
  • Run applications as lightweight and fast as containers, in a securely isolated environment comparable to a virtual machine.
  • Integration with mainstream container management tools such as Docker.
  • Compatibility with the Linux API.

While there are several other tools for managing unikernels, Unikraft specifically aims to achieve the following:

  • Take the best of both worlds from traditional VMs and containers to create an application platform (unikernel) that is as secure and highly isolated as a VM while being as fast and lightweight as a container.
  • Provide development tools to simplify the building and deployment of applications running on unikernel environments without requiring users to have advanced knowledge.

These aspects are described in the concepts section below:

https://unikraft.org/docs/concepts

The concept of balancing a secure environment like a VM with the lightness of a container is similar to projects like Hyperlight and Kata Containers, which I have introduced previously. In a broad sense, Unikraft can also be classified into this category.

Furthermore, Unikraft is designed so that unikernel applications can be operated with commands such as kraft run and kraft logs, similar to docker commands, allowing those who are already familiar with docker commands to transition seamlessly. As mentioned in the phrase "Ease of use, including integration with Dockerfiles and other mainstream tools," this level of compatibility seems to be a major selling point.

In addition, it supports packaging and running applications written in various languages and frameworks such as Python, Go, and Rust on unikernels. The ability to execute applications regardless of the execution platform can be said to inherit the characteristics of containers.

How it Works

Briefly, Unikraft uses VMMs such as QEMU, Xen, and Firecracker to manage applications built as unikernels.


How Unikraft works. (d) corresponds to Unikraft. Quoted from Concept

By using a VMM, it achieves a level of isolation equivalent to a hypervisor-based VM, and by building the application into a unikernel for execution, it realizes container-like speed and lightness. Unikraft consists of parts for building applications for this purpose, a CLI interface for managing applications with a VMM, and libraries for running applications on unikernels. For more details, refer to the concepts or internals pages.

Verification Environment

In this article, we will verify the operation of Unikraft in the following environment created on Openstack:

  • OS: Ubuntu 24.04
  • Memory: 4 GB
  • vCPU: 4
  • Nested virtualization enabled

I could not find documented hardware requirements, but since Unikraft promotes itself as lightweight, it likely does not need significant resources.
As a preliminary step, we will install QEMU and Docker.

Installing the kraft Command

In Unikraft, you basically use the CLI (kraft) to build and run unikernel applications. Various installation methods are described at https://unikraft.org/docs/cli/install, but the following command is the easiest as it installs dependencies all at once.

curl -sSfL https://get.kraftkit.sh | sh

Following the instructions will install the kraft command and its dependency packages. If you run kraft and see a display like the following, you're all set.

$ kraft
    .
   /^\     Build and use highly customized and ultra-lightweight unikernels.
  :[ ]:
  | = |    Version:          0.11.6
 /|/=\|\   Documentation:    https://unikraft.org/docs/cli
(_:| |:_)  Issues & support: https://github.com/unikraft/kraftkit/issues
   v v     Platform:         https://unikraft.cloud
   ' '

USAGE
  kraft [FLAGS] SUBCOMMAND

...

Running the unikernel Sample App

As a first step, let's run a simple application using the kraft command, referring to the following documentation.

https://unikraft.org/guides/using-the-app-catalog

helloworld

In Unikraft, several basic applications like helloworld and nginx are published as pre-packaged application catalogs. The source code is available in the unikraft/catalog repository, so you can also build them yourself.

https://github.com/unikraft/catalog

Apps in the application catalog are already uploaded to the application registry in a format similar to Docker, such as unikraft.org/nginx:1.15. This allows you to easily pull and use images from your local environment without having to build them yourself. For example, the following command pulls and runs a pre-built helloworld unikernel application from the application registry.

kraft run -W unikraft.org/helloworld

While the above should normally work, in my environment it failed with a QEMU-related error. So here I will instead build manually from the source code in the catalog repository.
First, clone the catalog repository.

git clone https://github.com/unikraft/catalog.git

Applications uploaded to the application registry are located under the library directory, so move there.

cd catalog/library/helloworld

Since the files necessary for image creation are already prepared, you can create it with the kraft build command.
When you run kraft build, you will be asked which VMM and architecture to build the kernel for (the build target). Select qemu/x86_64.

$ kraft build
[?] select target:
  ▸ helloworld (fc/x86_64)
    helloworld (qemu/arm64)
    helloworld (qemu/x86_64)
    helloworld (xen/x86_64)

This will execute the unikernel build. After waiting a while, the unikernel kernel image will be generated in .unikraft/build/helloworld_qemu-x86_64.

$ kraft build
[?] select target: helloworld (qemu/x86_64)
[+] updating index... done!                                                                                    [11.1s]
[+] finding core/unikraft:stable... done!                                                                       [0.0s]
[+] pulling core/unikraft:stable         ••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••• 100% [1.3s]
[+] configuring helloworld (qemu/x86_64) ••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••• 100% [0.9s]
[+] building helloworld (qemu/x86_64)    ••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••• 100% [3.2s]

[●] Build completed successfully!

 └─ kernel: .unikraft/build/helloworld_qemu-x86_64 (242 kB)

Learn how to package your unikernel with: kraft pkg --help

Once the image is created, running kraft run -W . will execute the helloworld application running on the unikernel.

$ kraft run -W .
 i  using arch=x86_64 plat=qemu
Powered by
o.   .o       _ _               __ _
Oo   Oo  ___ (_) | __ __  __ _ ' _) :_
oO   oO ' _ `| | |/ /  _)' _` | |_|  _)
oOo oOO| | | | |   (| | | (_) |  _) :_
 OoOoO ._, ._:_:_,\_._,  .__,_:_, \___)
                  Kiviuq 0.20.0~5a22d73
Hello from Unikraft!

This message shows that your installation appears to be working correctly.

For more examples and ideas, visit: https://unikraft.org/docs/

In this way, the documentation refers to the "application running on a unikernel" started by the run command as a unikernel instance, so I will use this term from here on. By analogy with docker commands, the design lets you create a unikernel instance with kraft run just as you would create a container with docker run.

nginx

That wasn't particularly exciting, so let's try nginx next. The code corresponding to the nginx image is located in library/nginx/[nginx_version], so navigate there.

cd ../nginx/1.25

When building nginx, BuildKit builds the image from the Dockerfile in the same directory. If no reachable buildkitd is running, a temporary buildkitd container is created for the build; since this takes some time, it is recommended to start a buildkitd container in advance and point KraftKit at it.

docker run -d --name buildkitd --privileged moby/buildkit:latest
export KRAFTKIT_BUILDKIT_HOST=docker-container://buildkitd

Run the build. You can also specify the platform and architecture as options in advance.

$ kraft build --plat qemu --arch x86_64

Since nginx listens on port 80 inside the unikernel instance, specify a port mapping so that traffic to port 8080 on the host is routed there. As with the docker command, port mapping is specified as -p [host_port]:[instance_port].

$ kraft run -W -p 8080:80
 i  using arch=x86_64 plat=qemu
en1: Added
en1: Interface is up
Powered by Unikraft Kiviuq (0.20.0~5a22d73)
en1: Set IPv4 address 10.0.2.15 mask 255.255.255.0 gw 10.0.2.2

Open another terminal and access port 8080 on the host side. You will connect to nginx on the unikernel and receive a response.

$ curl 0.0.0.0:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Just like with Docker, you can check running unikernel instances using kraft ps.

$ kraft ps
NAME             KERNEL                       ARGS            CREATED       STATUS   MEM  PORTS                 PLAT
xenodochial_ham  project://nginx:qemu/x86_64  /usr/bin/nginx  1 minute ago  running  64M  0.0.0.0:8080->80/tcp  qemu/x86_64

By the way, the kraft command allows you to change the log level using the --log-level [log_level] option or the KRAFTKIT_LOG_LEVEL environment variable. For example, by specifying debug, you can see the details of the qemu process running on the host.

$ kraft run --log-level debug -W -p 8080:80
 D  kraftkit version=0.11.6
 D  determining how to proceed given provided input and context
 D  using compatible context candidate=kraftfile-unikraft
 i  using arch=x86_64 plat=qemu
 D  qemu-system-x86_64 -version
 D  qemu-system-x86_64 -accel help
 D  qemu-system-x86_64 -append /usr/bin/nginx  -cpu qemu64,+pdpe1gb,+rdrand,-vmx,-svm -daemonize -device virtio-net-pci,mac=02:b0:b0:92:ca:01,netdev=hostnet0 -device pvpanic -device sga -display none -kernel /home/ubuntu/catalog/library/nginx/1.25/.unikraft/build/nginx_qemu-x86_64 -machine pc -m size=64M -monitor unix:/home/ubuntu/.local/share/kraftkit/runtime/fe81b0bfb8f3/mon.sock,server,nowait -name fe81b0bfb8f3 -netdev user,id=hostnet0,hostfwd=tcp::8080-:80 -nographic -no-reboot -S -parallel none -pidfile /home/ubuntu/.local/share/kraftkit/runtime/fe81b0bfb8f3/machine.pid -qmp unix:/home/ubuntu/.local/share/kraftkit/runtime/fe81b0bfb8f3/ctrl.sock,server,nowait -qmp unix:/home/ubuntu/.local/share/kraftkit/runtime/fe81b0bfb8f3/evnt.sock,server,nowait -rtc base=utc -serial file:/home/ubuntu/.local/share/kraftkit/runtime/fe81b0bfb8f3/vm.log -smp cpus=1,threads=1,sockets=1 -vga none
en1: Added
en1: Interface is up
Powered by Unikraft Kiviuq (0.20.0~5a22d73)
en1: Set IPv4 address 10.0.2.15 mask 255.255.255.0 gw 10.0.2.2

As mentioned before, since Unikraft manages unikernels using a VMM, running the kraft run command starts a VMM process like qemu on the host, which serves as the actual unikernel instance. You can verify the qemu process corresponding to the unikernel instance by running the ps command.

$ ps aux | grep qemu
ubuntu     95017  1.3  2.9 882184 118428 ?       Sl   08:33   0:00 qemu-system-x86_64 -append /usr/bin/nginx  -cpu qemu64,+pdpe1gb,+rdrand,-vmx,-svm -daemonize -device virtio-net-pci,mac=02:b0:b0:92:ca:01,netdev=hostnet0 -device pvpanic -device sga -display none -kernel /home/ubuntu/catalog/library/nginx/1.25/.unikraft/build/nginx_qemu-x86_64 -machine pc -m size=64M -monitor unix:/home/ubuntu/.local/share/kraftkit/runtime/fe81b0bfb8f3/mon.sock,server,nowait -name fe81b0bfb8f3 -netdev user,id=hostnet0,hostfwd=tcp::8080-:80 -nographic -no-reboot -S -parallel none -pidfile /home/ubuntu/.local/share/kraftkit/runtime/fe81b0bfb8f3/machine.pid -qmp unix:/home/ubuntu/.local/share/kraftkit/runtime/fe81b0bfb8f3/ctrl.sock,server,nowait -qmp unix:/home/ubuntu/.local/share/kraftkit/runtime/fe81b0bfb8f3/evnt.sock,server,nowait -rtc base=utc -serial file:/home/ubuntu/.local/share/kraftkit/runtime/fe81b0bfb8f3/vm.log -smp cpus=1,threads=1,sockets=1 -vga none

The application (unikernel) that runs on the unikernel instance is generated in .unikraft/build/ during the kraft build execution. If we look at the contents for nginx, we can see that various libraries have been generated, and among them is the executable file nginx_qemu-x86_64 corresponding to the specified platform and architecture.

ls -1 .unikraft/build/
...
nginx_qemu-x86_64
nginx_qemu-x86_64.bootinfo
nginx_qemu-x86_64.bootinfo.cmd
nginx_qemu-x86_64.cmd
nginx_qemu-x86_64.dbg
nginx_qemu-x86_64.dbg.cmd
nginx_qemu-x86_64.dbg.gdb.py
nginx_qemu-x86_64.multiboot.cmd
provided_syscalls.in.cmd
provided_syscalls.in.new.cmd
uk-gdb.py

In the qemu process, nginx_qemu-x86_64 is passed to the -kernel argument. The nginx application itself is defined in the Dockerfile located in the same directory. It uses a multi-stage build where the final image is based on scratch, copying over the executable, log files, configuration files, and dependency libraries required for nginx to function.

Dockerfile
FROM nginx:1.25.3-bookworm AS build

# These are normally symlinks to /dev/stdout and /dev/stderr, which don't
# (currently) work with Unikraft. We remove them, such that Nginx will create
# them by hand.
RUN rm /var/log/nginx/error.log
RUN rm /var/log/nginx/access.log

FROM scratch

# Nginx binaries, modules, configuration, log and runtime files
COPY --from=build /usr/sbin/nginx /usr/bin/nginx
COPY --from=build /usr/lib/nginx /usr/lib/nginx
COPY --from=build /etc/nginx /etc/nginx
COPY --from=build /etc/passwd /etc/passwd
COPY --from=build /etc/group /etc/group
COPY --from=build /var/log/nginx /var/log/nginx
COPY --from=build /var/cache/nginx /var/cache/nginx
COPY --from=build /var/run /var/run

# Libraries
COPY --from=build /lib/x86_64-linux-gnu/libcrypt.so.1 /lib/x86_64-linux-gnu/libcrypt.so.1
COPY --from=build /lib/x86_64-linux-gnu/libpcre2-8.so.0 /lib/x86_64-linux-gnu/libpcre2-8.so.0
COPY --from=build /lib/x86_64-linux-gnu/libssl.so.3 /lib/x86_64-linux-gnu/libssl.so.3
COPY --from=build /lib/x86_64-linux-gnu/libcrypto.so.3 /lib/x86_64-linux-gnu/libcrypto.so.3
COPY --from=build /lib/x86_64-linux-gnu/libz.so.1 /lib/x86_64-linux-gnu/libz.so.1
COPY --from=build /lib/x86_64-linux-gnu/libc.so.6 /lib/x86_64-linux-gnu/libc.so.6
COPY --from=build /lib64/ld-linux-x86-64.so.2 /lib64/ld-linux-x86-64.so.2
COPY --from=build /etc/ld.so.cache /etc/ld.so.cache

# Custom configuration files, including using a single process for Nginx
COPY ./conf/nginx.conf /etc/nginx/nginx.conf

# Web root
COPY ./wwwroot /wwwroot

Kraftfile

In Unikraft, information such as how to build the unikernel and where to source the rootfs is described in a YAML file called a Kraftfile. Just as a Dockerfile lets you reproducibly build the same image with Docker, Unikraft uses the Kraftfile to manage building, packaging, and deploying applications. Refer to the following for the Kraftfile specification:

https://unikraft.org/docs/cli/reference/kraftfile/v0.6
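For orientation, a minimal Kraftfile that builds a unikernel from the Unikraft core might look like the following sketch (a paraphrase of the helloworld app built earlier; verify the exact field names against the v0.6 spec linked above):

```yaml
spec: v0.6

name: helloworld

# Which Unikraft core to build against
unikraft: stable

# VMM/architecture combinations offered at build time
targets:
  - qemu/x86_64
  - xen/x86_64
```

A targets list like this is what makes kraft build prompt you to choose a build target when several are defined.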

Application Catalog

The examples directory of the application catalog repository contains sample application code for various languages, such as a Go HTTP server, Python Flask, and a Rust HTTP server, which serve as helpful references when creating your own applications. Let's take a look at how a Python application works.

https://github.com/unikraft/catalog/tree/main/examples/flask3.0-python3.12-sqlite3

The above directory contains sample code for running a unikernel instance with a Python Flask + SQLite configuration. First, navigate to the directory.

cd catalog/examples/flask3.0-python3.12-sqlite3

Looking at the file structure before building, you can see that the only Unikraft-specific file is the Kraftfile; everything else is a standard file you would use with Flask and SQLite.

$ tree
.
├── Dockerfile
├── Kraftfile
├── README.md
├── init_db.py
├── manifest.json
├── requirements.txt
├── schema.sql
├── server.py
├── static
│   └── css
│       └── style.css
└── templates
    ├── base.html
    ├── create.html
    ├── edit.html
    ├── index.html
    └── post.html

The Dockerfile is configured to copy only the necessary libraries based on scratch.

Dockerfile
FROM python:3.12.11-bookworm AS build

WORKDIR /app

COPY requirements.txt /app/
RUN pip3 install -r requirements.txt --no-cache-dir

COPY . /app/
RUN python3 init_db.py

FROM scratch

# SQLite library
COPY --from=build /usr/lib/x86_64-linux-gnu/libsqlite3.so.0 /usr/lib/x86_64-linux-gnu/libsqlite3.so.0

# Python libraries
COPY --from=build /usr/local/lib/python3.12 /usr/local/lib/python3.12

# Application files
COPY --from=build /app /app

The Kraftfile also has a simple structure. Based on python:3.12, it creates a rootfs from the Dockerfile and executes /app/server.py on the unikernel.

Kraftfile
spec: v0.6

name: flask3.0-python3.12-sqlite3

runtime: python:3.12

rootfs: ./Dockerfile

cmd: ["/usr/bin/python3", "/app/server.py"]

The Kraftfile specifies python:3.12 as the runtime, but in my verification environment it failed to start with a QEMU error, so I will build the runtime manually.
The code corresponding to Python 3.12 is located in catalog/library/python/3.12, so move there and build it.

cd ~/catalog/library/python/3.12
kraft build

The built image can be used in other projects by packaging it with the kraft pkg command. You can specify any name, such as myapp/python:3.12.

kraft pkg --as oci --name myapp/python:3.12

You can verify the created package with kraft pkg ls --apps.

$ kraft pkg ls --apps
TYPE  NAME                     VERSION  FORMAT  CREATED         UPDATED         PULLED          MANIFEST  INDEX    PLAT         SIZE
app   myapp/python             3.12     oci     44 seconds ago  43 seconds ago  43 seconds ago  0bd620f   6acad99  qemu/x86_64  71 MB

Return to the original flask + sqlite directory and replace the runtime in the Kraftfile with the one you created.

Kraftfile
- runtime: python:3.12
+ runtime: myapp/python:3.12

Run the unikernel instance.

$ kraft run -W --rm -p 8080:8080 --plat qemu --arch x86_64 -M 512M
 i  using arch=x86_64 plat=qemu
[+] building rootfs via Dockerfile... done!                                                                                                                                                                                      x86_64 [2.7s]
en1: Added
en1: Interface is up
Powered by Unikraft Kiviuq (0.20.0~5a22d73)
 * Serving Flask app 'server'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8080
 * Running on http://0.0.0.0:8080
Press CTRL+C to quit
en1: Set IPv4 address 10.0.2.15 mask 255.255.255.0 gw 10.0.2.2

You can access the Flask app from a browser at http://[host ip]:8080.

Since Unikraft features integration with Docker, you can run applications on Unikraft instances relatively easily by providing a Dockerfile as shown here. In other words, if an application is already containerized, it can be integrated into a unikernel instance with minimal effort. Of course, modifications to the Dockerfile and verification of the operation on the unikernel will be necessary.
Note that since a unikernel instance is not a container, you cannot start a shell in the instance environment to attach and debug as you would with docker exec.

What is Happening Behind the Scenes of the Build

When you build code under the library directory (such as nginx) and code under the examples directory, you will notice a difference in the artifacts that are generated. Where does this difference come from?
The following guide explains what happens in these two types of builds, so let's examine the differences based on it.

https://unikraft.org/guides/catalog-behind-the-scenes

When you execute kraft build for library/nginx/1.25 in the application catalog, artifacts are created under .unikraft/build. These include nginx_qemu-x86_64, which corresponds to the kernel image, and initramfs-x86_64.cpio, a cpio archive corresponding to the root filesystem on the unikernel (initramfs).

$ kraft build --plat qemu --arch x86_64

[●] Build completed successfully!

 ├──── kernel: .unikraft/build/nginx_qemu-x86_64     (15 MB)
 └─ initramfs: .unikraft/build/initramfs-x86_64.cpio (14 MB)
$ file .unikraft/build/nginx_qemu-x86_64
.unikraft/build/nginx_qemu-x86_64: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, stripped

$ file .unikraft/build/initramfs-x86_64.cpio
.unikraft/build/initramfs-x86_64.cpio: ASCII cpio archive (SVR4 with no CRC)

According to the guide, when kraft build is executed for nginx, the kernel file is generated through the following steps:

  • Read the path specified for rootfs from the Kraftfile and generate the root filesystem. In the case of nginx, the rootfs is generated from a Dockerfile.
  • Pack the rootfs into an initial ramdisk (initrd).
  • Build the kernel based on the kernel settings and other information in the Kraftfile.
  • Embed the initrd into the output kernel file.

Comparing these steps with the generated artifacts, it appears that initramfs-x86_64.cpio corresponds to the initial ramdisk, and nginx_qemu-x86_64 corresponds to the final kernel image (unikernel + nginx) with the initrd embedded.

And looking at the options passed to QEMU when kraft run is executed, we can see that this kernel image is passed to the -kernel option.

 qemu-system-x86_64 \
    -append /usr/bin/nginx \
    ...
    -kernel /home/ubuntu/catalog/library/nginx/1.25/.unikraft/build/nginx_qemu-x86_64 \

On the other hand, when you execute kraft build for sample applications under examples (such as httpserver-go1.21), the kernel image is not built, and only the initramfs (cpio archive) is generated.

$ kraft build --plat qemu --arch x86_64
[+] building rootfs via Dockerfile... done!                                                                                                                                                                                      x86_64 [3.8s]

[●] Build completed successfully!

 └─ initramfs: .unikraft/build/initramfs-x86_64.cpio (10 MB)

Learn how to package your unikernel with: kraft pkg --help

This difference stems from the runtime attribute in the Kraftfile. Because an existing unikernel image is specified in runtime, the build does not execute kernel image generation; instead, only the rootfs is generated from the Dockerfile (though it is replaced with mytest built locally in the following example).

Kraftfile
spec: v0.6
name: httpserver-go1.21
runtime: mytest:base
rootfs: ./Dockerfile
cmd: ["/server"]

Checking the QEMU command for kraft run, you can see that in addition to -kernel, the built cpio archive is passed to the -initrd option.

qemu-system-x86_64 \
    -append "vfs.fstab=[ \"initrd0:/:extract:::\" ] env.vars=[
  \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\" ] -- /server" \
    -initrd .unikraft/build/initramfs-x86_64.cpio \
    -kernel /tmp/kraft-run-166630483/unikraft/bin/kernel \

The kernel image used here is the one specified in runtime; it is downloaded at run time to /tmp/kraft-run-[random characters]/unikraft/bin/kernel in the local environment and passed to QEMU. Since we specified the locally built mytest:base this time, you can confirm that the hashes match by comparing them with sha256sum.

$ sha256sum ~/catalog/library/base/.unikraft/build/base_qemu-x86_64
a71c0669a7fdb205e581fdef1b797fc7604765fc930ec24adc607e81d5b50d01  /home/ubuntu/catalog/library/base/.unikraft/build/base_qemu-x86_64

$ sha256sum /tmp/kraft-run-166630483/unikraft/bin/kernel
a71c0669a7fdb205e581fdef1b797fc7604765fc930ec24adc607e81d5b50d01  /tmp/kraft-run-166630483/unikraft/bin/kernel

Furthermore, because the kernel image from the runtime is used, there is no need to execute kraft build for this code; you can directly run the unikernel instance with kraft run. When kraft run is executed, the following steps are performed:

  • The kernel image specified in runtime is retrieved and placed in /tmp. In the above case, it is mytest:base.
  • The rootfs is generated from the Dockerfile.
  • The rootfs is packed into an initramfs (initrd) file.
  • The initramfs and the kernel image are passed to QEMU to start the unikernel instance.

The contents of the generated initramfs can be checked with the following cpio command.

$ cpio -itv < .unikraft/build/initramfs-x86_64.cpio
drwxr-xr-x   0 root     root            0 Sep 21 14:21 ./lib
drwxr-xr-x   0 root     root            0 Sep 21 14:21 ./lib/x86_64-linux-gnu
-rwxr-xr-x   1 root     root      1922136 Sep 30  2023 ./lib/x86_64-linux-gnu/libc.so.6
drwxr-xr-x   0 root     root            0 Sep 21 14:21 ./lib64
-rwxr-xr-x   1 root     root       210968 Sep 30  2023 ./lib64/ld-linux-x86-64.so.2
-rwxr-xr-x   1 root     root      8005392 Sep 21 14:21 ./server
19804 blocks

Therefore, in the sample code under examples, it is inferred that the rootfs containing the application is not included within the kernel image itself but is dynamically passed and loaded when the unikernel instance runs. While it is common for unikernels to have the application code integrated into the kernel image, Unikraft's filesystem design allows for dynamically loading external filesystems at runtime as shown above. We will look at the mechanism of the Unikraft filesystem in more detail in the following section.

Unikraft Filesystem

Note: This section is a summary of the following article:

https://unikraft.org/docs/cli/filesystem

In a typical OS, a root filesystem is required at /. However, since the libraries necessary for operation are componentized in Unikraft, it is not strictly necessary to set up a root filesystem if the application does not need to read or write files. That said, in most cases, some file I/O occurs, so a unikernel instance can be considered to require a root filesystem. To meet these needs while optimizing performance, Unikraft supports the following three root filesystem formats (the fourth is omitted as it is a combination of the three):

  • Initial Ramdisk Filesystem (initramfs)
  • Embedded Initial Ramdisk Filesystems (einitrds)
  • External Volumes


Variations in combinations of unikernel binary applications and root filesystems. Quoted from Filesystem

  • initramfs
    • The initramfs is a format where the filesystem is passed via the initrd option during QEMU execution. In Unikraft, this can be treated as the root filesystem. Although it is a filesystem, since it is held in memory in Unikraft, it is said to have superior performance compared to the embedded initramfs described below.
  • Embedded initramfs
    • The embedded initramfs (einitrd) is a format where the root filesystem is embedded directly into the kernel image when the unikernel is built. Even though the root filesystem is already integrated into the kernel image, it remains customizable: you can pass a different initramfs via the initrd option when starting with QEMU to override only part of the filesystem. On the other hand, since the original filesystem is built into the kernel, changing it requires a kernel rebuild.
  • External volumes
    • External volumes are used for purposes such as data persistence or sharing data between multiple unikernel instances, similar to volumes (storage) in the context of containers. Also, like the docker command, you can use -v to map a directory or file from the host side into the unikernel instance:
kraft run -v ./html:/nginx/html unikraft.org/nginx:1.25

Considering the examples seen in "What is Happening Behind the Scenes of the Build," in the case of nginx, the kernel image alone was passed to the QEMU -kernel option because the initramfs was embedded in the kernel image. In contrast, for the Go HTTP server, an initramfs was passed via -initrd in addition to the base kernel image. Taking the filesystem mechanism into account, it can be inferred that nginx uses an embedded initramfs where the filesystem is built into the kernel image, while the Go HTTP server uses a base kernel image and passes an initramfs via -initrd at instance startup to override part of the filesystem.

Supporting both the embedding of a filesystem into a kernel image and the dynamic setting of a filesystem during unikernel instance execution is not only convenient but also provides the benefit of streamlining the build process. As you can see if you try it yourself, a unikernel build where no runtime is specified—like with nginx—takes a fair amount of time because it involves building the Dockerfile for rootfs creation plus building the kernel. For example, measuring the build time with the time command took 46 seconds (with Docker cache).
On the other hand, the kraft build for the sample code under examples only performs the Dockerfile build, which shortens the build time. Measuring the Go HTTP server in the same way, it completed in about 3 seconds (with Docker cache).

# nginx
$ time kraft build --plat qemu --arch x86_64

kraft build --plat qemu --arch x86_64  45.49s user 8.58s system 186% cpu 29.011 total


# go http server
$ time kraft build --plat qemu --arch x86_64

kraft build --plat qemu --arch x86_64  2.33s user 0.89s system 42% cpu 7.571 total

In actual development, tasks like modifying code or the Dockerfile followed by executing a build tend to occur frequently, so the efficiency gained by omitting the kernel image build has a significant impact.

The Unikraft filesystem is designed with performance, resource constraints, and security in mind, allowing it to support a variety of use cases by combining the three formats mentioned above. At the same time, it can be said to have the aspect of streamlining the build process by allowing kernel image rebuilds only when necessary.

Managing unikernel instances with Docker + urunc

Those who are already familiar with containers may want to manage unikernel instances with container management tools such as Docker or nerdctl. Since unikernels work quite differently from containers, there is no container runtime for them in the traditional sense; however, runu, an alternative to runc, lets you operate unikernel instances from container management tools like Docker. runu is also part of the Unikraft toolkit, and the necessary steps are described here:

https://unikraft.org/docs/getting-started/integrations/container-runtimes

runu is developed in the kraftkit repository alongside the kraft command and can be downloaded from its releases page. However, when I followed the documentation's steps myself, the application did not work correctly due to errors. So instead of runu, I will use urunc, another container runtime built specifically for unikernels.

https://urunc.io/

First, follow the urunc installation instructions. Note that the snippet below covers only the first step, installing runc, which urunc depends on; the remaining steps in the instructions install urunc itself and its other dependencies.

RUNC_VERSION=$(curl -L -s -o /dev/null -w '%{url_effective}' "https://github.com/opencontainers/runc/releases/latest" | grep -oP "v\d+\.\d+\.\d+" | sed 's/v//')
wget -q https://github.com/opencontainers/runc/releases/download/v$RUNC_VERSION/runc.$(dpkg --print-architecture)
sudo install -m 755 runc.$(dpkg --print-architecture) /usr/local/sbin/runc
rm -f ./runc.$(dpkg --print-architecture)

The documentation describes several ways to build unikernels for urunc; here we take the approach of packaging the unikernel containing the application into an OCI-format image. For constructing the unikernel, the easiest option seems to be bunny, a tool developed to simplify the unikernel build process, so download it from that repository's releases.

As an example, we will build and run the Go HTTP server provided in the application catalog as an OCI image. The files are the same, but the runtime in the Kraftfile has been changed to mytest:base, which was built in the local environment.

Kraftfile
spec: v0.6

name: httpserver-go1.21

#runtime: base:latest
runtime: mytest:base

rootfs: ./Dockerfile

cmd: ["/server"]

To turn an application into an OCI image with bunny, you describe the build in a YAML file called a bunnyfile. Following the syntax described in the documentation, we write the following:

bunnyfile
#syntax=harbor.nbfc.io/nubificus/bunny:latest
version: v0.1

platforms:
  framework: unikraft
  monitor: qemu
  architecture: x86

rootfs:
  from: local
  path: .unikraft/build/initramfs-x86_64.cpio

kernel:
  from: local
  path: ./base_qemu-x86_64

cmdline: "/server"

The key points are as follows:

  • rootfs: Root filesystem (initramfs)
    • from: Specify local as we are using a local file.
    • path: Specify the path to the initramfs (cpio archive). Create this in advance by running kraft build.
  • kernel: Kernel image
    • from: Specify local as we are using a local file.
    • path: Specify the path to the kernel generated during the mytest/base build. Create it with kraft build and copy it to the working directory in advance.
  • cmdline: Specify the command to run within the unikernel instance. Here, we execute the server binary located in /.

The directory structure at this point is as follows:

.
├── .unikraft
│   ├── build
│   │   └── initramfs-x86_64.cpio
├── Dockerfile
├── Kraftfile
├── README.md
├── base_qemu-x86_64
├── bunnyfile
└── server.go

Build the OCI image with docker build -f bunnyfile -t [image_name] . (the trailing dot is the build context).

$ docker build -f bunnyfile -t urunc-test .
[+] Building 3.4s (9/9) FINISHED                                                                                                                                                                                               docker:default
 => [internal] load build definition from bunnyfile                                                                                                                                                                                      0.0s
 => => transferring dockerfile: 307B                                                                                                                                                                                                     0.0s
 => resolve image config for docker-image://harbor.nbfc.io/nubificus/bunny:latest                                                                                                                                                        2.6s
 => CACHED docker-image://harbor.nbfc.io/nubificus/bunny:latest@sha256:5cfa082f077dce1ed11819c8ccd93c2075bd985bfc767723a39b31d149c08b0b                                                                                                  0.1s
 => => resolve harbor.nbfc.io/nubificus/bunny:latest@sha256:5cfa082f077dce1ed11819c8ccd93c2075bd985bfc767723a39b31d149c08b0b                                                                                                             0.0s
 => Internal:Read-bunnyfile                                                                                                                                                                                                              0.0s
 => => transferring context: 31B                                                                                                                                                                                                         0.0s
 => local://context                                                                                                                                                                                                                      0.0s
 => => transferring context: 5.67kB                                                                                                                                                                                                      0.0s
 => CACHED copy /base_qemu-x86_64 /.boot/kernel                                                                                                                                                                                          0.0s
 => CACHED copy /.unikraft/build/initramfs-x86_64.cpio /.boot/rootfs                                                                                                                                                                     0.0s
 => CACHED mkfile /urunc.json                                                                                                                                                                                                            0.0s
 => exporting to image                                                                                                                                                                                                                   0.2s
 => => exporting layers                                                                                                                                                                                                                  0.0s
 => => exporting manifest sha256:b6bb2683ddb2bdbd1e11e33103a02528b87d7052d75b287ca79175a417c64a6b                                                                                                                                        0.0s
 => => exporting config sha256:a64b4081e14780c8d3a3b7374fc06bacfa0619caa4b2ca3bda5c8efe3f7cbeb4                                                                                                                                          0.0s
 => => exporting attestation manifest sha256:b9e2b4d4b68503295d90bafef6997bf63cc03e94c98b377eff7f63fc11c3a760                                                                                                                            0.1s
 => => exporting manifest list sha256:2444d8fb459975dad004751e839c444fe3ce7e2c4e5b3fd2fc458157752908e2                                                                                                                                   0.0s
 => => naming to docker.io/library/urunc-test:latest                                                                                                                                                                                     0.0s
 => => unpacking to docker.io/library/urunc-test:latest                                                                                                                                                                                  0.0s

This builds a Docker image for the unikernel instance.

$ docker image ls
REPOSITORY                                           TAG       IMAGE ID       CREATED         SIZE
urunc-test                                           latest    2444d8fb4599   5 days ago      17.8MB

To start it, execute docker run by specifying --runtime io.containerd.urunc.v2.
Once started, the same output as when running kraft run is displayed, and the unikernel instance starts on QEMU.

$ docker run --rm -it -p 8080:8080 --runtime io.containerd.urunc.v2 urunc-test

SeaBIOS (version 1.15.0-1)


iPXE (https://ipxe.org) 00:02.0 C000 PCI2.10 PnP PMM+0FF8B290+0FECB290 C000



Booting from ROM..1: Set IPv4 address 172.17.0.3 mask 255.255.255.0 gw 172.17.0.1
en1: Added
en1: Interface is up
Powered by Unikraft Kiviuq (0.20.0~5a22d73)
Listening on :8080...

Confirm accessibility by executing curl from another terminal.

$ curl 0.0.0.0:8080
Bye, World!

Running unikernel instances can be checked with docker ps.

$ docker ps
CONTAINER ID   IMAGE                  COMMAND       CREATED          STATUS          PORTS                                         NAMES
273d6e6e94f6   urunc-test             "/server"     43 seconds ago   Up 42 seconds   0.0.0.0:8080->8080/tcp, [::]:8080->8080/tcp   boring_mestorf

Looking at the process tree, we can see that the unikernel instance is being executed by QEMU from the container started with urunc.

$ ps auxwwf

root       47170  0.0  0.2 1233544 9544 ?        Sl   03:53   0:00 /usr/local/bin/containerd-shim-urunc-v2 -namespace moby -id 273d6e6e94f6562249fec21810acb09225c771fb21cf723e0b4b5b98f5c139b1 -address /run/containerd/containerd.sock
root       47190  0.5  2.5 489056 102088 ?       Ssl  03:53   0:00  \_ /usr/bin/qemu-system-x86_64 -m 256M -L /usr/share/qemu -cpu host -enable-kvm -nographic -vga none --sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -kernel /.boot/kernel -net nic,model=virtio,macaddr=7e:02:05:bd:c2:d4 -net tap,script=no,downscript=no,ifname=tap0_urunc -initrd /.boot/rootfs -append Unikraft  env.vars=[ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=273d6e6e94f6 TERM=xterm ] netdev.ip=172.17.0.3/24:172.17.0.1:8.8.8.8   vfs.fstab=[ "initrd0:/:extract:::" ] -- /server

Containers can be deleted as usual with docker rm [container_name]; alternatively, killing the QEMU process also stops the container.
Minor details aside, the tasks you would otherwise perform with kraft run can be replaced almost entirely with docker commands.


Of course, you can also run it in the same way with nerdctl (containerd) by specifying the runtime option.

$ sudo nerdctl build -f bunnyfile -t urunc-test .
$ sudo nerdctl run --rm -it -p 8080:8080 --runtime io.containerd.urunc.v2 urunc-test
$ sudo nerdctl ps
CONTAINER ID    IMAGE                                  COMMAND      CREATED           STATUS    PORTS                     NAMES
aef7c79ae053    docker.io/library/urunc-test:latest    "/server"    30 seconds ago    Up        0.0.0.0:8080->8080/tcp    urunc-test-aef7c

$ sudo nerdctl rm -f urunc-test-aef7c

Incidentally, it is also stated that unikernel instances can be run on Kubernetes by creating a custom RuntimeClass resource.

https://urunc.io/tutorials/How-to-urunc-on-k8s/
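According to the tutorial above, the integration boils down to a RuntimeClass whose handler points at the urunc shim, which pods then reference. A minimal sketch follows (the resource name, handler string, and image name are assumptions based on the tutorial; verify them against your containerd configuration):

```yaml
# RuntimeClass mapping to the urunc containerd shim
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: urunc
handler: urunc
---
# Pod that runs the unikernel image via urunc
apiVersion: v1
kind: Pod
metadata:
  name: urunc-test
spec:
  runtimeClassName: urunc
  containers:
    - name: server
      image: urunc-test:latest   # image name is an assumption
      ports:
        - containerPort: 8080
```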

Inside the Image

Incidentally, what does the OCI image we built for urunc (via bunny) actually contain? Let's extract the image locally and take a look.

docker save urunc-test:latest > urunc.tar
tar xvf urunc.tar

The directory structure complies with OCI.

.
├── blobs
│   └── sha256
│       ├── 2444d8fb459975dad004751e839c444fe3ce7e2c4e5b3fd2fc458157752908e2
│       ├── 692fcf5bb27916759fb735771aed6628f424df307fd06ab1af6621b17ff6dd0c
│       ├── 91bb784307c2981bdfbb0f67c7d221805040a3af406851c0f9f067c71a5b9768
│       ├── 964acd7b2bb356db3b320d9a959159d4b1710edf5f0403b406b326c0bb496399
│       ├── 9b230e579b70350e5c8d9c4dc55691c34e14fa25c48f7d8364a32db1cbd98adf
│       ├── a64b4081e14780c8d3a3b7374fc06bacfa0619caa4b2ca3bda5c8efe3f7cbeb4
│       ├── b6bb2683ddb2bdbd1e11e33103a02528b87d7052d75b287ca79175a417c64a6b
│       ├── b7c416926de0a1387ed1a20d51dfc003956d40d1d228fee733ae7d6e6d5ea28a
│       └── b9e2b4d4b68503295d90bafef6997bf63cc03e94c98b377eff7f63fc11c3a760
├── index.json
├── manifest.json
├── oci-layout
└── urunc.tar

The index and manifest are slightly different compared to regular images.

index.json
schemaVersion: 2
mediaType: application/vnd.oci.image.index.v1+json
manifests:
  - mediaType: application/vnd.oci.image.index.v1+json
    digest: sha256:2444d8fb459975dad004751e839c444fe3ce7e2c4e5b3fd2fc458157752908e2
    size: 856
    annotations:
      io.containerd.image.name: docker.io/library/urunc-test:latest
      org.opencontainers.image.ref.name: latest
manifest.json
- Config: blobs/sha256/a64b4081e14780c8d3a3b7374fc06bacfa0619caa4b2ca3bda5c8efe3f7cbeb4
  RepoTags:
    - urunc-test:latest
  Layers:
    - blobs/sha256/692fcf5bb27916759fb735771aed6628f424df307fd06ab1af6621b17ff6dd0c
    - blobs/sha256/9b230e579b70350e5c8d9c4dc55691c34e14fa25c48f7d8364a32db1cbd98adf
    - blobs/sha256/b7c416926de0a1387ed1a20d51dfc003956d40d1d228fee733ae7d6e6d5ea28a

Of the blobs, the manifest b6bb26 has the easiest structure to follow.

schemaVersion: 2
mediaType: application/vnd.oci.image.manifest.v1+json
config:
  mediaType: application/vnd.oci.image.config.v1+json
  digest: sha256:a64b4081e14780c8d3a3b7374fc06bacfa0619caa4b2ca3bda5c8efe3f7cbeb4
  size: 1064
layers:
  - mediaType: application/vnd.oci.image.layer.v1.tar+gzip
    digest: sha256:692fcf5bb27916759fb735771aed6628f424df307fd06ab1af6621b17ff6dd0c
    size: 723475
  - mediaType: application/vnd.oci.image.layer.v1.tar+gzip
    digest: sha256:9b230e579b70350e5c8d9c4dc55691c34e14fa25c48f7d8364a32db1cbd98adf
    size: 5234789
  - mediaType: application/vnd.oci.image.layer.v1.tar+gzip
    digest: sha256:b7c416926de0a1387ed1a20d51dfc003956d40d1d228fee733ae7d6e6d5ea28a
    size: 252
annotations:
  com.urunc.unikernel.binary: /.boot/kernel
  com.urunc.unikernel.cmdline: /server
  com.urunc.unikernel.hypervisor: qemu
  com.urunc.unikernel.initrd: /.boot/rootfs
  com.urunc.unikernel.mountRootfs: "false"
  com.urunc.unikernel.unikernelType: unikraft

The image is composed of three layers, which seem to contain the kernel image, the initramfs (rootfs), and the urunc configuration JSON, in that order.

$ mkdir 692
$ tar zxvf 692fcf5bb27916759fb735771aed6628f424df307fd06ab1af6621b17ff6dd0c -C 692
.boot/
.boot/kernel
$ mkdir 9b2
$ tar zxvf 9b230e579b70350e5c8d9c4dc55691c34e14fa25c48f7d8364a32db1cbd98adf -C 9b2
.boot/
.boot/rootfs
$ mkdir b7c
$ tar zxvf b7c416926de0a1387ed1a20d51dfc003956d40d1d228fee733ae7d6e6d5ea28a -C b7c
urunc.json

$ cat b7c/urunc.json | yq -P
com.urunc.unikernel.binary: Ly5ib290L2tlcm5lbA==
com.urunc.unikernel.cmdline: L3NlcnZlcg==
com.urunc.unikernel.hypervisor: cWVtdQ==
com.urunc.unikernel.initrd: Ly5ib290L3Jvb3Rmcw==
com.urunc.unikernel.mountRootfs: ZmFsc2U=
com.urunc.unikernel.unikernelType: dW5pa3JhZnQ=
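The values in urunc.json are the same annotations seen in the manifest, stored base64-encoded; they can be decoded with the standard base64 tool:

```shell
# The urunc.json values are base64-encoded strings; decode them with base64 -d
echo 'cWVtdQ==' | base64 -d && echo              # qemu
echo 'Ly5ib290L2tlcm5lbA==' | base64 -d && echo  # /.boot/kernel
echo 'dW5pa3JhZnQ=' | base64 -d && echo          # unikraft
```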

The rest corresponds to Provenance attestations generated by BuildKit.

_type: https://in-toto.io/Statement/v0.1
predicateType: https://slsa.dev/provenance/v0.2
subject:
  - name: pkg:docker/urunc-test@latest?platform=linux%2Famd64
    digest:
      sha256: b6bb2683ddb2bdbd1e11e33103a02528b87d7052d75b287ca79175a417c64a6b
predicate:
  builder:
    id: ""
  buildType: https://mobyproject.org/buildkit@v1
  materials:
    - uri: pkg:docker/harbor.nbfc.io/nubificus/bunny@latest
      digest:
        sha256: 5cfa082f077dce1ed11819c8ccd93c2075bd985bfc767723a39b31d149c08b0b
  invocation:
    configSource:
      entryPoint: bunnyfile
    parameters:
      frontend: gateway.v0
      args:
        cmdline: harbor.nbfc.io/nubificus/bunny:latest
        source: harbor.nbfc.io/nubificus/bunny:latest
      locals:
        - name: context
    environment:
      platform: linux/amd64
  metadata:
    buildInvocationID: l0b1ktu7bxabnx09g83dy76hn
    buildStartedOn: "2025-09-22T03:50:51.232975136Z"
    buildFinishedOn: "2025-09-22T03:50:54.279964741Z"
    completeness:
      parameters: true
      environment: true
      materials: false
    reproducible: false
    https://mobyproject.org/buildkit@v1#metadata:
      vcs:
        localdir:context: examples/httpserver-go1.21
        localdir:dockerfile: examples/httpserver-go1.21
        revision: 5a22d73139c02021e863f637383e1dc94d965564
        source: https://github.com/unikraft/catalog

In images built with Docker, the layers contain the filesystem changes produced by the commands defined in the Dockerfile. In OCI images built for urunc, by contrast, the kernel image and rootfs specified at build time are stored directly as layers, which is a major difference.

Miscellaneous

Running unikernel instances with Firecracker

In all of the verifications above we used QEMU as the VMM, but Firecracker can be used instead. The procedure is documented below, so let's follow it.

https://unikraft.org/guides/catalog-using-firecracker

The Firecracker version mentioned in the procedure is quite old, so here we install the latest version at the time of writing, v1.13.

cd /tmp
wget https://github.com/firecracker-microvm/firecracker/releases/download/v1.13.1/firecracker-v1.13.1-x86_64.tgz
tar zxvf firecracker-v1.13.1-x86_64.tgz
sudo cp release-v1.13.1-x86_64/firecracker-v1.13.1-x86_64 /usr/local/bin/firecracker

Download the base for Firecracker. --plat fc corresponds to Firecracker.

kraft pkg pull -w base unikraft.org/base:latest --plat fc --arch x86_64

Build the kernel for the nginx application.

cd catalog/library/nginx/1.25
kraft build --plat fc --arch x86_64 .

Create a network interface for Firecracker.

sudo ip tuntap add dev tap0 mode tap
sudo ip address add 172.45.0.1/24 dev tap0
sudo ip link set dev tap0 up

Create fc-x86_64.json. Specify the built kernel .unikraft/build/nginx_fc-x86_64 in kernel_image_path, etc.

fc-x86_64.json
{
  "boot-source": {
    "kernel_image_path": ".unikraft/build/nginx_fc-x86_64",
    "boot_args": ".unikraft/build/nginx_fc-x86_64 netdev.ip=172.45.0.2/24:172.45.0.1 -- /usr/bin/nginx"
  },
  "drives": [],
  "machine-config": {
    "vcpu_count": 1,
    "mem_size_mib": 128,
    "smt": false,
    "track_dirty_pages": false
  },
  "cpu-config": null,
  "balloon": null,
  "network-interfaces": [
    {
      "iface_id": "net1",
      "guest_mac":  "06:00:ac:10:00:02",
      "host_dev_name": "tap0"
    }
  ],
  "vsock": null,
  "logger": {
    "log_path": "/tmp/firecracker.log",
    "level": "Debug",
    "show_level": true,
    "show_log_origin": true
  },
  "metrics": null,
  "mmds-config": null,
  "entropy": null
}

Create the log file and execute.

$ touch /tmp/firecracker.log
$ sudo firecracker --api-sock /tmp/firecracker.socket --config-file fc-x86_64.json
2025-09-20T08:57:04.317170231 [anonymous-instance:main] Running Firecracker v1.13.1
2025-09-20T08:57:04.317261152 [anonymous-instance:main] Listening on API socket ("/tmp/firecracker.socket").
1: Set IPv4 address 172.45.0.2 mask 255.255.255.0 gw 172.45.0.1
en1: Added
en1: Interface is up
Powered by Unikraft Kiviuq (0.20.0~5a22d73)

Execute curl targeting the above IP address from another terminal.

$ curl http://172.45.0.2:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

You can confirm that Firecracker-related processes are running using ps.

root      102247  0.0  0.1  11904  5652 pts/0    S+   08:57   0:00 sudo firecracker --api-sock /tmp/firecracker.socket --config-file fc-x86_64.json
root      102248  0.0  0.0  11904   896 pts/2    Ss   08:57   0:00 sudo firecracker --api-sock /tmp/firecracker.socket --config-file fc-x86_64.json
root      102249  0.7  1.1 139028 46092 pts/2    Sl+  08:57   0:00 firecracker --api-sock /tmp/firecracker.socket --config-file fc-x86_64.json

Here the flow was to generate the unikernel with the kraft build command and then launch it with the firecracker command; as shown above, you can manage unikernel instances with VMMs other than QEMU.

compose command

The kraft command includes a kraft compose sub-command that corresponds to docker compose.

https://unikraft.org/docs/cli/reference/kraft/compose

Its purpose is the same as that of docker compose: declaratively describing and managing multiple unikernel instances together. Example compose files are hard to find, but the following may be useful; judging from them, the format appears to be largely compatible with docker compose files.

https://github.com/unikraft-cloud/examples/blob/main/tyk/compose.yaml
https://github.com/unikraft-cloud/examples/blob/main/wordpress-compose/compose.yaml
https://github.com/unikraft-cloud/examples/blob/main/nginx-flask-mongo/compose.yaml
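Based on those examples, a minimal compose.yaml might look like the following sketch (the service name and image are hypothetical placeholders, not a verified kraft compose setup):

```yaml
# Hypothetical minimal compose.yaml for kraft compose (docker compose style)
services:
  httpserver:
    image: httpserver-go1.21:latest   # hypothetical image name
    ports:
      - "8080:8080"
```

Such a file would then be managed with kraft compose up and kraft compose down, mirroring the docker compose workflow.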

app elfloader

With unikernels in general, an application must be integrated into the unikernel itself and built into the kernel image before it can run; you cannot simply take an ELF executable built for Linux and run it on a unikernel as is. Yet in the verifications so far (the Flask sample, for instance), a rootfs containing the Python application was created from a Dockerfile and passed to QEMU together with an existing unikernel image. This makes it look as though the application runs on the unikernel instance without being built for it, but in reality Unikraft provides a binary-compatibility layer that enables it to execute Linux ELFs. This is described in the compatibility documentation below:

https://unikraft.org/docs/concepts/compatibility

The core parts seem to be the following two:

  • syscall shim: Responsible for mapping system calls to Unikraft functions.
  • app-elfloader: Responsible for loading and analyzing Linux ELFs on unikernels.

The syscall shim is a library, so it is hard to observe directly, but app-elfloader has its own repository and is easier to grasp. Since it is responsible for executing Linux ELFs on unikernels, applications should not function without it. Indeed, looking at the code under library in the catalog repository, you can confirm that most Kraftfiles load app-elfloader as a template.

Kraftfile
template:
  source: https://github.com/unikraft/app-elfloader.git
  version: staging

Conversely, if app-elfloader is not included, the ELF cannot be executed and an error should occur. Let's verify this by commenting out that section of the Kraftfile and rebuilding.

$ cd catalog/library/nginx/1.25

# Comment out the above part using vim, etc.
$ vim Kraftfile

# Delete previous cache and build settings
$ rm .config.nginx_*
$ rm -rf .unikraft

# Build
$ kraft build --plat qemu --arch x86_64

# Start the unikernel instance
$ kraft run -W --rm -p 8080:8080 --plat qemu --arch x86_64 .
 i  using arch=x86_64 plat=qemu
en1: Added
en1: Interface is up
Powered by Unikraft Kiviuq (0.20.0~5a22d73)
weak main() called. Symbol was not replaced!

As expected, nginx fails to run. The weak main() called. Symbol was not replaced! message appears because, with the elfloader absent, nothing replaces Unikraft's weak main symbol, so the placeholder main runs instead.

Pushing and Pulling Packages to/from OCI Repositories

The kraft pkg command allows you to package kernel images, and adding the --as oci flag saves them in OCI format. Since these are treated as OCI images, they can be pushed to and pulled from container registries that support OCI images, such as ghcr.io.
Let's test this using ghcr.io.

First, log in to the container registry with kraft login ghcr.io -u USERNAME -t [github token]. Note that no message is displayed regardless of success or failure.

kraft login ghcr.io -u USERNAME -t [token]

Set the package name to ghcr.io/[username]/[image_name] and push it with kraft pkg push.

kraft pkg --as oci --name ghcr.io/git-ogawa/flask:latest
kraft pkg push ghcr.io/git-ogawa/flask:latest

Once the push is complete, the image can be pulled with other container management tools such as Docker. Note that the platform is registered as [vmm]/[arch] at build time, so the pull fails with an image-not-found error unless you specify the platform explicitly with the --platform flag.

$ docker pull --platform qemu/x86_64 ghcr.io/git-ogawa/flask

Supported VMMs and Architectures

This is described in the Compatibility section of the kraftkit repository. Currently, only QEMU, Xen, and Firecracker are supported.

https://github.com/unikraft/kraftkit?tab=readme-ov-file#compatibility
