- Prerequisites
- Getting the Firecracker Binary
- Running Firecracker
- Building From Source
- Running the Integration Test Suite
- Appendix A: Setting Up KVM Access
- Appendix B: Setting Up Docker
## Prerequisites

If you need an opinionated way of running Firecracker, create an `i3.metal`
instance using Ubuntu 18.04 on EC2. Firecracker uses
KVM and needs read/write access that can be
granted as shown below:

```bash
sudo setfacl -m u:${USER}:rw /dev/kvm
```
The generic requirements are explained below:
- Linux 4.14+

  Firecracker currently supports physical Linux x86_64 and aarch64 hosts,
  running kernel version 4.14 or later. However, the aarch64 support is not
  feature complete (alpha stage).

- KVM

  Please make sure that:

  - you have KVM enabled in your Linux kernel, and
  - you have read/write access to `/dev/kvm`. If you need help setting up
    access to `/dev/kvm`, you should check out Appendix A.
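
A quick way to confirm that KVM support is enabled in your kernel (assuming
KVM is built as loadable modules, as it is on most distributions) is to look
for the kvm modules and the device node:

```bash
# Check that the KVM modules are loaded and that the /dev/kvm device node exists.
lsmod | grep kvm
ls -l /dev/kvm
```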
Below is a bash script that will check whether your system meets the basic requirements to run Firecracker:
```bash
err="";
[ "$(uname) $(uname -m)" = "Linux x86_64" ] \
  || [ "$(uname) $(uname -m)" = "Linux aarch64" ] \
  || err="ERROR: your system is not Linux x86_64 or Linux aarch64."; \
[ -r /dev/kvm ] && [ -w /dev/kvm ] \
  || err="$err\nERROR: /dev/kvm is inaccessible."; \
(( $(uname -r | cut -d. -f1)*1000 + $(uname -r | cut -d. -f2) >= 4014 )) \
  || err="$err\nERROR: your kernel version ($(uname -r)) is too old."; \
dmesg | grep -i "hypervisor detected" \
  && echo "WARNING: you are running in a virtual machine." \
  && echo "Firecracker is not well tested under nested virtualization."; \
[ -z "$err" ] && echo "Your system looks ready for Firecracker!" || echo -e "$err"
```
## Getting the Firecracker Binary

Firecracker is linked statically against musl, having no library dependencies.
You can just download the latest binary from our release page, and run it on
your x86_64 or aarch64 Linux machine.
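
If you don't know the latest version number, one way to discover it is to
follow GitHub's `releases/latest` redirect. This is a sketch that assumes the
redirect ends in the release tag (e.g. `v0.21.1`) and strips the leading `v`
so the value matches the download command below:

```bash
# Resolve the tag of the latest release by following GitHub's redirect,
# then strip the leading "v" so it matches the download URL format below.
latest=$(basename $(curl -fsSLI -o /dev/null -w %{url_effective} \
  https://github.com/firecracker-microvm/firecracker/releases/latest) | sed 's/^v//')
echo "Latest Firecracker version: ${latest}"
```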
On the EC2 instance, this binary can be downloaded as:

```bash
curl -LOJ https://github.com/firecracker-microvm/firecracker/releases/download/v${latest}/firecracker-v${latest}
```
Rename the binary to "firecracker":

```bash
mv firecracker-v${latest} firecracker
```
Make the binary executable:

```bash
chmod +x firecracker
```
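
As a quick sanity check (assuming your release supports the `--version` flag,
which current releases do), you can ask the binary to print its version:

```bash
# Print the Firecracker version to confirm the binary runs on this host.
./firecracker --version
```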
If, instead, you'd like to build Firecracker yourself, you should check out the Building From Source section in this doc.
## Running Firecracker

In production, Firecracker is designed to be run securely, inside
an execution jail, carefully set up by the `jailer` binary. This is how
our integration test suite does it.
However, if you just want to see Firecracker booting up a guest Linux
machine, you can do that as well.

First, make sure you have the `firecracker` binary available - either
downloaded from our release page, or built from source.
Next, you will need an uncompressed Linux kernel binary, and an ext4 file
system image (to use as rootfs):

- To run an `x86_64` guest, you can download such resources: a kernel and a
  rootfs image.
- To run an `aarch64` guest, download the corresponding `aarch64` kernel and
  rootfs.

If you don't have these at hand, the download step in the second shell prompt
below fetches suitable images for your architecture.
Now, let's open up two shell prompts: one to run Firecracker, and another one
to control it (by writing to the API socket). For the purpose of this guide,
make sure the two shells run in the same directory where you placed the
`firecracker` binary.
In your first shell:

- make sure Firecracker can create its API socket:

  ```bash
  rm -f /tmp/firecracker.socket
  ```

- then, start Firecracker:

  ```bash
  ./firecracker --api-sock /tmp/firecracker.socket
  ```
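
Optionally, before configuring the guest, you can verify from your second
shell that the API server is listening on the socket (a quick sanity check; a
successful request should return HTTP 200 with the instance description):

```bash
# A 200 response here confirms the Firecracker API server is up and reachable.
curl --unix-socket /tmp/firecracker.socket -i -X GET 'http://localhost/'
```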
In your second shell prompt:
- get the kernel and rootfs, if you don't have any available:

  ```bash
  arch=`uname -m`
  dest_kernel="hello-vmlinux.bin"
  dest_rootfs="hello-rootfs.ext4"
  image_bucket_url="https://s3.amazonaws.com/spec.ccfc.min/img"

  if [ ${arch} = "x86_64" ]; then
      kernel="${image_bucket_url}/hello/kernel/hello-vmlinux.bin"
      rootfs="${image_bucket_url}/hello/fsfiles/hello-rootfs.ext4"
  elif [ ${arch} = "aarch64" ]; then
      kernel="${image_bucket_url}/aarch64/ubuntu_with_ssh/kernel/vmlinux.bin"
      rootfs="${image_bucket_url}/aarch64/ubuntu_with_ssh/fsfiles/xenial.rootfs.ext4"
  else
      echo "Cannot run firecracker on $arch architecture!"
      exit 1
  fi

  echo "Downloading $kernel..."
  curl -fsSL -o $dest_kernel $kernel

  echo "Downloading $rootfs..."
  curl -fsSL -o $dest_rootfs $rootfs

  echo "Saved kernel file to $dest_kernel and root block device to $dest_rootfs."
  ```
- set the guest kernel:

  ```bash
  arch=`uname -m`
  kernel_path="hello-vmlinux.bin"

  if [ ${arch} = "x86_64" ]; then
      curl --unix-socket /tmp/firecracker.socket -i \
        -X PUT 'http://localhost/boot-source'   \
        -H 'Accept: application/json'           \
        -H 'Content-Type: application/json'     \
        -d "{
              \"kernel_image_path\": \"${kernel_path}\",
              \"boot_args\": \"console=ttyS0 reboot=k panic=1 pci=off\"
         }"
  elif [ ${arch} = "aarch64" ]; then
      curl --unix-socket /tmp/firecracker.socket -i \
        -X PUT 'http://localhost/boot-source'   \
        -H 'Accept: application/json'           \
        -H 'Content-Type: application/json'     \
        -d "{
              \"kernel_image_path\": \"${kernel_path}\",
              \"boot_args\": \"keep_bootcon console=tty1 reboot=k panic=1 pci=off\"
         }"
  else
      echo "Cannot run firecracker on $arch architecture!"
      exit 1
  fi
  ```
- set the guest rootfs:

  ```bash
  rootfs_path="hello-rootfs.ext4"
  curl --unix-socket /tmp/firecracker.socket -i \
    -X PUT 'http://localhost/drives/rootfs' \
    -H 'Accept: application/json'           \
    -H 'Content-Type: application/json'     \
    -d "{
          \"drive_id\": \"rootfs\",
          \"path_on_host\": \"${rootfs_path}\",
          \"is_root_device\": true,
          \"is_read_only\": false
     }"
  ```
- start the guest machine:

  ```bash
  curl --unix-socket /tmp/firecracker.socket -i \
    -X PUT 'http://localhost/actions'       \
    -H 'Accept: application/json'           \
    -H 'Content-Type: application/json'     \
    -d '{
          "action_type": "InstanceStart"
     }'
  ```
Going back to your first shell, you should now see a serial TTY prompting you
to log into the guest machine. If you used our `hello-rootfs.ext4` image, you
can log in as `root`, using the password `root`.

When you're done, issuing a `reboot` command inside the guest will actually
shut down Firecracker gracefully. This happens because Firecracker doesn't
implement guest power management.
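
Alternatively, for `x86_64` guests you can trigger the same graceful shutdown
from the host side by sending a Ctrl+Alt+Del keyboard event through the API
(this action is x86_64-only and may vary across Firecracker versions):

```bash
# Ask the guest to shut down by injecting a Ctrl+Alt+Del keyboard event
# (available on x86_64 guests only).
curl --unix-socket /tmp/firecracker.socket -i \
  -X PUT 'http://localhost/actions'       \
  -H 'Accept: application/json'           \
  -H 'Content-Type: application/json'     \
  -d '{
        "action_type": "SendCtrlAltDel"
   }'
```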
Note: the default microVM will have 1 vCPU and 128 MiB RAM. If you wish to
customize that (say, 2 vCPUs and 1024 MiB RAM), you can do so before issuing
the `InstanceStart` call, via this API command:
```bash
curl --unix-socket /tmp/firecracker.socket -i \
  -X PUT 'http://localhost/machine-config' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
      "vcpu_count": 2,
      "mem_size_mib": 1024,
      "ht_enabled": false
  }'
```
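
You can read the current machine configuration back at any time to confirm
the change took effect (a quick check against the same socket):

```bash
# Fetch the current machine configuration; the response should reflect
# the vCPU count and memory size set above.
curl --unix-socket /tmp/firecracker.socket -i \
  -X GET 'http://localhost/machine-config'    \
  -H 'Accept: application/json'
```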
## Building From Source

The quickest way to build and test Firecracker is by using our development
tool (`tools/devtool`). It employs a per-architecture Docker container to
store the software toolchain used throughout the development process. If you
need help setting up Docker on your system, you can check out
Appendix B: Setting Up Docker.
Get a copy of the Firecracker sources by cloning our GitHub repo:
```bash
git clone https://github.com/firecracker-microvm/firecracker
```
All development happens on the master branch and we use git tags to mark
releases. If you are interested in a specific release (e.g. v0.10.1), you can
check it out with:

```bash
git checkout tags/v0.10.1
```
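
If you're not sure which releases exist, you can list the tags available in
your clone first:

```bash
# List all release tags known to your local clone.
git tag -l
```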
Within the Firecracker repository root directory:

- with the default musl target:

  ```bash
  tools/devtool build
  ```

- using the gnu target:

  ```bash
  tools/devtool build -l gnu
  ```
This will build and place the two Firecracker binaries at
`build/debug/firecracker` and `build/debug/jailer`. The default build profile
is `debug`. If you want to build the release binaries (optimized and stripped
of debug info), use the `--release` option:

```bash
tools/devtool build --release
```
Extensive usage information about `devtool` and its various functions and
arguments is available via:

```bash
tools/devtool --help
```
The toolchain that Firecracker is tested against, and that is recommended for
building production releases, is the one automatically used when building with
`devtool`. In this configuration, Firecracker is currently built as a static
binary linked against the musl `libc` implementation.
Firecracker also builds using glibc toolchains, such as the default Rust
toolchains provided in certain Linux distributions:

```bash
arch=`uname -m`
cargo build --target ${arch}-unknown-linux-gnu
```
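
To see the difference in practice, you can inspect the linkage of the
resulting binary; a gnu-target build is dynamically linked against glibc,
whereas the devtool/musl build is static. The path below assumes cargo's
default target directory:

```bash
# Inspect the dynamic library dependencies of the gnu-target build;
# a static musl build would instead report "not a dynamic executable"
# or "statically linked".
ldd target/$(uname -m)-unknown-linux-gnu/debug/firecracker
```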
That being said, Firecracker binaries built without `devtool` are always
considered experimental and should not be used in production.
## Running the Integration Test Suite

You can also use our development tool to run the integration test suite:

```bash
tools/devtool test
```
Please note that the test suite is designed to ensure our
SLA parameters as measured on EC2 .metal instances and, as such, some
performance tests may fail when run on a regular desktop machine.
Specifically, don't be alarmed if you see
`tests/integration_tests/performance/test_process_startup_time.py` failing
when not run on an EC2 .metal instance. You can skip performance tests with:

```bash
./tools/devtool test -- --ignore integration_tests/performance
```
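
For quicker iterations, you can also pass pytest arguments through `devtool`
to run a subset of the suite. The test path below is illustrative; adjust it
to an existing test module in your checkout:

```bash
# Run a single test module (example path, relative to the tests/ directory).
./tools/devtool test -- integration_tests/functional/test_api.py
```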
## Appendix A: Setting Up KVM Access

Some Linux distributions use the `kvm` group to manage access to `/dev/kvm`,
while others rely on access control lists. If you have the ACL package for
your distro installed, you can grant your user access via:

```bash
sudo setfacl -m u:${USER}:rw /dev/kvm
```
Otherwise, if access is managed via the `kvm` group:

```bash
[ $(stat -c "%G" /dev/kvm) = kvm ] && sudo usermod -aG kvm ${USER} \
  && echo "Access granted."
```
If none of the above works, you will need to either install the file system
ACL package for your distro and use the `setfacl` command as above, or run
Firecracker as `root` (via `sudo`).
You can check if you have access to `/dev/kvm` with:

```bash
[ -r /dev/kvm ] && [ -w /dev/kvm ] && echo "OK" || echo "FAIL"
```
Note: If you've just added your user to the `kvm` group via `usermod`, don't
forget to log out and then back in, so this change takes effect.
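
If logging out right away is inconvenient, a common workaround is to start a
fresh login shell in the current terminal so it picks up the new group
membership, and then re-run the access check from inside it:

```bash
# Start a new login shell so the freshly added kvm group membership takes
# effect in this terminal; re-run the /dev/kvm access check from inside it.
su - ${USER}
```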
## Appendix B: Setting Up Docker

To get Docker, you can either use the official Docker install instructions, or
the package manager available on your specific Linux distribution:

- on Debian / Ubuntu:

  ```bash
  sudo apt-get update
  sudo apt-get install docker.io
  ```

- on Fedora / CentOS / RHEL / Amazon Linux:

  ```bash
  sudo yum install docker
  ```
Then, for any of the above, you will need to start the Docker daemon and add
your user to the `docker` group:

```bash
sudo systemctl start docker
sudo usermod -aG docker $USER
```
Don't forget to log out and then back in again, so that the group change takes effect.
If you wish to have Docker started automatically after boot, you can:

```bash
sudo systemctl enable docker
```
We recommend testing your Docker configuration by running a lightweight test
container and checking for network connectivity:

```bash
docker pull alpine
docker run --rm -it alpine ping -c 3 amazon.com
```