
On Bare Metal with Docker

More than documentation, this is a worked example of installing Tinkerbell in a homelab. The homelab is made up of 10 Intel NUCs, one of which is picked to be the Provisioner machine running:

  1. Nginx
  2. Tink Server
  3. Tink CLI
  4. PostgreSQL
  5. And everything that runs as part of the docker-compose in sandbox

This page is inspired by Aaron, a community member, who wrote "Tinkerbell or iPXE boot on OVH".

In this project we will use Sandbox and everything it depends on. Pick a server, a laptop, or as in this example, an Intel NUC.

This guide also provides a more detailed, less automated explanation of what happens under the hood in the more automated guides.


This guide assumes:

  • You are familiar with the underlying operating system you decided to use.
  • You can access the device where you want to install Tinkerbell Provisioner using SSH or Serial console.

Getting Tinkerbell

To get Tinkerbell, clone the sandbox repository or download the latest release. At the time of writing it is v0.5.0.

git clone

git clone https://github.com/tinkerbell/sandbox.git
cd sandbox

archive download

ORG_NAME=tinkerbell
REPO_NAME=sandbox
LATEST_VERSION=$(curl -s https://api.github.com/repos/${ORG_NAME}/${REPO_NAME}/releases/latest | grep "tag_name" | cut -d'v' -f2 | cut -d'"' -f1)
curl -L -o ${REPO_NAME}.tar.gz https://github.com/${ORG_NAME}/${REPO_NAME}/archive/v${LATEST_VERSION}.tar.gz
tar xf ${REPO_NAME}.tar.gz
cd ${REPO_NAME}-${LATEST_VERSION} # something like sandbox-0.5.0

In this case we are using the latest sandbox release, which today is v0.5.0. It is important to check out a specific version and to look at the changelog when you update. Tinkerbell is under active development, but we guarantee as best we can that tags are good and work end-to-end.

Generate the Configuration File

The sandbox sets up Tinkerbell using the setup.sh script. setup.sh relies on a .env file that can be generated by running:

./generate-envrc.sh <network-interface> > .env

In this case, the network-interface is eth1. The output of this command will be stored inside ./.env. It will look like this:
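If you are not sure which interface name to pass, a quick way to list the interfaces known to the kernel (on Linux, via the standard /sys/class/net directory) is:

```shell
# List available network interface names; pick the one attached to the
# LAN you want Tinkerbell to provision on (eth1 in this guide).
ls /sys/class/net
```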

# Tinkerbell Stack version

export OSIE_DOWNLOAD_LINK=,c=1aec189,b=master.tar.gz

# Network interface for Tinkerbell's network

# Decide on a subnet for provisioning.
# Tinkerbell should "own" this network space.
# Its subnet should be just large enough to be able to provision your hardware.

# Host IP is used by provisioner to expose different services such as
# tink, boots, etc.
# The host IP should be the first IP in the range, and the Nginx IP
# should be the second address.

# Tink server username and password
export TINKERBELL_TINK_PASSWORD="1efbd196ae2fa3037c25983b1bc46e4c1230d270d21ed522e83a820192677360"

# Docker Registry's username and password
export TINKERBELL_REGISTRY_PASSWORD="e32a696ef314bf10a1e17ff94f08ee711cb9a108667f9739e9c0cee0fadb0e76"

# Tink cli options

# Legacy options, to be deleted:
export FACILITY=onprem
export ROLLBAR_TOKEN=ignored

The ./.env file has some explanatory comments, but there are a few things to note about the contents. The environment variables in the Tinkerbell Stack version block pin the various parts of the stack to a specific version. You can think of it as a release bundle.

If you are developing, or you want to test a different version of a particular tool, say Hegel, you can build and push a Docker image, replace TINKERBELL_TINK_HEGEL_IMAGE with your tag, and you are good to go.
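For example, a minimal sketch of overriding the Hegel image in your .env (the registry and tag below are hypothetical placeholders, not a real build):

```shell
# Hypothetical registry and tag -- substitute your own build.
export TINKERBELL_TINK_HEGEL_IMAGE="registry.example.com/hegel:dev"
echo "$TINKERBELL_TINK_HEGEL_IMAGE"
```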

Tinkerbell needs a static and predictable IP; that's why the script specifies and sets its own with TINKERBELL_HOST_IP. It is used by Boots to serve OSIE, for example. Sandbox also provisions (via Docker Compose) an Nginx server that you can use to serve any file you want (OSIE is served via that Nginx).

If your Tinkerbell host IP or LAN CIDR differs from the defaults, you can set the corresponding environment variables before running the script:
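A sketch of what that might look like; the addresses are hypothetical, and the variable names are assumed to match those in the generated .env:

```shell
# Hypothetical addresses -- adjust to match your own LAN.
# Variable names are assumed to match those in the generated .env.
export TINKERBELL_HOST_IP="192.168.50.4"
export TINKERBELL_CIDR="29"
```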


Install Dependencies

The setup script makes a number of changes to your local environment, so first install the required dependencies. On Ubuntu/Debian:


sudo apt-get update
sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    git \
    gnupg-agent \
    ifupdown \
    jq \
    software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli

sudo curl -L \
    "https://github.com/docker/compose/releases/download/<version>/docker-compose-$(uname -s)-$(uname -m)" \
    -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose


On CentOS:

sudo yum install -y yum-utils jq ifupdown iproute
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli
sudo systemctl start docker

Run the Setup Script

Before running the script, there are a few handy things to know about it.

The script's main responsibility is to set up the network. It also creates a certificate that will be used to set up the registry (this will change soon), and it downloads OSIE and places it inside the Nginx webroot (./deploy/state/webroot/).

You can use the webroot for your own purposes: it is covered by .gitignore, and alongside OSIE you can serve other operating systems that you want to install on your servers, or even public SSH keys (whatever you need a link for).
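For example, a sketch of serving an SSH public key from the webroot (the directory path comes from this guide; the key content and the URL's host are placeholders that depend on your .env):

```shell
# Drop a file into the webroot that Nginx serves (path from this guide).
mkdir -p deploy/state/webroot/misc
echo "ssh-ed25519 AAAAexamplekey user@host" > deploy/state/webroot/misc/id_ed25519.pub
# Machines on the provisioning network can then fetch it, e.g.:
#   curl http://<nginx-ip>/misc/id_ed25519.pub
```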

If you're managing machines on a physical network (as in, not Vagrant VMs), you can set the environment variable TINKERBELL_SKIP_NETWORKING to a non-empty value to bypass virtual networking setup.

Now to execute it, first load the configuration file:

source ./.env

and run it:

sudo ./setup.sh

When the script finishes, you have everything you need to start the Tinkerbell Provisioner stack, and we use docker-compose for that:

cd deploy
docker-compose up -d

Time to Party

At this point, let me point you to the Local Setup with Vagrant guide, because you have everything you need to play with Tinkerbell. Enjoy!