Cluster setup with Ansible
For this setup, four Raspberry Pi nodes are available on the network. We will configure one as both a network DNS server and a test node for applying Ansible playbooks. We will configure the remaining three nodes to join our MicroK8s cluster as worker nodes.
Getting started
Ansible documentation introduces three main components of an Ansible environment:
- A control node, where Ansible is installed.
- Managed nodes, remote hosts that Ansible controls.
- Inventory, a list of logically organized managed nodes.
In this parlance, the WSL 2 machine will behave as the control node, and the four Raspberry Pi machines as managed nodes.
Creating the inventory
An Ansible inventory defines the managed nodes to automate, organized into groups so you can run automation tasks on multiple hosts at the same time. With an inventory defined, you can use patterns to select the hosts or groups for Ansible to run against.
For example, with a cluster of four Raspberry Pi systems and a default user `pi`, an inventory might look like this:
[raspberrypis]
192.168.1.100
192.168.1.101
192.168.1.102
192.168.1.103
[raspberrypis:vars]
ansible_user=pi
The `ansible_user` variable ensures connections are established as the `pi` user even when the user on the control node has a different username.
The default location for the inventory file is `/etc/ansible/hosts`. One can use another location and specify that file when running `ansible` commands (here, with `-i ~/.ansible/etc/hosts`):
ansible -i ~/.ansible/etc/hosts raspberrypis -m ping
192.168.1.102 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
192.168.1.101 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
192.168.1.100 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
192.168.1.103 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
A richer format for an inventory file is YAML. Here, we modify the previous example to provide named hosts for each IP:
raspberrypis:
  hosts:
    raspberrypi-0:
      ansible_host: 192.168.1.100
    raspberrypi-1:
      ansible_host: 192.168.1.101
    raspberrypi-2:
      ansible_host: 192.168.1.102
    raspberrypi-3:
      ansible_host: 192.168.1.103
  vars:
    ansible_user: pi
The inventory we'll use throughout this documentation looks like:
pis:
  children:
    canary:
      hosts:
        raspberrypi-0:
          ansible_host: 192.168.1.100
    raspi3s:
      hosts:
        raspberrypi-0:
          ansible_host: 192.168.1.100
        raspberrypi-1:
          ansible_host: 192.168.1.101
        raspberrypi-2:
          ansible_host: 192.168.1.102
        raspberrypi-3:
          ansible_host: 192.168.1.103
    raspi4s:
      hosts:
        vatomouro-0:
          ansible_host: 192.168.1.110
        vatomouro-1:
          ansible_host: 192.168.1.111
        vatomouro-2:
          ansible_host: 192.168.1.112
        vatomouro-3:
          ansible_host: 192.168.1.113
    raspis:
      children:
        raspi3s:
        raspi4s:
    rockpis:
      hosts:
        vrachos-0:
          ansible_host: 192.168.1.120
        vrachos-1:
          ansible_host: 192.168.1.121
        vrachos-2:
          ansible_host: 192.168.1.122
        vrachos-3:
          ansible_host: 192.168.1.123
        vrachos-4:
          ansible_host: 192.168.1.124
        vrachos-5:
          ansible_host: 192.168.1.125
      vars:
        ansible_user: rock
    k8scontrol:
      hosts:
        vatomouro-0:
          ansible_host: 192.168.1.110
        vatomouro-1:
          ansible_host: 192.168.1.111
        vatomouro-2:
          ansible_host: 192.168.1.112
    k8sworkers:
      hosts:
        vatomouro-3:
          ansible_host: 192.168.1.113
        vrachos-0:
          ansible_host: 192.168.1.120
        vrachos-1:
          ansible_host: 192.168.1.121
        vrachos-2:
          ansible_host: 192.168.1.122
    k8sstorage:
      hosts:
        vrachos-3:
          ansible_host: 192.168.1.123
        vrachos-4:
          ansible_host: 192.168.1.124
        vrachos-5:
          ansible_host: 192.168.1.125
    k8s:
      children:
        k8scontrol:
        k8sworkers:
        k8sstorage:
  vars:
    ansible_user: pi
    control_plane_endpoint: k8s.clunacy.dev
    apiserver_vip: 192.168.1.150
    pod_network_cidr: 10.90.0.0/16
    ip_address_pool_addresses: 192.168.1.192/27
This inventory defines a top-level `pis` group whose children include a `canary` test group and a `k8s` group (itself composed of the `k8scontrol`, `k8sworkers`, and `k8sstorage` groups). Most hosts are accessed as the `pi` user, so we define `ansible_user` accordingly; the Rock Pi hosts in `rockpis` override it with `ansible_user: rock`.
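Any of these group names can be used as a pattern when running Ansible. For example, assuming the inventory is saved as `inventory.yml` (the filename used later in this documentation), the following ad-hoc command pings only the `canary` host:
ansible -i inventory.yml canary -m ping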
Creating roles
Echoing the Roles documentation for Ansible:
Roles let you automatically load related vars, files, tasks, handlers, and other Ansible artifacts based on a known file structure. After you group your content in roles, you can easily reuse them and share them with other users.
There are three roles needed to configure the Pis:
- A `common` role, for tasks shared by all nodes. Every node, for example, should be updated and upgraded, and every node should have `snap` and `microk8s` installed.
- A `dnsservers` role, unique to the test node that we will use as our local network DNS server.
- A `workers` role, unique to the `k8s` group, the nodes that will join the `microk8s` cluster as workers.
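Laid out on disk, the pieces described in this documentation are arranged roughly like this (the exact layout of the repository may differ slightly):
ansible/
├── inventory.yml
├── site.yml
├── dns.yml
├── k8sworkers.yml
└── roles/
    ├── common/
    │   └── tasks/
    │       └── main.yml
    ├── dnsservers/
    │   └── tasks/
    │       └── main.yml
    └── workers/
        └── tasks/
            └── main.yml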
NOTE: The WSL 2 machine is behaving as both the MicroK8s cluster control plane and the Ansible control node.
The details of the `common` and `dnsservers` roles are not covered here, but the implementations for each can be viewed in the source code.
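As a rough idea of the `common` role's shape, a sketch of its tasks (based only on the description above, not the actual tasks file) could be:
# Sketch only, based on the description above rather than the actual role:
# update packages, make sure snapd is present, and install MicroK8s as a snap.
- name: Update and upgrade apt packages
  ansible.builtin.apt:
    update_cache: true
    upgrade: dist
  become: true

- name: Ensure snapd is installed
  ansible.builtin.apt:
    name: snapd
    state: present
  become: true

- name: Install MicroK8s
  community.general.snap:
    name: microk8s
    classic: true
  become: true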
The `tasks/main.yml` for the `workers` role is shown below:
--8<-- ansible/roles/workers/tasks/main.yml
Worth highlighting here is the presence of variables that Ansible can populate either from variables files or from the command line at runtime. We will make use of the `microk8s_instance` variable when running the associated playbook.
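To make that concrete, a task in such a role might look roughly like the following sketch (this is not the actual file, which is included from the repository above; it only illustrates how `microk8s_instance` could be consumed):
# Sketch only: join this host to an existing MicroK8s cluster as a worker,
# using a join string supplied at runtime via the microk8s_instance variable.
- name: Join the MicroK8s cluster as a worker
  ansible.builtin.command: "microk8s join {{ microk8s_instance }} --worker"
  become: true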
Setting up playbooks
With our three roles defined, there are three associated playbooks to create:
- `site.yml`, to update all nodes and install `microk8s`,
- `dns.yml`, to create a DNS server (and to update all other nodes to use that server), and
- `k8sworkers.yml`, to join `k8s` nodes to the MicroK8s cluster.
In each, we can refer to associated roles to run their tasks. The `k8sworkers.yml` playbook, for example, does little more than target a set of hosts from the inventory and apply the `workers` role.
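A minimal sketch of what that playbook could look like is shown below; the actual playbook lives in the repository, and this sketch only reflects the behaviour described below:
# Sketch only: apply the workers role to the k8s group by default,
# allowing the target hosts to be overridden at runtime via variable_host.
- name: Join worker nodes to the MicroK8s cluster
  hosts: "{{ variable_host | default('k8s') }}"
  become: true
  roles:
    - workers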
The remaining two playbooks can likewise be viewed in the source code.
In our `k8sworkers.yml` playbook, we can see that, by default, `k8s` hosts are targeted from our inventory, but that `variable_host` can be used to override that setting at runtime. This is useful for testing configuration changes on our `canary` host before rolling them out to the `k8s` hosts.
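For example, assuming the inventory above and a join string obtained as shown in the next section, a canary run might look like this (`<join-url>` is a placeholder):
ansible-playbook -i inventory.yml k8sworkers.yml \
    --extra-vars "variable_host=canary microk8s_instance=<join-url>"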
Running playbooks
Playbooks are run using the `ansible-playbook` CLI:
ansible-playbook -h
For our playbooks, we'll run them with one other argument to indicate the inventory file to use. For example, running the `site.yml` playbook from our `ansible/` repository directory looks like this:
cd ansible/
ansible-playbook -i inventory.yml site.yml
The more interesting playbook is our `k8sworkers.yml` playbook. To add nodes to MicroK8s, the following command must first be run from the cluster master node:
microk8s add-node
It outputs instructions like the following, which include URLs with unique tokens and associated expiry times:
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.1.230:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05
Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 192.168.1.230:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05 --worker
If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.1.230:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05
microk8s join 10.23.209.1:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05
microk8s join 172.17.0.1:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05
To run our playbook, we first need to obtain a join URL from the MicroK8s cluster master, and then provide that URL to the task in the `workers` role, which expects it via the `microk8s_instance` variable (see Creating roles).
Running that playbook might look like this:
ansible-playbook -i inventory.yml k8sworkers.yml \
--extra-vars "microk8s_instance=192.168.1.230:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05"
Here's a handy one-liner that both extends the token expiry time and captures the join URL, by taking the third whitespace-delimited field of the first `microk8s join` line that `add-node` prints:
ansible-playbook -i inventory.yml k8sworkers.yml \
--extra-vars "microk8s_instance=$(microk8s add-node --token-ttl 3600 | grep microk8s | head -1 | cut -d' ' -f3)"