K3s HA running in AWS
Single-server clusters can meet a variety of use cases, but for environments where uptime of the Kubernetes control plane is critical, you can run K3s in an HA configuration. An HA K3s cluster consists of:
- Two or more server nodes that will serve the Kubernetes API and run other control plane services
- An external datastore (as opposed to the embedded SQLite datastore used in single-server setups); see the example after this list
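To keep the walkthrough short, the steps below install a single server with the default embedded datastore; for a true HA control plane you would point each server at an external datastore using the --datastore-endpoint flag. A minimal sketch, assuming a MySQL-compatible endpoint and placeholder credentials:
# Start a K3s server against an external MySQL datastore
# (username, password, hostname and the k3s database name are placeholders)
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/k3s" \
  --write-kubeconfig-mode 644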
Requirements
- Create and configure 3 EC2 instances (Linux machines); make sure the workers can reach the server (see the note after this list).
- Experience in Kubernetes
- Desire to learn about K3s
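One AWS detail that often trips people up: the worker nodes must be able to reach the server node on TCP port 6443 (the Kubernetes API), and with the default flannel backend the nodes also talk to each other on UDP 8472, so the security groups must allow that traffic. A hedged AWS CLI sketch, assuming a placeholder security group ID and that all three instances sit in the 10.0.0.0/16 VPC range:
# Allow workers to reach the K3s API server (placeholder group ID and CIDR)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 6443 \
  --cidr 10.0.0.0/16
# Allow flannel VXLAN traffic between nodes
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol udp --port 8472 \
  --cidr 10.0.0.0/16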
Server node
Run the command:
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
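If the script finishes without errors, K3s is installed as a systemd service and the server should come up on its own. A quick sanity check before continuing:
# Confirm the K3s server service is active
sudo systemctl status k3s
# After a minute or so the server should report itself as Ready
k3s kubectl get node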
Get the server token by running:
sudo cat /var/lib/rancher/k3s/server/node-token
Worker nodes
- Create 2 new instances; these will be the workers.
- Connect to worker-1
Create the environment variables:
K3S_URL=https://myserver:6443 (the public URL of your server node)
K3S_TOKEN=mytoken (the token you copied from the server node)
To install K3s on the worker nodes and add them to the cluster, run the installation script with the K3S_URL and K3S_TOKEN environment variables set. Here is an example showing how to join a worker node:
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mytoken sh -
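Before moving on to the next worker, it is worth confirming the agent actually joined. On a worker the installer registers the service as k3s-agent, so a quick check is:
# Confirm the K3s agent service is running on this worker
sudo systemctl status k3s-agent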
Go to worker-2 and repeat the process.
Verify the nodes
Go to the server node and run:
k3s kubectl get node
You should see the server node and the 2 worker nodes listed with a Ready status.
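For a slightly deeper check than node status, you can also confirm the bundled system pods came up; these are plain kubectl commands against what K3s installs by default:
# Show node details, including internal IPs and the K3s version
k3s kubectl get nodes -o wide
# The kube-system pods (coredns, traefik, metrics-server, ...) should all reach Running
k3s kubectl get pods -n kube-system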
Create your local kubeconfig
Stay on the server node and run:
cat /etc/rancher/k3s/k3s.yaml
Copy the output.
On your local machine, run
nano ~/.kube/config
and paste the output you copied from the server node. Note that k3s.yaml points the cluster at https://127.0.0.1:6443, so replace 127.0.0.1 with the public IP (or DNS name) of your server node.
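If you prefer to script that last edit, here is a small sketch assuming GNU sed on the local machine and a placeholder public IP; substitute your server's real address:
# Point the copied kubeconfig at the server's public address
# (203.0.113.10 is a placeholder IP)
sed -i 's/127.0.0.1/203.0.113.10/' ~/.kube/config
# Confirm the cluster is reachable from the local machine
kubectl get nodes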
Done! You can now interact with your cluster from your local machine and are ready to apply your first deployment to K3s. Good luck.