Deploy Kubernetes cluster on AWS EC2 instances using Ansible #30
August 16, 2021
GitHub Project link: https://github.com/Priyankasaggu11929/ansible-k8s-cluster-deploy
Introduction
The following document demonstrates the process and the steps followed to configure a Kubernetes cluster on AWS EC2 instances.
I have used Ansible playbooks to automate the provisioning of the AWS EC2 instances, the security group & key pairs, and the further process of initiating & bootstrapping the Kubernetes cluster on the EC2 instances (as `master` & `worker` nodes) using the `kubeadm` tool.
Prerequisites
- Create an SSH key pair (in my case, I've named it `ansible`):

  ```
  $ ssh-keygen -t rsa
  Generating public/private rsa key pair.
  Enter file in which to save the key (/root/.ssh/id_rsa): /user/.ssh/ansible
  Enter passphrase (empty for no passphrase):
  Enter same passphrase again:
  ```
Tools and Software
Before laying down steps/instructions for provisioning the cluster, here’s a compiled list of the tools/software/services I used.
On the Local Machine
- AWS CLI (configured with the AWS account credentials & set to the required region)
  - region: ap-southeast-2
  - ami-id: ami-0f39d06d145e9bb63 (Ubuntu Server 18.04 LTS (HVM))
  - instance type: t3.small
- Ansible - core 2.11.3
  - amazon.aws.ec2 module
- Python 3.9.5
  - pip 21.2.1
  - Python modules:
On the AWS EC2 instances
Project description
The project uses Ansible to automate the provisioning of Amazon EC2 machines (& other required resources), as well as the further process of bootstrapping the Kubernetes cluster.
The project uses the following two Ansible playbooks:
- `create-cluster.yml`: creates AWS EC2 instances & bootstraps them as cluster nodes using the `kubeadm` tool.
- `delete-cluster.yml`: decommissions the Kubernetes cluster by deleting the EC2 instances along with the security group & key pairs. Also cleans the locally saved private keys, the cluster kubeconfig file, and the Ansible hosts inventory.
`create-cluster.yml` does the following, in order (a hedged sketch of these provisioning tasks follows the list):
- In the specified AWS account (and specified region), it creates an EC2 key pair, saving the private key to the `keys` directory on localhost (in the project directory).
- Determines the default VPC and its subnets in the `ap-southeast-2` region, then randomly selects a subnet from the list to host the EC2 instances.
- Creates a security group to be attached to the EC2 instances.
- In the above selected subnet, creates two EC2 instances (to become the `master` & `worker` nodes later), associated with the security group created above.
- Updates the `inventory/ec2` hosts file with the new master & worker nodes' host IPs.
- Adds the `ansible.pub` SSH public key to the remote master & worker hosts.
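For illustration, here is a minimal sketch of what such provisioning tasks can look like with the `amazon.aws` collection. This is not the project's actual `create-cluster.yml`: the module choices and the variable names (`key_name`, `sg_name`, `vpc_id`, `subnet_id`) are assumptions.

```yaml
# Hypothetical excerpt; names and modules are assumptions, not the project's code.
- name: Create an EC2 key pair
  amazon.aws.ec2_key:
    name: "{{ key_name }}"
  register: keypair

- name: Save the newly generated private key to the local keys/ directory
  ansible.builtin.copy:
    content: "{{ keypair.key.private_key }}"
    dest: "keys/{{ key_name }}.pem"
    mode: "0600"
  when: keypair.changed

- name: Create a security group for the cluster nodes
  amazon.aws.ec2_group:
    name: "{{ sg_name }}"
    description: Security group for the kubeadm cluster nodes
    vpc_id: "{{ vpc_id }}"
    rules:
      - proto: tcp          # SSH for Ansible
        from_port: 22
        to_port: 22
        cidr_ip: 0.0.0.0/0
      - proto: tcp          # Kubernetes API server
        from_port: 6443
        to_port: 6443
        cidr_ip: 0.0.0.0/0

- name: Launch the master & worker instances in the selected subnet
  amazon.aws.ec2:
    key_name: "{{ key_name }}"
    group: "{{ sg_name }}"
    instance_type: t3.small
    image: ami-0f39d06d145e9bb63
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    wait: yes
    count: 2
    instance_tags:
      project: ansible-k8s-cluster-deploy
  register: ec2_nodes
```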
Next, it bootstraps the Kubernetes cluster on the above EC2 instances.
Setup cluster dependencies (`kube-cluster/kube-dependencies.yml`); a hedged sketch of these tasks follows the list:
- On both master & worker EC2 instances:
  - Install `Docker`, the container runtime for the Kubernetes cluster.
  - Install `apt-transport-https`, to allow adding external HTTPS sources to the APT sources list.
  - Add the apt key of the Kubernetes APT repository, for key verification.
  - Add the Kubernetes APT repository to the remote server's APT sources list.
  - Install `kubelet` and `kubeadm`.
- On the master EC2 instance:
  - Install `kubectl` (as only kubectl commands will be run from the master).
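A hedged sketch of these dependency tasks, assuming the Docker package from the Ubuntu archive and the `apt.kubernetes.io` repository that was current at the time of writing; the actual `kube-cluster/kube-dependencies.yml` may differ:

```yaml
# Sketch only; package names and repository URLs are assumptions.
- name: Install Docker and apt-transport-https
  ansible.builtin.apt:
    name:
      - docker.io
      - apt-transport-https
    state: present
    update_cache: yes

- name: Add the Kubernetes APT repository signing key
  ansible.builtin.apt_key:
    url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
    state: present

- name: Add the Kubernetes APT repository to the sources list
  ansible.builtin.apt_repository:
    repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
    state: present

- name: Install kubelet and kubeadm
  ansible.builtin.apt:
    name:
      - kubelet
      - kubeadm
    state: present
    update_cache: yes

# On the master node only:
- name: Install kubectl
  ansible.builtin.apt:
    name: kubectl
    state: present
```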
Initialize the cluster using `kubeadm` on the master node (`kube-cluster/master.yml`); a sketch follows the list:
- On the master EC2 instance:
  - Run `kubeadm init` to initialize the Kubernetes cluster, passing the argument `--pod-network-cidr=10.244.0.0/16` to specify the private subnet from which the pod IPs will be assigned.
  - Create `/home/ubuntu/.kube`, to contain the Kubernetes cluster configuration file.
  - Copy the `/etc/kubernetes/admin.conf` file generated by `kubeadm init` to `/home/ubuntu/.kube/config`.
  - Install flannel (for the pod network).
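A minimal sketch of these master-node tasks; the flannel manifest URL and the exact module choices are assumptions, not necessarily what `kube-cluster/master.yml` does:

```yaml
# Sketch only; assumes the ubuntu user and the flannel-io manifest location.
- name: Initialize the cluster with kubeadm
  become: yes
  ansible.builtin.shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> /tmp/kubeadm-init.log
  args:
    creates: /etc/kubernetes/admin.conf

- name: Create the .kube directory for the ubuntu user
  ansible.builtin.file:
    path: /home/ubuntu/.kube
    state: directory
    owner: ubuntu
    group: ubuntu
    mode: "0755"

- name: Copy admin.conf to the ubuntu user's kubeconfig
  become: yes
  ansible.builtin.copy:
    src: /etc/kubernetes/admin.conf
    dest: /home/ubuntu/.kube/config
    remote_src: yes
    owner: ubuntu
    group: ubuntu

- name: Install the flannel pod network
  become: no
  ansible.builtin.command: kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```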
Add the worker node to the cluster - `kube-cluster/workers.yml`; a sketch follows the list:
- On the master EC2 instance:
  - From the master node, grab the `kubeadm join ...` command using `kubeadm token create --print-join-command`, & set it as an Ansible artifact.
- On the worker EC2 instance:
  - Execute the above `kubeadm join` command to attach it as a worker node in the Kubernetes cluster initiated during the previous steps.
- On the master EC2 instance:
  - Copy the `kubeconfig` file from the master node to the local machine, at `kubeconfig/admin.conf` in the project directory.
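A rough sketch of this join-and-fetch flow; the inventory group names (`masters`, `workers`) and the module choices are assumptions:

```yaml
# Sketch only; group names and task layout are assumptions.
- hosts: masters
  become: yes
  tasks:
    - name: Generate the kubeadm join command
      ansible.builtin.command: kubeadm token create --print-join-command
      register: join_command

- hosts: workers
  become: yes
  tasks:
    - name: Join the worker node to the cluster
      ansible.builtin.command: "{{ hostvars[groups['masters'][0]].join_command.stdout }}"
      args:
        creates: /etc/kubernetes/kubelet.conf

- hosts: masters
  become: yes
  tasks:
    - name: Copy the cluster kubeconfig back to the local machine
      ansible.builtin.fetch:
        src: /etc/kubernetes/admin.conf
        dest: kubeconfig/admin.conf
        flat: yes
```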
`delete-cluster.yml` does the following, in order (a hedged sketch follows the list):
- Delete the EC2 instances (master & worker nodes of the Kubernetes cluster)
- Delete the security group that was attached to the above instances
- Delete the key pair
- Clean the `inventory/ec2` hosts file
- Clean the locally saved EC2 private keys
- Clean the locally saved cluster kubeconfig file
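A hedged sketch of such teardown tasks; the variable names and the exact modules are assumptions, not the project's actual `delete-cluster.yml`:

```yaml
# Sketch only; instance IDs and names are assumed to be known from earlier facts.
- name: Terminate the cluster EC2 instances
  amazon.aws.ec2:
    state: absent
    instance_ids: "{{ cluster_instance_ids }}"
    wait: yes

- name: Delete the security group
  amazon.aws.ec2_group:
    name: "{{ sg_name }}"
    state: absent

- name: Delete the EC2 key pair
  amazon.aws.ec2_key:
    name: "{{ key_name }}"
    state: absent

- name: Clean the locally saved private keys and kubeconfig
  ansible.builtin.file:
    path: "{{ item }}"
    state: absent
  loop:
    - keys/
    - kubeconfig/admin.conf
```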
Instructions to provision the cluster
Once the tools and software listed above are installed and set up on your local machine, follow the steps listed below.
[Step 1] Configure the AWS CLI
- Run the following command to log in to the provided AWS account using the AWS CLI, providing the respective `AWS Access Key ID`, `AWS Secret Access Key`, & required `region` name:

  ```
  aws configure
  ```
[Step 2] Clone the project
- Run the following command to clone the project on your local machine:

  ```
  git clone git@github.com:Priyankasaggu11929/ansible-k8s-cluster-deploy.git
  cd ansible-k8s-cluster-deploy/
  ```
[Step 3] Create the cluster
- Run the following command; it will run the `create-cluster.yml` Ansible playbook:

  ```
  make create-cluster
  ```

  In case you want to create a Kubernetes cluster with multiple worker nodes, run the command providing the worker node count using the argument `worker=n`, for example:

  ```
  make create-cluster worker=2
  ```

  (A hedged illustration of how such a worker count could drive provisioning follows this step.)
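As an illustration only, the `worker=n` argument could be forwarded to the playbook as an extra variable and used to drive the number of worker instances; the snippet below is an assumption about how that might look, not the project's actual implementation:

```yaml
# Illustration only; the worker variable and this task are assumptions.
- name: Launch the worker instances
  amazon.aws.ec2:
    key_name: "{{ key_name }}"
    group: "{{ sg_name }}"
    instance_type: t3.small
    image: ami-0f39d06d145e9bb63
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    wait: yes
    count: "{{ worker | default(1) }}"
    instance_tags:
      role: worker
```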
[Step 4] Get the kube-config file
- The kubeconfig file is copied from the Kubernetes cluster's master node (running on an AWS EC2 instance) to the local machine:

  ```
  make get_kubeconfig
  ```
[Step 5] Decommission the cluster & clean the project
- Run the following command; it will run the `delete-cluster.yml` Ansible playbook:

  ```
  make delete_cluster
  ```