AWS recently made the headlines with the launch of Amazon EKS Anywhere, a new and much-awaited deployment option for Amazon Elastic Kubernetes Service (EKS). But what is it and how can you benefit from it? Read on to find out!
The What
Amazon EKS Anywhere is an open-source offering through which customers can host and operate secure, reliable Kubernetes clusters on-premises. It allows you to stay completely off AWS infrastructure (why, you don't even need an AWS account to get started) while offering a cluster management experience on par with EKS.
EKS Anywhere builds on the strengths of Amazon EKS Distro, the same open-source distribution of Kubernetes that is used by Amazon EKS on the cloud, thus fostering consistency and compatibility between clusters both on AWS as well as on-premises.
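As a quick illustration of that shared foundation, the EKS Distro container images that back both EKS and EKS Anywhere are published to a public Amazon ECR registry and can be pulled directly. The tag below is the same one that appears in the image list later in this walkthrough; newer releases use newer tags.

```shell
# Pull the kube-apiserver build shipped by EKS Distro for Kubernetes 1.21.
docker pull public.ecr.aws/eks-distro/kubernetes/kube-apiserver:v1.21.2-eks-1-21-4
```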
The Why
This section covers the motivation for using EKS Anywhere.
To better understand how EKS Anywhere may be more suited to certain customer needs, we first need to understand the high-level architecture of EKS clusters. An Amazon EKS cluster consists of two primary components:

- The Amazon EKS control plane, consisting of nodes running components such as the Kubernetes API server, controller manager, scheduler, and `etcd`.
- Worker nodes that are registered with the control plane and run customer workloads.
The control plane is provisioned on AWS infrastructure in an account managed by EKS, while the worker nodes run in customer accounts, thus providing a managed Kubernetes experience on AWS.
However, some customers may have applications that need to run on-premises due to regulatory, latency, and data residency requirements, as well as a desire to leverage existing infrastructure investments. With EKS Anywhere, both the control plane and application workloads run on the customer's infrastructure, giving the cluster administrator complete flexibility. Customers can also use the EKS Connector* to connect EKS Anywhere clusters running on their infrastructure to the EKS console, for a centralized view of their on-premises clusters and workloads alongside their EKS clusters.
*In public preview
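As a rough sketch of how that registration works, the EKS Connector flow is driven from `eksctl`. Since the Connector is still in preview, treat the command and flag values below as assumptions and check the current documentation before using them.

```shell
# Assumed example: register an on-premises EKS Anywhere cluster with the EKS console
# via the EKS Connector (public preview). Flags and provider values may change.
eksctl register cluster \
  --name eks-anywhere-test \
  --provider EKS_ANYWHERE \
  --region us-west-2
```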
The How
EKS Anywhere currently supports customer-managed VMware vSphere infrastructure as the production-grade deployment environment for Kubernetes clusters, with bare-metal support coming in 2022. For local development and testing, it also supports the Docker provider, in which the control plane and worker nodes are provisioned as Docker containers. The Docker provider is not intended for production use.
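For what it's worth, targeting vSphere instead of Docker is just a different provider flag on the same `eksctl anywhere` commands used below. The cluster name here is only a placeholder, and the generated vSphere spec needs datacenter, network, and template details filled in before it can be used.

```shell
# Generate a cluster config for the vSphere provider instead of Docker.
eksctl anywhere generate clusterconfig my-prod-cluster -p vsphere > my-prod-cluster.yaml
```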
In this section, I will walk through creating an EKS Anywhere cluster with the Docker provider, step by step. Fasten your seatbelts for an EKS-iting adventure!
Installation
At its core, EKS Anywhere provides an installable CLI, `eksctl-anywhere`, that allows users to create a fully functional Kubernetes cluster in a matter of minutes. The CLI is provided as an extension to `eksctl`, a command-line tool for creating clusters on Amazon EKS. These two binaries and a running Docker environment are all you need to create an EKS Anywhere cluster.
You can install both `eksctl` and `eksctl-anywhere` directly using Homebrew on macOS and Linux. In addition, it is a good idea to install `kubectl` for interacting with your cluster post-creation.
```shell
brew install aws/tap/eks-anywhere
brew install kubectl
```
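Once installed, a quick version check confirms the binaries are on your PATH; your exact versions will differ.

```shell
# Verify that the EKS Anywhere CLI extension and kubectl are installed correctly.
eksctl anywhere version
kubectl version --client
```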
Cluster creation
The first step in creating an EKS Anywhere cluster is to generate a cluster config for the desired infrastructure provider. This is a manifest containing the cluster spec that allows you to declaratively manage your EKS Anywhere cluster. Before we proceed, let us give our cluster a suitable name that will be used as a reference for all future operations.
```shell
export CLUSTER_NAME=eks-anywhere-test
```
The following command generates the cluster config for the Docker provider, with default replica counts, networking, and external `etcd` configuration.
```shell
eksctl anywhere generate clusterconfig $CLUSTER_NAME -p docker
```
Running the above command will generate the following output.
```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: eks-anywhere-test
spec:
  clusterNetwork:
    cni: cilium
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: eks-anywhere-test
  externalEtcdConfiguration:
    count: 1
  kubernetesVersion: "1.21"
  workerNodeGroupConfigurations:
  - count: 1

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: eks-anywhere-test
spec: {}

---
```
If desired, you may modify the spec as per your requirements. EKS Anywhere supports both stacked and unstacked `etcd` topologies, with the latter being the default. If you prefer to use stacked `etcd`, you can remove the `externalEtcdConfiguration` section from the spec.
For the purposes of this tutorial, we shall use the default values generated by the command. To use the config for cluster operations, it must be stored in a file.
```shell
eksctl anywhere generate clusterconfig $CLUSTER_NAME -p docker > $CLUSTER_NAME.yaml
```
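If you opted for stacked `etcd` as discussed above, one convenient way to strip the `externalEtcdConfiguration` block from the saved file is with `yq` (the same tool used later in this post to list images); hand-editing the YAML works just as well.

```shell
# Optional: switch to stacked etcd by removing externalEtcdConfiguration from the spec.
# Assumes yq v4; deleting a path that does not exist is a no-op.
yq e 'del(.spec.externalEtcdConfiguration)' -i $CLUSTER_NAME.yaml
```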
Now for the fun part - actually creating the cluster!
```shell
eksctl anywhere create cluster -f $CLUSTER_NAME.yaml
```
The above command will kick-start the cluster creation and report progress for each step of the creation workflow. A detailed explanation of the workflow is provided here. Optionally, you can set a verbosity level (0 through 9) using the `-v` flag for more detailed logging and a deeper understanding of what is going on behind the scenes.
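For example, the same create command with a mid-range verbosity would look like this; the trimmed output shown below is from a default-verbosity run.

```shell
# Create the cluster with more detailed logging (verbosity 4 on a 0-9 scale).
eksctl anywhere create cluster -f $CLUSTER_NAME.yaml -v 4
```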
```
Performing setup and validations
✅ Docker Provider setup is valid
Creating new bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific setup
Creating new workload cluster
Installing networking on workload cluster
Installing storage class on workload cluster
Installing cluster-api providers on workload cluster
Moving cluster management from bootstrap to workload cluster
Installing EKS-A custom components (CRD and controller) on workload cluster
Creating EKS-A CRDs instances on workload cluster
Installing AddonManager and GitOps Toolkit on workload cluster
GitOps field not specified, bootstrap flux skipped
Writing cluster config file
Deleting bootstrap cluster
🎉 Cluster created!
```
Woot, we have created our first EKS Anywhere cluster! The whole process should take around 8 to 15 minutes.
The CLI creates a folder with the same name as the cluster and places a kubeconfig file with Admin privileges inside this folder. This kubeconfig file can be used to interact with our EKS Anywhere cluster.
```shell
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
```
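As a quick sanity check that the kubeconfig points at the new cluster, we can ask the API server to identify itself.

```shell
# Confirm kubectl is talking to the EKS Anywhere cluster's API server.
kubectl cluster-info
```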
Let us look at the pods to verify that they are all running.
```shell
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
capd-system capd-controller-manager-659dd5f8bc-wj4hl 2/2 Running 0 1m
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-69889cb844-m87x8 2/2 Running 0 1m
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-6ddc66fb75-hz4hm 2/2 Running 0 1m
capi-system capi-controller-manager-db59f5789-sjnv5 2/2 Running 0 1m
capi-webhook-system capi-controller-manager-64b8c548db-kwntg 2/2 Running 0 1m
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-68b8cc9759-7zczt 2/2 Running 0 1m
capi-webhook-system capi-kubeadm-control-plane-controller-manager-7dc88f767d-p7bbk 2/2 Running 0 1m
cert-manager cert-manager-5f6b885b4-8l5f9 1/1 Running 0 2m
cert-manager cert-manager-cainjector-bb6d9bcb5-jr7x2 1/1 Running 0 2m
cert-manager cert-manager-webhook-56cbc8f5b8-47wmg 1/1 Running 0 2m
eksa-system eksa-controller-manager-6769764b45-gw6sp 2/2 Running 0 1m
etcdadm-bootstrap-provider-system etcdadm-bootstrap-provider-controller-manager-54476b7bf9-8fr2k 2/2 Running 0 1m
etcdadm-controller-system etcdadm-controller-controller-manager-d5795556-d9cmz 2/2 Running 0 1m
kube-system cilium-operator-6bf46cc6c6-j5c8v 1/1 Running 0 2m
kube-system cilium-operator-6bf46cc6c6-vsf79 1/1 Running 0 2m
kube-system cilium-q4gg6 1/1 Running 0 2m
kube-system cilium-xgffq 1/1 Running 0 2m
kube-system coredns-7c68f85774-4kvcb 1/1 Running 0 2m
kube-system coredns-7c68f85774-9z9kn 1/1 Running 0 2m
kube-system kube-apiserver-eks-anywhere-test-29qnl 1/1 Running 0 2m
kube-system kube-controller-manager-eks-anywhere-test-29qnl 1/1 Running 0 2m
kube-system kube-proxy-2fx4g 1/1 Running 0 2m
kube-system kube-proxy-r4cc8 1/1 Running 0 2m
kube-system kube-scheduler-eks-anywhere-test-29qnl 1/1 Running 0 2m
```
Using the following command, we can fetch the container images running in our pods and verify that the control plane images (API server, controller manager, scheduler, and so on) are all vended by EKS Distro.
```shell
$ kubectl get pods -A -o yaml | yq e '.items[] | .spec.containers[] | .image' - | sort -ur
public.ecr.aws/eks-anywhere/brancz/kube-rbac-proxy:v0.8.0-eks-a-1
public.ecr.aws/eks-anywhere/cluster-controller:v0.5.0-eks-a-1
public.ecr.aws/eks-anywhere/jetstack/cert-manager-cainjector:v1.1.0-eks-a-1
public.ecr.aws/eks-anywhere/jetstack/cert-manager-controller:v1.1.0-eks-a-1
public.ecr.aws/eks-anywhere/jetstack/cert-manager-webhook:v1.1.0-eks-a-1
public.ecr.aws/eks-anywhere/kubernetes-sigs/cluster-api/capd-manager:v0.3.23-eks-a-1
public.ecr.aws/eks-anywhere/kubernetes-sigs/cluster-api/cluster-api-controller:v0.3.23-eks-a-1
public.ecr.aws/eks-anywhere/kubernetes-sigs/cluster-api/kubeadm-bootstrap-controller:v0.3.23-eks-a-1
public.ecr.aws/eks-anywhere/kubernetes-sigs/cluster-api/kubeadm-control-plane-controller:v0.3.23-eks-a-1
public.ecr.aws/eks-anywhere/mrajashree/etcdadm-bootstrap-provider:v0.1.0-beta-4.1-eks-a-1
public.ecr.aws/eks-anywhere/mrajashree/etcdadm-controller:v0.1.0-beta-4.1-eks-a-1
public.ecr.aws/eks-distro/coredns/coredns:v1.8.3-eks-1-21-4
public.ecr.aws/eks-distro/kubernetes/kube-apiserver:v1.21.2-eks-1-21-4
public.ecr.aws/eks-distro/kubernetes/kube-controller-manager:v1.21.2-eks-1-21-4
public.ecr.aws/eks-distro/kubernetes/kube-proxy:v1.21.2-eks-1-21-4
public.ecr.aws/eks-distro/kubernetes/kube-scheduler:v1.21.2-eks-1-21-4
public.ecr.aws/isovalent/cilium:v1.9.10-eksa.1
public.ecr.aws/isovalent/operator-generic:v1.9.10-eksa.1
```
Upon retrieving the nodes, we can see that our cluster has one control plane ("master") node and one worker node as specified in our manifest.
```shell
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
eks-anywhere-test-29qnl Ready control-plane,master 4m v1.21.2-eks-1-21-4
eks-anywhere-test-md-0-7796db4bdd-4wmd5 Ready <none> 3m v1.21.2-eks-1-21-4
```
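Since the Docker provider runs every node as a container on the local Docker host, the same nodes also show up in plain `docker ps`. The name filter below assumes the node containers carry the cluster name, which matches the node names above; adjust it if your container names differ.

```shell
# List the containers backing the cluster nodes (name filter is an assumption).
docker ps --filter "name=eks-anywhere-test"
```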
To log on to a node, we can simply run:

```shell
docker exec -it <node name> bash
```
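Inside the node container you can poke around much like on any Kubernetes node. For instance, assuming the node image bundles `crictl` the way upstream kind images do (an assumption worth verifying on your version), you can list the containers managed by the node's runtime.

```shell
# Hypothetical session inside a node container; assumes crictl is present in the image.
crictl ps
```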
Testing
Let us test our EKS Anywhere cluster by deploying a simple Nginx service.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eks-anywhere-nginx-test
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: public.ecr.aws/nginx/nginx:latest
        ports:
        - containerPort: 80
```
Save the manifest above as `eks-anywhere-nginx-test.yaml`, then create the Nginx workload using the following command.
```shell
kubectl apply -f eks-anywhere-nginx-test.yaml
```
This will provision three pods for our application in the `default` namespace.
```
NAME                                       READY   STATUS    RESTARTS   AGE
eks-anywhere-nginx-test-7676d696c8-c5ths   1/1     Running   0          1m
eks-anywhere-nginx-test-7676d696c8-c76lf   1/1     Running   0          1m
eks-anywhere-nginx-test-7676d696c8-m25r5   1/1     Running   0          1m
```
To test our application, we can use the following command to forward port 80 of the deployment to port 8080 on our host machine.
```shell
$ kubectl port-forward deploy/eks-anywhere-nginx-test 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
```
Then, when we navigate to `localhost:8080` in the browser, we are greeted by the Nginx welcome page. Alternatively, we can fetch the contents of the page using `curl`.
```shell
$ curl localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
Thus, we have successfully created and tested our EKS Anywhere cluster. If you wish to go one step further, you can deploy the Kubernetes Dashboard UI for your cluster using the instructions here.
Cluster deletion
After testing, the cluster can be deleted using the following command.

```shell
eksctl anywhere delete cluster -f $CLUSTER_NAME.yaml
```
Conclusion
That brings us to the end of this walkthrough. Thank you very much for reading and I hope you will give EKS Anywhere a spin. The complete documentation is available here. If you are interested in contributing, please open an issue or pull request on the EKS Anywhere GitHub repo. Let me know your thoughts in the comments below. If you have more questions, feel free to reach out to me on LinkedIn or Twitter.