This content originally appeared on DEV Community and was authored by Lukas Gentele
By Elly Obare
This article is the second part of a series focused on Kubernetes multi-cluster. For an introduction and more about the goals and responsibilities of multi-cluster setups, please see part one.
Many teams want to deploy their applications in clusters that are feature-rich, reliable, and accessible from anywhere—tapping into the power of the multi-cluster.
As seen in the first article in this series, it’s paramount that cluster setups involve sound planning, design, and deployment. This brings us to managing the cluster lifecycle.
This article will explore what managing the lifecycle of clusters entails, going into detail on the various tools available for spinning up clusters and identifying issues that need to be solved, such as software installation and patching.
Let’s get started!
Managing a Kubernetes cluster lifecycle revolves around the design, deployment, operation, and deletion phases. For complete control over provisioned Kubernetes clusters, from creation and upgrades through maintenance to deletion, you need lifecycle management in both single-cluster and multi-cluster deployment architectures.
The right lifecycle management approach gives you visibility into your clusters and helps you manage your workloads in any environment.
What is Kubernetes Multi-Cluster?
Kubernetes clusters run and manage our workloads. Depending on the needs of an organization, Kubernetes deployments can be replicated so that the same workloads are accessible across multiple nodes and environments.
This concept is called Kubernetes multi-cluster orchestration. It’s simply provisioning your workloads in several Kubernetes clusters (going beyond a single cluster).
A Kubernetes multi-cluster defines deployment strategies to introduce scalability, availability, and isolation for your workloads and environments. A Kubernetes multi-cluster is fully embraced when an organization coordinates the planning, delivery, and management of several Kubernetes environments using appropriate tools and processes.
Why Do You Need a Kubernetes Multi-Cluster?
In simple deployment cases, Kubernetes can run workloads in a single cluster. However, some cases call for advanced deployment models, and for such scenarios a multi-cluster architecture is suitable and can improve the performance of your workloads.
Simply put, a development team may need a Kubernetes multi-cluster to handle workloads spanning regions, limit the blast radius of cloud failures, manage compliance requirements, solve multi-tenancy conflicts, and enforce security around clusters and tenants.
Spinning up Clusters
Spinning up Kubernetes clusters without tools or automation can be a headache. You can save yourself significant challenges arising from the complexity and operational overhead by using tools and managed services. The following are some tools you can use to spin up your Kubernetes clusters.
Terraform
Spinning up Kubernetes clusters is an uphill task, especially where several cloud platforms are involved. A cross-cloud tool like Terraform can be immensely helpful here.
As an infrastructure as code tool, Terraform provisions Kubernetes clusters using code. You declare the infrastructure resources you want to create and apply those declarations from the command line; Terraform then spins up those resources for you.
Using Terraform, you can spin clusters on many popular services, including Amazon Web Services (AWS), Azure, and Google Cloud.
Although you could use the native kubectl tool for full lifecycle management of your resources as described in YAML files, spinning up clusters with Terraform offers its own benefits.
As a declarative and low-code tool, Terraform makes application configurations, deployment pipelines, and scaling Kubernetes clusters easy regardless of the platforms involved. Kubernetes—a tool that also supports declarative configurations—can be easily complemented by Terraform when spinning enterprise-level clusters.
Spinning up a Kubernetes cluster through Terraform requires a provider to manage the Kubernetes APIs and execute resource configurations. A popular method of provisioning Kubernetes clusters with Terraform is through the Kubernetes provider for Terraform.
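As a rough illustration of this pattern, the following Terraform sketch provisions a small GKE cluster and then points the Kubernetes provider at it so Terraform can also manage in-cluster resources. The project ID, region, and cluster name are placeholders, and a real setup would add node pool and networking configuration:

```hcl
provider "google" {
  project = "my-project-id" # assumption: replace with your GCP project
  region  = "us-central1"
}

# A minimal GKE cluster; production setups would configure node pools,
# networking, and release channels explicitly.
resource "google_container_cluster" "primary" {
  name               = "demo-cluster"
  location           = "us-central1"
  initial_node_count = 2
}

data "google_client_config" "default" {}

# Point the Kubernetes provider at the cluster Terraform just created,
# so Kubernetes resources can be declared in the same configuration.
provider "kubernetes" {
  host  = "https://${google_container_cluster.primary.endpoint}"
  token = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(
    google_container_cluster.primary.master_auth[0].cluster_ca_certificate
  )
}
```

Running `terraform apply` against this configuration both creates the cluster and wires up credentials for managing workloads inside it.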
Amazon EKS - Cloud Tool
Amazon EKS (Amazon Elastic Kubernetes Service) is a managed service offering from AWS. Amazon EKS is used to spin up, manage, and scale Kubernetes clusters and containerized workloads, and it supports deploying your Kubernetes infrastructure across multiple AWS Availability Zones.
This tool features automatic detection and replacement of unhealthy Kubernetes control plane nodes. Amazon EKS also takes care of on-demand upgrades and patching of your deployed clusters.
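As a hedged example, a common way to spin up an EKS cluster is with eksctl, the community CLI for Amazon EKS. The cluster name, region, and node count below are placeholders:

```shell
# Create a small EKS cluster (provisioning typically takes several minutes)
eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --nodes 2

# Verify the cluster is reachable once provisioning finishes
kubectl get nodes
```

eksctl creates the control plane and a managed node group and updates your local kubeconfig, so kubectl works against the new cluster immediately.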
kOps
kOps is an automated tool for provisioning, upgrading, maintaining, and deleting production-grade, highly available Kubernetes clusters. Additionally, kOps can provision necessary cloud infrastructure to work with your clusters.
This tool integrates well with platforms like AWS, GCE, and VMware. kOps offers complete control over the entire Kubernetes cluster lifecycle, from infrastructure provisioning to cluster deletion.
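A typical kOps lifecycle on AWS might look like the following sketch; the cluster name and S3 state-store bucket are placeholders:

```shell
# kOps keeps cluster state in an S3 bucket
export KOPS_STATE_STORE=s3://my-kops-state-store

# Create the cluster configuration, then apply it to provision infrastructure
kops create cluster --name=demo.k8s.local --zones=us-east-1a
kops update cluster --name=demo.k8s.local --yes

# Later: move to a newer Kubernetes version and roll it out node by node
kops upgrade cluster --name=demo.k8s.local --yes
kops update cluster --name=demo.k8s.local --yes
kops rolling-update cluster --name=demo.k8s.local --yes

# Deprovision the cluster and its cloud infrastructure when done
kops delete cluster --name=demo.k8s.local --yes
```

This covers the full arc the article describes: provisioning, upgrading, and deletion from a single tool.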
Rancher
Rancher is a container management platform that provides Kubernetes as a service with built-in support for multi-cluster orchestration. Rancher supports spinning up clusters in various environments, such as on-premises data centers, the cloud, and the edge (clusters that run anywhere).
A built-in tool like the Rancher Kubernetes Engine (RKE) gives developers great flexibility when spinning up Kubernetes clusters. Rancher features simplified installation, automated operations, full observability and monitoring, enterprise-level support, and integration with other developer tools, such as CI/CD systems.
Additionally, Rancher is a unified multi-cluster platform that addresses operational and security issues across several Kubernetes clusters. It governs clusters through centrally configured security policies, providing centralized authentication and access control, enterprise-grade security, auditing, backups, and alerts. This way, developers can consistently spin up secure clusters anywhere in a matter of minutes.
Loft
Loft is a platform with several toolsets that offer managed self-service solutions for you to spin and scale clusters smoothly. Loft features self-service environment provisioning, secure Kubernetes multi-tenancy, and enterprise-grade access control.
Loft is 100% Kubernetes, so you can control everything via native Kubernetes tools and APIs, facilitating integration with other cloud-native tools. Loft’s advanced control plane operates over your existing Kubernetes clusters to support multi-tenancy, unlike other Kubernetes management platforms. Admins can also use Loft to create namespaces and multiple virtual clusters on-demand for seamless multi-tenancy.
Working With vcluster in Loft
Loft also developed vcluster, now an open-source tool, which allows teams to simplify Kubernetes cluster operations by creating virtual Kubernetes clusters that run inside regular namespaces (lightweight multiple clusters).
A vcluster (or virtual cluster) is a complete Kubernetes cluster that operates inside the namespace of some other physical Kubernetes cluster (host cluster). You can use vclusters, therefore, to spin up new Kubernetes clusters inside an existing and operational cluster.
vcluster is the first Kubernetes distribution that supports creating virtual Kubernetes clusters orchestrated in isolated namespaces, without any need for admin privileges. Additionally, Loft virtual clusters run your tenants with entirely separate control planes, so upgrades can be performed independently.
With vcluster, a higher level of isolation is achieved, enabling teams to provision more customizable Kubernetes environments without needing to start a new physical cluster each time.
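Working with the vcluster CLI follows the same create/connect/delete pattern as physical clusters. In this sketch, the virtual cluster name and host namespace are placeholders:

```shell
# Create a virtual cluster inside the "team-a" namespace of the host cluster
vcluster create my-vcluster --namespace team-a

# Connect to it (updates the local kubeconfig context)
vcluster connect my-vcluster --namespace team-a

# Delete the virtual cluster when finished; the host cluster is untouched
vcluster delete my-vcluster --namespace team-a
```

Because the virtual cluster lives entirely inside one namespace, deleting it cleans up all tenant resources without touching the host cluster's control plane.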
Cluster Upgrades and Security Management
Teams that rely heavily on Kubernetes for deployments need to plan for regular upgrades and patches on their environments for comprehensive security fixes.
Running cluster upgrades without due care or proper tools can break things, especially when dependent resources are overloaded. Tools like kOps and Cluster API can therefore be used to apply upgrades to your running clusters.
The tools that you install to run your clusters depend entirely on the workloads that your clusters support. How you upgrade a cluster and its tools also depends on how you initially deployed and ran the Kubernetes cluster, that is, whether you’re using a hosted Kubernetes provider or some other means for deployment. Most hosted providers support and handle automatic upgrades, which relieves developers from manual upgrades and patching.
Upgrading a cluster and its toolset follows the approach of upgrading the control plane first, then the nodes in the cluster, followed by upgrading clients such as kubectl.
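For a kubeadm-managed cluster, that ordering looks roughly like the following sketch; the Kubernetes version, node name, and package version string are placeholders and depend on your distribution:

```shell
# 1. Control plane first: review the plan, then apply the new version
kubeadm upgrade plan
kubeadm upgrade apply v1.27.3

# 2. Then each worker node: drain it, upgrade the kubelet, bring it back
kubectl drain node-1 --ignore-daemonsets
apt-get install -y kubelet=1.27.3-00   # package version is distro-specific
systemctl restart kubelet
kubectl uncordon node-1

# 3. Finally, upgrade clients and confirm the version
kubectl version --client
```

Keeping clients within one minor version of the control plane avoids version-skew issues between kubectl and the API server.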
Deprovisioning Clusters That Are No Longer Needed
When you deprovision a cluster, its running resources are deleted along with it: the control plane resources, node instances, pods, and stored data.
Different hosted Kubernetes providers have varying ways of deleting Kubernetes clusters. For instance, GKE supports deletion of clusters from the Google Cloud CLI and Cloud Console. Other tools for spinning up Kubernetes clusters, such as kOps and Amazon EKS, also support deletion from their CLIs and consoles.
Suppose you have provisioned your clusters with the Google Kubernetes Engine; you can run the following command in the gcloud CLI to deprovision your clusters that are no longer needed:
gcloud container clusters delete CLUSTER_NAME
At this point, you’ve seen the operations around managing a cluster lifecycle, that is, creation, deletion, and upgrading of clusters.
Conclusion
Teams want working with clusters to be as easy as possible. This ease in operating clusters can be ensured by managing the cluster lifecycle. In this article, you learned what’s involved in managing a cluster lifecycle. You’ve seen how clusters are created at scale using various tools. You’ve also seen what cluster upgrades and security patch management involve while trying to maintain the health of your clusters.
The complexity of Kubernetes environments does present challenges, but setting clear goals and objectives for deploying your clusters can help you overcome any obstacles as your organization makes the transition.
Finally, multi-cluster deployments are a good choice for organizations building highly distributed systems that need geographic distribution and regulatory control, helping scale workloads beyond the limits of a single cluster. Multi-cluster deployment and management is also useful for minimizing the exposure of production services by preventing access to sensitive data from environments like development and testing. Organizations now often deploy their more critical workloads on clusters separate from their less critical ones.
Lukas Gentele | Sciencx (2022-04-12T19:52:10+00:00) Kubernetes Multi-Cluster Part 2: Managing the Cluster Lifecycle. Retrieved from https://www.scien.cx/2022/04/12/kubernetes-multi-cluster-part-2-managing-the-cluster-lifecycle/