Terraform v0.15 with AWS (EKS deployment)

This content originally appeared on DEV Community and was authored by cosckoya

Terraform v0.15 was released on April 14th.

In this post I will use the following resources:

· Provision an EKS Cluster (AWS)
· Terraform v15.0
· Terraform Registry
· Pre-Commit
· Terraform Pre-commit
· Terraform-docs
· Tflint
· Tfsec

This is based on the Provision an EKS Cluster (AWS) tutorial, tweaked a little bit: setting up some variables, splitting the .tf files, and adding some new providers and modules. I will also test a bunch of features, like terraform-docs, tflint and tfsec with a pre-commit git hook. Let's start!

Before starting, we should review the AWS resources needed for this task:

1 x Amazon VPC
6 x Amazon Subnet (3 x Public + 3 x Private)
3 x Amazon EC2
1 x Amazon EKS
1 x Kubernetes AWS-Auth policy

Because some AWS modules are not available yet, I will create my own modules in my deployment.

First of all I will generate my project folder:

CMD> mkdir terraform-aws
CMD> cd terraform-aws

Then I will create some base terraform files:

CMD> touch {main,outputs,variables,versions}.tf
CMD> ls -l
total 16
-rw-rw-r-- 1 cosckoya cosckoya 0 abr 17 22:01 main.tf
-rw-rw-r-- 1 cosckoya cosckoya 0 abr 17 22:01 outputs.tf
-rw-rw-r-- 1 cosckoya cosckoya 0 abr 17 22:16 variables.tf
-rw-rw-r-- 1 cosckoya cosckoya 0 abr 17 22:17 versions.tf

And enable the pre-commit in the project:

CMD> cat > .pre-commit-config.yaml <<'EOF'
repos:
- repo: https://github.com/antonbabenko/pre-commit-terraform
  rev: master
  hooks:
  - id: terraform_fmt
  - id: terraform_validate
  - id: terraform_docs
  - id: terraform_docs_without_aggregate_type_defaults
  - id: terraform_tflint
    args:
    - 'args=--enable-rule=terraform_documented_variables'
  - id: terraform_tfsec
- repo: https://github.com/pre-commit/pre-commit-hooks
  rev: master
  hooks:
  - id: check-merge-conflict
  - id: end-of-file-fixer
EOF

CMD> pre-commit install
pre-commit installed at .git/hooks/pre-commit

Let's start by setting up the "versions.tf" file, which will contain our provider version constraints:

# Providers version
# Ref. https://www.terraform.io/docs/configuration/providers.html

terraform {
  required_version = "~>0.15"
  required_providers {
    # Base Providers
    random = {
      source  = "hashicorp/random"
      version = "3.1.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "3.1.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.1.0"
    }
    template = {
      source  = "hashicorp/template"
      version = "2.2.0"
    }
    # AWS Provider
    aws = {
      source  = "hashicorp/aws"
      version = "3.37.0"
    }
    # Kubernetes Provider
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.1.0"
    }    
  }
}
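The `~>` in `required_version` is Terraform's "pessimistic" constraint operator: `~>0.15` accepts any 0.x release from 0.15 upward but rejects 1.0.0. A minimal Python sketch of that matching rule (an illustration of the semantics, not Terraform's actual resolver):

```python
def allows(constraint: str, version: str) -> bool:
    """Approximate Terraform's pessimistic ("~>") version constraint.

    "~>0.15"   means >= 0.15.0 and < 1.0.0
    "~>3.37.0" means >= 3.37.0 and < 3.38.0
    """
    parts = [int(p) for p in constraint.lstrip("~>").strip().split(".")]
    ver = [int(p) for p in version.split(".")]
    lower = parts + [0] * (3 - len(parts))
    upper = parts[:-1].copy()
    upper[-1] += 1                      # bump the second-to-last component
    upper += [0] * (3 - len(upper))
    ver += [0] * (3 - len(ver))
    return tuple(lower) <= tuple(ver) < tuple(upper)

print(allows("~>0.15", "0.15.4"))   # True: a 0.15.x patch release matches
print(allows("~>0.15", "1.0.0"))    # False: a major bump does not
```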

Then I set up some variables for the project in "variables.tf":

# Common
variable "project" {
  default     = "cosckoya"
  description = "Project name"
}

variable "environment" {
  default     = "laboratory"
  description = "Environment name"
}

# Amazon
variable "region" {
  default     = "us-east-1"
  description = "AWS region"
}

variable "vpc_cidr" {
  type        = string
  default     = "10.0.0.0/16"
  description = "AWS VPC CIDR"
}

variable "public_subnets_cidr" {
  type        = list(any)
  default     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  description = "AWS Public Subnets"
}

variable "private_subnets_cidr" {
  type        = list(any)
  default     = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
  description = "AWS Private Subnets"
}
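Those CIDR defaults can be sanity-checked before any apply: each subnet should fall inside the VPC range and none should overlap. A quick Python check using only the standard library, with the same values as the variables above:

```python
import ipaddress

# Same CIDRs as vpc_cidr, public_subnets_cidr and private_subnets_cidr above.
vpc = ipaddress.ip_network("10.0.0.0/16")
public = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
private = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

subnets = [ipaddress.ip_network(c) for c in public + private]

# Every subnet must be contained in the VPC CIDR.
assert all(s.subnet_of(vpc) for s in subnets)

# No two subnets may overlap each other.
assert not any(
    a.overlaps(b) for i, a in enumerate(subnets) for b in subnets[i + 1:]
)
print("CIDR layout OK")
```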

This is my main.tf file. Here I changed the "locals" block to build the tags with the "tomap(..)" function, updated the modules to the latest version, and also bumped the Kubernetes version to 1.19... just to test and have fun.

provider "aws" {
  profile    = "default"
  region     = var.region
  # Never commit real credentials; prefer the shared profile or environment
  # variables over hardcoded keys.
  access_key = "AKIA.."
  secret_key = "<SECRET-KEY-HERE>"
}

data "aws_availability_zones" "available" {}

locals {
  cluster_name = "${var.project}-${var.environment}-eks"
  tags = tomap({"Environment" = var.environment, "project" = var.project})
}

resource "random_string" "suffix" {
  length  = 8
  special = false
}

## Amazon Networking
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  version = "2.78.0"

  name                 = "${var.project}-${var.environment}-vpc"
  cidr                 = var.vpc_cidr
  azs                  = data.aws_availability_zones.available.names
  private_subnets      = var.private_subnets_cidr
  public_subnets       = var.public_subnets_cidr
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }
}
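The subnet tags in the vpc module above are what EKS uses to discover which subnets can host load balancers: every subnet is tagged `kubernetes.io/cluster/<name> = shared`, public subnets get `kubernetes.io/role/elb` and private ones `kubernetes.io/role/internal-elb`. A small sketch that mirrors this tag scheme (the helper name is mine, not an AWS API):

```python
def eks_subnet_tags(cluster_name: str, public: bool) -> dict:
    """Build the subnet tags EKS uses for load-balancer subnet discovery.

    Mirrors the tag scheme in the vpc module block above; a sketch, not an
    official AWS helper.
    """
    role = "kubernetes.io/role/elb" if public else "kubernetes.io/role/internal-elb"
    return {
        f"kubernetes.io/cluster/{cluster_name}": "shared",
        role: "1",
    }

print(eks_subnet_tags("cosckoya-laboratory-eks", public=True))
```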

## Amazon EKS
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = local.cluster_name
  cluster_version = "1.19"
  subnets         = module.vpc.private_subnets

  tags = local.tags

  vpc_id = module.vpc.vpc_id

  workers_group_defaults = {
    root_volume_type = "gp2"
  }

  worker_groups = [
    {
      name                          = "worker-group-1"
      instance_type                 = "t2.small"
      additional_userdata           = "echo foo bar"
      asg_desired_capacity          = 2
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]
    },
    {
      name                          = "worker-group-2"
      instance_type                 = "t2.medium"
      additional_userdata           = "echo foo bar"
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id]
      asg_desired_capacity          = 1
    },
  ]
}
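Note that the two worker groups' desired capacities add up to the "3 x Amazon EC2" from the resource checklist at the top. A trivial check, using the same values as the eks module block above:

```python
# Worker group definitions mirroring the eks module block above.
worker_groups = [
    {"name": "worker-group-1", "instance_type": "t2.small",  "asg_desired_capacity": 2},
    {"name": "worker-group-2", "instance_type": "t2.medium", "asg_desired_capacity": 1},
]

# 2 + 1 nodes, matching the "3 x Amazon EC2" in the resource checklist.
total_nodes = sum(g["asg_desired_capacity"] for g in worker_groups)
print(total_nodes)  # 3
```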

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

## Amazon Security Groups
resource "aws_security_group" "worker_group_mgmt_one" {
  name_prefix = "worker_group_mgmt_one"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"

    cidr_blocks = [
      "10.0.0.0/8",
    ]
  }
}

resource "aws_security_group" "worker_group_mgmt_two" {
  name_prefix = "worker_group_mgmt_two"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"

    cidr_blocks = [
      "192.168.0.0/16",
    ]
  }
}

resource "aws_security_group" "all_worker_mgmt" {
  name_prefix = "all_worker_management"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"

    cidr_blocks = [
      "10.0.0.0/8",
      "172.16.0.0/12",
      "192.168.0.0/16",
    ]
  }
}
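All three security groups only open SSH (port 22) to RFC 1918 private ranges, so the workers are never reachable from the public internet on that port. That property is easy to assert with the standard ipaddress module (same CIDR blocks as the resources above):

```python
import ipaddress

# CIDR blocks from the three worker security groups above.
mgmt_cidrs = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]

# All of them are RFC 1918 private ranges, so SSH is not exposed publicly.
for cidr in mgmt_cidrs:
    net = ipaddress.ip_network(cidr)
    print(cidr, net.is_private)  # True for each
```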

These are the "default" outputs from the sample, in the "outputs.tf" file:

output "cluster_id" {
  description = "EKS cluster ID."
  value       = module.eks.cluster_id
}

output "cluster_endpoint" {
  description = "Endpoint for EKS control plane."
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane."
  value       = module.eks.cluster_security_group_id
}

output "kubectl_config" {
  description = "kubectl config as generated by the module."
  value       = module.eks.kubeconfig
}

output "config_map_aws_auth" {
  description = "A kubernetes configuration to authenticate to this EKS cluster."
  value       = module.eks.config_map_aws_auth
}

output "region" {
  description = "AWS region"
  value       = var.region
}

output "cluster_name" {
  description = "Kubernetes Cluster Name"
  value       = local.cluster_name
}
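These outputs are consumed later with `terraform output -raw`, but they can also be read all at once as JSON and parsed programmatically. A sketch parsing a hypothetical `terraform output -json` payload (the sample values below are made up; the real payload comes from running the command in the project directory):

```python
import json

# Hypothetical "terraform output -json" payload (values are made up).
sample = """
{
  "cluster_name": {"sensitive": false, "type": "string",
                   "value": "cosckoya-laboratory-eks"},
  "region":       {"sensitive": false, "type": "string",
                   "value": "us-east-1"}
}
"""

# Each output is wrapped in an object; the useful part is its "value" key.
outputs = {name: spec["value"] for name, spec in json.loads(sample).items()}
print(outputs["cluster_name"])
print(outputs["region"])
```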

Time to have fun now. Let's play with this:

· Initialize the project

CMD> terraform init

Here we should test the pre-commit rules that we set up; take note of every Tfsec finding about security compliance, and try to resolve each one or suppress it as described in the docs.

Also, create a README.md file with the following lines:

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->

This will generate some information about the project between those markers.
Run pre-commit with:

CMD> pre-commit run -a

Check the README.md file; it should look like this:

Requirements

| Name | Version |
|------|---------|
| terraform | ~>0.15 |
| aws | 3.37.0 |
| kubernetes | 2.1.0 |
| local | 2.1.0 |
| null | 3.1.0 |
| random | 3.1.0 |
| template | 2.2.0 |

Providers

| Name | Version |
|------|---------|
| aws | 3.37.0 |
| random | 3.1.0 |

Modules

| Name | Source | Version |
|------|--------|---------|
| eks | terraform-aws-modules/eks/aws | |
| vpc | terraform-aws-modules/vpc/aws | 2.78.0 |

Resources

| Name | Type |
|------|------|
| [aws_security_group.all_worker_mgmt](https://registry.terraform.io/providers/hashicorp/aws/3.37.0/docs/resources/security_group) | resource |
| [aws_security_group.worker_group_mgmt_one](https://registry.terraform.io/providers/hashicorp/aws/3.37.0/docs/resources/security_group) | resource |
| [aws_security_group.worker_group_mgmt_two](https://registry.terraform.io/providers/hashicorp/aws/3.37.0/docs/resources/security_group) | resource |
| random_string.suffix | resource |
| [aws_availability_zones.available](https://registry.terraform.io/providers/hashicorp/aws/3.37.0/docs/data-sources/availability_zones) | data source |
| aws_eks_cluster.cluster | data source |
| [aws_eks_cluster_auth.cluster](https://registry.terraform.io/providers/hashicorp/aws/3.37.0/docs/data-sources/eks_cluster_auth) | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| environment | Environment name | string | "laboratory" | no |
| private_subnets_cidr | AWS Private Subnets | list(any) | ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"] | no |
| project | Project name | string | "cosckoya" | no |
| public_subnets_cidr | AWS Public Subnets | list(any) | ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"] | no |
| region | AWS region | string | "us-east-1" | no |
| vpc_cidr | AWS VPC CIDR | string | "10.0.0.0/16" | no |

Outputs

| Name | Description |
|------|-------------|
| cluster_endpoint | Endpoint for EKS control plane. |
| cluster_id | EKS cluster ID. |
| cluster_name | Kubernetes Cluster Name |
| cluster_security_group_id | Security group ids attached to the cluster control plane. |
| config_map_aws_auth | A kubernetes configuration to authenticate to this EKS cluster. |
| kubectl_config | kubectl config as generated by the module. |
| region | AWS region |

[...] Let's continue with the Terraform project. Now it's time to deploy!
· Plan the project

CMD> terraform plan

· Deploy the project

CMD> terraform apply 

· Connect to the cluster and enjoy!

CMD> aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)

Running some basic commands, we can see that the cluster is up and running:

CMD> kubectl cluster-info

Kubernetes control plane is running at https://<SOME-BIG-HASH>.us-east-1.eks.amazonaws.com
CoreDNS is running at https://<SOME-BIG-HASH>.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

CMD> kubectl get nodes

NAME                         STATUS   ROLES    AGE   VERSION
ip-10-0-2-138.ec2.internal   Ready    <none>   26m   v1.18.9-eks-d1db3c
ip-10-0-2-88.ec2.internal    Ready    <none>   26m   v1.18.9-eks-d1db3c
ip-10-0-3-68.ec2.internal    Ready    <none>   26m   v1.18.9-eks-d1db3c

And that's it. Enjoy!

P.S. As you can see, this is very similar to the AWS Terraform Learn page, with little tweaks to test some changes between versions.

I'm a very big fan of @antonbabenko's work. I recommend everyone to follow him.

