containers-infrastructure-environments

Terragrunt definitions related to the [Containers sandbox](https://github.com/lejeunen/containers)

Required:

  1. AWS configuration with a `dev` profile.
  2. Install Terraform.
  3. Install Terragrunt.
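For reference, a minimal sketch of what the `dev` profile could look like; the region, account details, and key values below are placeholders, not values from this repository:

```ini
# ~/.aws/config
[profile dev]
region = eu-central-1
output = json

# ~/.aws/credentials
[dev]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
```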

To build the complete stack

VPC

```shell
cd dev-account/eu-central-1/dev/vpc
terragrunt apply
```
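The component's `terragrunt.hcl` could look roughly like the sketch below; the module repository URL, tag, and input names are assumptions, not the repository's actual definitions:

```hcl
# dev-account/eu-central-1/dev/vpc/terragrunt.hcl -- illustrative sketch only;
# the real source path and inputs live in containers-infrastructure-modules.
terraform {
  source = "git::https://github.com/lejeunen/containers-infrastructure-modules.git//vpc?ref=v0.1.0"
}

include {
  path = find_in_parent_folders()
}

inputs = {
  name = "dev"
  cidr = "10.0.0.0/16" # placeholder CIDR
}
```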

EKS

The VPC and subnet IDs are obtained from the VPC module's outputs.

```shell
cd dev-account/eu-central-1/dev/eks
terragrunt apply
```
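Wiring the VPC outputs into the EKS component can be sketched with a Terragrunt `dependency` block (supported in recent Terragrunt versions; the output names here are assumptions):

```hcl
# dev-account/eu-central-1/dev/eks/terragrunt.hcl -- sketch of consuming the
# VPC outputs; output names are assumptions, check the actual VPC module.
dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  vpc_id     = dependency.vpc.outputs.vpc_id
  subnet_ids = dependency.vpc.outputs.private_subnet_ids
}
```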

Update the local kube configuration (in ~/.kube) using the generated kubeconfig file; see the `kubeconfig_filename` output parameter.

Check the cluster state with `kubectl cluster-info` and `kubectl get pods -A`.

Infra

Tiller

Configuration for the Tiller service.

```shell
cd dev-account/eu-central-1/dev/tiller
terragrunt apply
```

Kubernetes dashboard

Deploy the k8s dashboard.

```shell
cd dev-account/eu-central-1/dev/kubernetes-dashboard
terragrunt apply
```

Following best practice, the dashboard is not exposed; the recommended way to access it is to run a local proxy and log in with a token:

```shell
kubectl proxy
aws eks get-token --cluster-name dev01 | jq -r '.status.token'
```

Alternatively, create a service account as described here.

Then open http://localhost:8001/api/v1/namespaces/kube-system/services/https:dashboard-kubernetes-dashboard:/proxy/#!/login

Ingress

Order is important

HTTP/HTTPS security groups

Create the two security groups that will be used by the ALB for incoming traffic:

  • security-group-http
  • security-group-https
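A hedged sketch of what one of these components could look like; the input names and rule shape are assumptions, not the repository's actual module interface:

```hcl
# dev-account/eu-central-1/dev/security-group-http/terragrunt.hcl -- sketch;
# input names and rule structure are assumptions.
dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  name   = "http"
  vpc_id = dependency.vpc.outputs.vpc_id
  ingress_rules = [
    { from_port = 80, to_port = 80, protocol = "tcp", cidr_blocks = ["0.0.0.0/0"] }
  ]
}
```

The `security-group-https` component would be the same shape on port 443.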

nginx-ingress

Create the nginx-ingress controller itself and a Kubernetes ingress for path-based mapping:

  • nginx-ingress

ALB

The ALB itself has multiple dependencies:

  • security-group-http and security-group-https, to associate them with the load balancer for incoming traffic
  • vpc, for the VPC ID and the public subnets to use
  • eks, for the worker security group ID, to be able to connect to the workers
  • nginx-ingress, for the node port to use
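The dependency list above can be sketched as Terragrunt `dependency` blocks; the output names are assumptions and would need to match the actual modules:

```hcl
# dev-account/eu-central-1/dev/alb/terragrunt.hcl -- sketch of the
# dependencies described above; output names are assumptions.
dependency "vpc"           { config_path = "../vpc" }
dependency "eks"           { config_path = "../eks" }
dependency "sg_http"       { config_path = "../security-group-http" }
dependency "sg_https"      { config_path = "../security-group-https" }
dependency "nginx_ingress" { config_path = "../nginx-ingress" }

inputs = {
  vpc_id     = dependency.vpc.outputs.vpc_id
  subnet_ids = dependency.vpc.outputs.public_subnet_ids
  security_group_ids = [
    dependency.sg_http.outputs.security_group_id,
    dependency.sg_https.outputs.security_group_id,
  ]
  worker_security_group_id = dependency.eks.outputs.worker_security_group_id
  node_port                = dependency.nginx_ingress.outputs.node_port
}
```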

Autoscaling attachment

The last step is to associate the target group created by the ALB with the workers' ASG:

  • autoscaling-attachment
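Under the hood this boils down to a single Terraform resource; a sketch, assuming the variable names below (the attribute `alb_target_group_arn` applies to AWS provider versions of this repository's era; newer providers renamed it `lb_target_group_arn`):

```hcl
# Sketch of the resource the autoscaling-attachment component wraps;
# variable names are assumptions.
resource "aws_autoscaling_attachment" "workers" {
  autoscaling_group_name = var.workers_asg_name
  alb_target_group_arn   = var.alb_target_group_arn
}
```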

App modules

```shell
cd dev-account/eu-central-1/dev/module1
terragrunt apply
```

Test it

Get the URL of the ALB with:

```shell
cd alb
terragrunt output dns_name
```

Then try http://<dns_name>/container1

Controlling access to the cluster

The IAM identity that creates the cluster becomes superadmin, although it is not visible in the aws-auth config map.

It's possible to grant access to the cluster to other IAM users or roles; a role is easier to manage.

The cluster-access module creates two roles and the policies necessary to assume them.

```shell
cd dev-account/eu-central-1/dev/cluster-access
terragrunt apply
```
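Users can then assume one of the roles via a named profile; a sketch, where the profile name, role name, and account ID are placeholders rather than values produced by the module:

```ini
# ~/.aws/config fragment -- role name and account ID are placeholders.
[profile dev-cluster-admin]
role_arn       = arn:aws:iam::123456789012:role/cluster-admin
source_profile = dev
```

Pointing the kubeconfig at this profile then lets kubectl authenticate as that role.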

Cleaning up

Execute `terragrunt destroy` on each component, in the reverse of the order used to create them.

```shell
cd module1
terragrunt destroy
cd ../autoscaling-attachment
terragrunt destroy
...
```
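The reverse-order teardown can be scripted; a sketch, where the component list is an assumption based on the build steps above, and the actual terragrunt call is left commented out so the loop can be dry-run first:

```shell
# Destroy components in the reverse of their creation order.
# The list below is an assumption; adjust it to your stack.
components="vpc eks tiller kubernetes-dashboard security-group-http \
security-group-https nginx-ingress alb autoscaling-attachment module1"

# Reverse the list (POSIX sh has no arrays).
reversed=""
for c in $components; do
  reversed="$c $reversed"
done

for c in $reversed; do
  echo "destroying $c"
  # (cd "dev-account/eu-central-1/dev/$c" && terragrunt destroy -auto-approve)
done
```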

I sometimes hit an issue when destroying the VPC; in that case, delete it from the AWS console and run `terragrunt refresh`.

To execute with a local module reference, override the source:

```shell
cd dev-account/eu-central-1/dev/hello-world
terragrunt apply --terragrunt-source ../../../../../containers-infrastructure-modules//hello-world
```

```
Outputs:

asg_name = tf-asg-20190926070830396400000002
asg_security_group_id = sg-09b7388c5f832dbd8
elb_dns_name = hello-world-dev-1612745828.eu-central-1.elb.amazonaws.com
elb_security_group_id = sg-076c571aea4af57cd
url = http://hello-world-dev-1612745828.eu-central-1.elb.amazonaws.com:80
```

```shell
$ curl http://hello-world-dev-1612745828.eu-central-1.elb.amazonaws.com:80
Hello, World
```

Things to improve

It would be nice to have the module version (a git tag, e.g. v0.1.0) defined once per environment and to use it in the source string of each component. I did not manage to do it.
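One possible approach, untested here and assuming a Terragrunt version with `locals` and `read_terragrunt_config` support: define the tag once in an environment-level file and interpolate it into each component's source string. The file names and module URL below are assumptions:

```hcl
# dev-account/eu-central-1/dev/env.hcl -- define the module version once.
locals {
  module_version = "v0.1.0"
}

# In each component's terragrunt.hcl, read it back and use it in the source:
locals {
  env = read_terragrunt_config(find_in_parent_folders("env.hcl"))
}

terraform {
  source = "git::https://github.com/lejeunen/containers-infrastructure-modules.git//vpc?ref=${local.env.locals.module_version}"
}
```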
