Terragrunt definitions related to the Containers sandbox
Required:

- AWS configuration with a `dev` profile
```
cd dev-account/eu-central-1/dev/vpc
terragrunt apply
```
The VPC and subnet ids needed by the next steps are obtained from the VPC module outputs.
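As a reference, here is a minimal sketch of how the eks component could consume those outputs through a Terragrunt dependency block (the dependency wiring and output names are assumptions, not necessarily the repo's actual code):

```hcl
# dev-account/eu-central-1/dev/eks/terragrunt.hcl (hypothetical sketch)
include {
  path = find_in_parent_folders()
}

# Read the outputs of the vpc component applied above.
dependency "vpc" {
  config_path = "../vpc"
}

# Output names are illustrative; check the VPC module for the real ones.
inputs = {
  vpc_id     = dependency.vpc.outputs.vpc_id
  subnet_ids = dependency.vpc.outputs.private_subnet_ids
}
```

With that wiring in place, the eks component can be applied: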
```
cd dev-account/eu-central-1/dev/eks
terragrunt apply
```
Update your local kube configuration (in `~/.kube`) using the generated kubeconfig file; see the `kubeconfig_filename` output. Check the cluster state with `kubectl cluster-info` and `kubectl get pods -A`.
Configuration for the Tiller service (Helm v2's server-side component).
```
cd dev-account/eu-central-1/dev/tiller
terragrunt apply
```
Deploy the k8s dashboard.
```
cd dev-account/eu-central-1/dev/kubernetes-dashboard
terragrunt apply
```
Following best practice, the dashboard is not exposed externally; the recommended way to access it is to run a local proxy and authenticate with a token:

```
kubectl proxy
aws eks get-token --cluster-name dev01 | jq -r '.status.token'
```

With the proxy running, the dashboard is served through `http://localhost:8001/api/v1/namespaces/<namespace>/services/<service>/proxy/` (the exact path depends on the namespace and service name the module uses). Alternatively, create a dedicated service account, as described in the Kubernetes dashboard documentation.
Order is important.

Create the two security groups that will be used by the ALB for incoming traffic:

- security-group-http
- security-group-https

Create the nginx-ingress itself and a Kubernetes ingress for path-based mapping:

- nginx-ingress

The ALB itself has multiple dependencies (see the sketch after this list):

- security-group-http and security-group-https, to associate them with the load balancer for incoming traffic
- vpc, for the VPC id and the public subnets to use
- eks, for the worker security group id, so the load balancer can reach the workers
- nginx-ingress, for the node port to use

The last step is to associate the target group created by the ALB with the workers' ASG:

- autoscaling-attachment
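A minimal sketch of how the alb component's terragrunt.hcl could wire these dependencies together (paths, dependency names, and output names are assumptions, not the repo's actual code):

```hcl
# dev-account/eu-central-1/dev/module1/alb/terragrunt.hcl (hypothetical sketch)
dependency "sg_http"  { config_path = "../security-group-http" }
dependency "sg_https" { config_path = "../security-group-https" }
dependency "vpc"      { config_path = "../../vpc" }
dependency "eks"      { config_path = "../../eks" }
dependency "ingress"  { config_path = "../nginx-ingress" }

# All output names below are illustrative.
inputs = {
  security_groups          = [dependency.sg_http.outputs.id, dependency.sg_https.outputs.id]
  vpc_id                   = dependency.vpc.outputs.vpc_id
  subnets                  = dependency.vpc.outputs.public_subnet_ids
  worker_security_group_id = dependency.eks.outputs.worker_security_group_id
  backend_port             = dependency.ingress.outputs.node_port
}
```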
```
cd dev-account/eu-central-1/dev/module1
terragrunt apply
```
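For the last step, the core of the autoscaling-attachment component is presumably a single `aws_autoscaling_attachment` resource; a hypothetical sketch (variable names are assumptions):

```hcl
variable "workers_asg_name" {}      # fed from the eks component outputs
variable "alb_target_group_arn" {}  # fed from the alb component outputs

# Register the EKS workers ASG as a target of the ALB target group.
resource "aws_autoscaling_attachment" "workers" {
  autoscaling_group_name = var.workers_asg_name
  alb_target_group_arn   = var.alb_target_group_arn
}
```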
Get the URL of the ALB:

```
cd alb
terragrunt output dns_name
```

Then try `http://<dns_name>/container1`.
The IAM identity that creates the cluster becomes superadmin (system:masters), although it is not visible in the aws-auth config map. It is possible to grant cluster access to other IAM users or roles; a role is easier to manage. The cluster-access module creates 2 roles and the necessary policies to assume them.
```
cd dev-account/eu-central-1/dev/cluster-access
terragrunt apply
```
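As an illustration, here is the kind of assumable role such a module could create (names, account id, and policy scope are assumptions, not the repo's actual code):

```hcl
# Hypothetical cluster-access role; once mapped in the aws-auth config map,
# anyone allowed to assume it gets the corresponding access to the cluster.
resource "aws_iam_role" "eks_admin" {
  name = "dev01-eks-admin"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::123456789012:root" }  # placeholder account id
    }]
  })
}

# Attachable policy allowing selected users to assume the role.
resource "aws_iam_policy" "assume_eks_admin" {
  name = "assume-dev01-eks-admin"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "sts:AssumeRole"
      Resource = aws_iam_role.eks_admin.arn
    }]
  })
}
```

To use such a role, assume it when talking to the cluster, e.g. `aws eks update-kubeconfig --name dev01 --role-arn <role-arn>`.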
Execute `terragrunt destroy` on each component, in the reverse of the order used to create them:
```
cd module1
terragrunt destroy
cd ../autoscaling-attachment
terragrunt destroy
...
```
I sometimes have an issue when destroying the VPC; in that case, delete the offending resources from the AWS console and run `terragrunt refresh`.
```
cd dev-account/eu-central-1/dev/hello-world
terragrunt apply --terragrunt-source ../../../../../containers-infrastructure-modules//hello-world
```
```
Outputs:

asg_name = tf-asg-20190926070830396400000002
asg_security_group_id = sg-09b7388c5f832dbd8
elb_dns_name = hello-world-dev-1612745828.eu-central-1.elb.amazonaws.com
elb_security_group_id = sg-076c571aea4af57cd
url = http://hello-world-dev-1612745828.eu-central-1.elb.amazonaws.com:80

$ curl http://hello-world-dev-1612745828.eu-central-1.elb.amazonaws.com:80
Hello, World
```
It would be nice to have the module version (a git tag, e.g. v0.1.0) defined once per environment and used in the source string of each component. I did not manage to do it.
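One possible approach, untested here (file and variable names are assumptions), is to put the version in a per-environment file and read it from each component with Terragrunt's `read_terragrunt_config`:

```hcl
# dev-account/eu-central-1/dev/env.hcl (hypothetical)
locals {
  module_version = "v0.1.0"
}

# In a component's terragrunt.hcl (hypothetical)
locals {
  env = read_terragrunt_config(find_in_parent_folders("env.hcl"))
}

terraform {
  # <modules-repo-url> is a placeholder for the real repository URL.
  source = "git::<modules-repo-url>//hello-world?ref=${local.env.locals.module_version}"
}
```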