This example shows how to run a highly-available HashiCorp Vault cluster on Google Compute Engine.
- Install Terraform locally or in Cloud Shell.

- Install Vault locally or in Cloud Shell. You only need to install the
  `vault` binary - you do not need to start a Vault server locally or
  configure anything.

- Install `gcloud` for your platform.

- Authenticate the local SDK:

  ```text
  $ gcloud auth application-default login
  ```
- Create a new project or use an existing project. Save the project ID for
  use in the following commands:

  ```text
  $ export GOOGLE_CLOUD_PROJECT="my-project-id"
  ```
- Enable the Compute Engine API (Terraform will enable other required APIs):

  ```text
  $ gcloud services enable --project "${GOOGLE_CLOUD_PROJECT}" \
      compute.googleapis.com
  ```
- Create a `terraform.tfvars` file in the current working directory with
  your configuration data:

  ```hcl
  project_id = "..."
  ```
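Only `project_id` is required; the other inputs are optional. For reference, a fuller `terraform.tfvars` that overrides the defaults from the inputs table below might look like this (all values are illustrative):

```hcl
# Required: the project to deploy into.
project_id = "my-project-id"

# Optional overrides; see the inputs table for defaults.
region                = "us-east4"
kms_keyring           = "vault"
kms_crypto_key        = "vault-init"
load_balancing_scheme = "EXTERNAL"
allow_public_egress   = true
```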
- Download the required providers:

  ```text
  $ terraform init
  ```

- Plan the changes:

  ```text
  $ terraform plan
  ```

- Assuming no errors, apply:

  ```text
  $ terraform apply
  ```

  After about 5 minutes, you will have a fully-provisioned Vault cluster.
  Note that Terraform returns before the instances finish provisioning;
  Vault is installed and configured via a startup script.
- Configure your local Vault binary to communicate with the Vault server:

  ```text
  $ export VAULT_ADDR="$(terraform output vault_addr)"
  $ export VAULT_CACERT="$(pwd)/ca.crt"
  ```
- Verify Vault is available:

  ```text
  $ vault status
  ```

  If you see an "i/o timeout" or "connection refused" error, the Vault
  servers may not have finished provisioning. Wait a few minutes and try
  again.
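Rather than retrying by hand, you can poll until the API responds. The `wait_for` helper below is a sketch, not part of the Vault CLI: it relies on the fact that `vault status` exits 1 on an error such as a connection failure, while 0 (unsealed) and 2 (sealed) both mean the server is reachable.

```shell
# wait_for: retry a command until its exit code is anything other than 1.
# For `vault status`, exit code 1 means an error such as "connection
# refused", while 0 (unsealed) and 2 (sealed) mean the server is up.
wait_for() {
  while "$@" > /dev/null 2>&1; code=$?; [ "$code" -eq 1 ]; do
    echo "Vault not reachable yet; retrying in 10s..." >&2
    sleep 10
  done
  return "$code"
}

# Usage: wait_for vault status
```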
- Initialize the Vault cluster, generating the initial root token and unseal
  keys:

  ```text
  $ vault operator init \
      -recovery-shares 5 \
      -recovery-threshold 3
  ```

  The Vault servers will automatically unseal using the Google Cloud KMS key
  created earlier. The recovery shares are given to operators to unseal the
  Vault nodes in case Cloud KMS is unavailable in a disaster recovery
  scenario. They can also be used to generate a new root token. Distribute
  these keys to trusted people on your team (such as the people who will be
  on-call and responsible for maintaining Vault).

  The output will look like this:

  ```text
  Recovery Key 1: 2EWrT/YVlYE54EwvKaH3JzOGmq8AVJJkVFQDni8MYC+T
  Recovery Key 2: 6WCNGKN+dU43APJuGEVvIG6bAHA6tsth5ZR8/bJWi60/
  Recovery Key 3: XC1vSb/GfH35zTK4UkAR7okJWaRjnGrP75aQX0xByKfV
  Recovery Key 4: ZSvu2hWWmd4ECEIHj/FShxxCw7Wd2KbkLRsDm30f2tu3
  Recovery Key 5: T4VBvwRv0pkQLeTC/98JJ+Rj/Zn75bLfmAaFLDQihL9Y
  Initial Root Token: s.kn11NdBhLig2VJ0botgrwq9u
  ```

  Save this initial root token and do not clear your history. You will need
  this token to continue the tutorial.
- Verify Vault is initialized:

  ```text
  $ vault operator init -status
  ```

  The command will exit successfully if Vault is initialized.
- Verify Vault is unsealed:

  ```text
  $ vault status
  ```

  The output will include "Sealed: false".
- Login with the initial root token:

  ```text
  $ vault login
  Token (will be hidden): (paste token here)
  ```
-
Configure Vault to send its audit logs to Stackdriver
$ vault audit enable file file_path=/var/log/vault/audit.log
Audit logs will now appear in Stackdriver for all requests and responses to Vault. Note the path
/var/log/vault/audit.log
refers to a path on the Vault node itself. This path is not configurable.
From here, there are several related configurations you may want to explore:

- Create GCP service accounts
- Use Cloud KMS in Vault
- Auth to Vault with service accounts
- GCS storage backend
- Spanner storage backend
- Destroy the infrastructure:

  ```text
  $ terraform destroy
  ```

  Note: Cloud KMS keys cannot be destroyed. If you destroy the
  infrastructure and try to re-create it, you will need to change the names
  of the Cloud KMS keys, or the subsequent `terraform apply` will fail with
  a "resource already exists" error.

- Unset the Vault configuration variables:

  ```text
  $ unset VAULT_ADDR VAULT_CACERT
  ```
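If you do re-create the cluster, one way around the Cloud KMS restriction above is to pick fresh key names in `terraform.tfvars`. The names below are arbitrary examples; any names that differ from the previous deployment's will work:

```hcl
# New key names for a re-created deployment (examples only).
kms_keyring    = "vault-1"
kms_crypto_key = "vault-init-1"
```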
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| allow_public_egress | Whether to create a NAT for external egress. If false, you must also specify an `http_proxy` to download required executables, including Vault, Fluentd, and Stackdriver | bool | `true` | no |
| kms_crypto_key | Name of the GCP KMS crypto key | string | `"vault-init"` | no |
| kms_keyring | Name of the GCP KMS keyring | string | `"vault"` | no |
| load_balancing_scheme | Scheme of the load balancer, e.g. [INTERNAL\|EXTERNAL] | string | `"EXTERNAL"` | no |
| project_id | Project ID in which to deploy | string | n/a | yes |
| region | Region in which to deploy | string | `"us-east4"` | no |
| Name | Description |
|------|-------------|
| vault_addr | n/a |