Fail to destroy and recreate google_monitoring_uptime_check_config if there is a linked google_monitoring_alert_policy #3133

Closed
ghost opened this issue Feb 27, 2019 · 4 comments

Comments

@ghost

ghost commented Feb 27, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
  • If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.

Terraform Version

  • Terraform v0.11.11
  • provider.google v2.0.0
  • provider.google-beta v2.0.0

Affected Resource(s)

  • google_monitoring_uptime_check_config
  • google_monitoring_alert_policy

Terraform Configuration Files

resource "google_monitoring_uptime_check_config" "my_check" {
  display_name = "My Check"
  timeout = "10s"
  period = "60s"

  http_check {
    port = 443
    use_ssl = true
  }

  monitored_resource {
    type = "uptime_url"
    labels = {
      host = "something.url.com"
      project_id = "id"
    }
  }
}

resource "google_monitoring_alert_policy" "alert_policy" {
  display_name = "My Alert"
  combiner = "OR"
  conditions {
    display_name = "Uptime Health Check"
    condition_threshold {
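      # basename() takes the last path segment of the check's id, which is the check_id label used in this filter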
      filter = "metric.type=\"monitoring.googleapis.com/uptime_check/check_passed\" resource.type=\"uptime_url\" metric.label.\"check_id\"=\"${basename(google_monitoring_uptime_check_config.my_check.id)}\""
      duration = "60s"
      comparison = "COMPARISON_GT"
      threshold_value = 1.0
      trigger {
        count = 1
      }
      aggregations {
        alignment_period = "1200s"
        cross_series_reducer = "REDUCE_COUNT_FALSE"
        group_by_fields = ["resource.*"]
        per_series_aligner = "ALIGN_NEXT_OLDER"
      }
    }
  }
}

Debug Output

googleapi: Error 400: Cannot delete check my-check. One or more alerting policies is using it.

Full output here

Expected Behavior

The linked google_monitoring_alert_policy should be destroyed and recreated (because the underlying google_monitoring_uptime_check_config also needs to be destroyed and recreated).

Actual Behavior

Terraform tries to delete and recreate the uptime check, but fails because the API refuses to delete a check that still has an alert policy linked to it.

Steps to Reproduce

  1. Set up a Terraform configuration linking an alert_policy to an uptime_check (see above).
  2. terraform apply
  3. Amend the uptime_check in a way that will cause it to be destroyed and recreated, e.g. change period from 60s to 300s (see the sketch after this list).
  4. terraform apply
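
For step 3, a minimal sketch of such a change (identical to the configuration above except for period) would be:

resource "google_monitoring_uptime_check_config" "my_check" {
  display_name = "My Check"
  timeout = "10s"
  period = "300s"   # changed from "60s"; the provider treats a period change as destroy-and-recreate

  http_check {
    port = 443
    use_ssl = true
  }

  monitored_resource {
    type = "uptime_url"
    labels = {
      host = "something.url.com"
      project_id = "id"
    }
  }
}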

References

The destroy-and-recreate behaviour for period was fixed in #2703, but that fix is of limited use on its own, given that most uptime checks will want an alert policy associated with them. Issue #3132 is also related to the uptime_check resource: there, too, Terraform attempts an in-place update instead of a destroy and recreate. Perhaps it is worth looking through the Google API to see whether there are any other changes that need making too.

@chrisst
Contributor

chrisst commented Mar 1, 2019

Unfortunately I think this is another example of Terraform's limitation in linking the recreation of one resource to the recreation of a second, dependent resource. This is due in part to how tolerant google_monitoring_alert_policy is of in-place updates, coupled with the opposite behavior in google_monitoring_uptime_check_config.

This is another concrete example of how a solution for hashicorp/terraform#8099 would be very helpful.

Usually we work around this by interpolating an attribute of the first resource into a field of the dependent resource that also triggers a recreate, but since the only such field on alert policy is project, that isn't a viable workaround right now (see the hypothetical sketch below). I think this use case is common enough that we may have to bake in an artificial trigger field to force this dependent relation.
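
For context, the usual interpolation workaround looks roughly like the hypothetical sketch below. It is illustrative only; as noted above, project is the only field on alert policy that forces a recreate, which is why the pattern isn't viable here:

resource "google_monitoring_alert_policy" "alert_policy" {
  # Hypothetical illustration of the interpolation workaround: referencing a
  # computed attribute of the uptime check from a field that forces recreation
  # of the alert policy. Not viable in this case, since project is the only
  # such field on google_monitoring_alert_policy.
  project = "${google_monitoring_uptime_check_config.my_check.project}"

  display_name = "My Alert"
  combiner = "OR"
  # ... conditions block as in the original configuration ...
}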

@chrisst
Contributor

chrisst commented Mar 1, 2019

After some thought, I think that rather than adding extra fields to alert policy to force the recreate dependency, this particular problem can be solved with Terraform's existing create_before_destroy lifecycle setting. The following should allow the alert policy to be updated in place:

resource "google_monitoring_uptime_check_config" "my_check" {
  display_name = "My Check"
  timeout = "10s"
  period = "60s"

  http_check {
    port = 443
    use_ssl = true
  }

  monitored_resource {
    type = "uptime_url"
    labels = {
      host = "something.url.com"
      project_id = "id"
    }
  }

  lifecycle {
    create_before_destroy = true
  }
}

@ghost
Author

ghost commented Mar 4, 2019

Ah brilliant - that lifecycle change works a treat. It creates the new uptime check, moves the alert policy over to point at it, then destroys the old one. Fab.

@ghost ghost closed this as completed Mar 4, 2019
@ghost ghost removed the waiting-response label Mar 4, 2019
@ghost

ghost commented Apr 3, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 3, 2019