Breaking change in 0.25.0 - all instances that use cloud_config must be replaced due to destroy_cloud_config_vdi_after_boot #267

The addition of destroy_cloud_config_vdi_after_boot in #255 adds a new field to all existing VM instances. Since it forces recreation on change (going from null to true/false in the case of existing VMs), Terraform will destroy any existing virtual machine that doesn't specifically ignore_changes on it. For example, when nothing is specified, the plan shows + destroy_cloud_config_vdi_after_boot = false # forces replacement (full reproduction in the comments below).

Anyway, the quick and easy solution is to add a lifecycle ignore_changes to the VM instances (see the sketch below), but this seems like something that could impact many users.

So I guess my question is: was this an intentional effect in 0.25.0? If yes, that seems like something that should be in the release notes.
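A minimal sketch of that workaround, assuming only the new attribute needs to be ignored:

```hcl
resource "xenorchestra_vm" "instance" {
  # ... existing arguments unchanged ...

  lifecycle {
    # Workaround: stop the provider-added attribute from forcing
    # a destroy/recreate of an existing VM.
    ignore_changes = [destroy_cloud_config_vdi_after_boot]
  }
}
```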
Comments
@InputObject2 It was not intended to be a breaking change; it was an oversight during testing of the change, and I apologize for that. I need to research the best way to apply the default value, but my current thinking is that either the plan diff should be customized to ignore this condition, or the state needs to be explicitly migrated to apply the default value.
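For reference, terraform-plugin-sdk v2 supports the second option via a schema version bump plus a StateUpgrader. A rough sketch of what backfilling the default could look like — the package name, resourceVmV0, and vmStateUpgradeV0 are hypothetical, not the provider's actual code:

```go
package xoa

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// resourceVmV0 returns the prior (v0) schema; trimmed to one field here.
func resourceVmV0() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"name_label": {Type: schema.TypeString, Required: true},
			// ... remaining v0 fields elided ...
		},
	}
}

func resourceVm() *schema.Resource {
	return &schema.Resource{
		// Bump the schema version so existing state gets migrated.
		SchemaVersion: 1,
		StateUpgraders: []schema.StateUpgrader{
			{
				Version: 0,
				Type:    resourceVmV0().CoreConfigSchema().ImpliedType(),
				Upgrade: vmStateUpgradeV0,
			},
		},
		// ... current schema and CRUD functions elided ...
	}
}

// vmStateUpgradeV0 backfills the new attribute so plans no longer see
// a null -> false transition that forces replacement.
func vmStateUpgradeV0(ctx context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) {
	if _, ok := rawState["destroy_cloud_config_vdi_after_boot"]; !ok {
		rawState["destroy_cloud_config_vdi_after_boot"] = false
	}
	return rawState, nil
}
```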
It seems it should be possible to write an acceptance test for the type of state migration I explained above (hashicorp/terraform-plugin-sdk#253 (comment)). I have a code change that I believe should address the problem, but I need to investigate the acceptance-test portion of it further.
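The migration itself can also be unit-tested directly before the full acceptance test is wired up; a sketch against the hypothetical vmStateUpgradeV0 above:

```go
package xoa

import (
	"context"
	"testing"
)

func TestVmStateUpgradeV0(t *testing.T) {
	// Simulate v0 state written before the attribute existed.
	raw := map[string]interface{}{
		"name_label": "test-repro",
	}

	got, err := vmStateUpgradeV0(context.Background(), raw, nil)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if v, ok := got["destroy_cloud_config_vdi_after_boot"]; !ok || v != false {
		t.Fatalf("expected default false, got %v (present=%v)", v, ok)
	}
}
```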
@InputObject2 can you provide detailed steps for reproducing the issue (Terraform version, previous provider version, Terraform code for the VM)? I tried to reproduce it through this acceptance test, which simulates creating a VM with v0.24.2 of the provider followed by a plan against the new version.
I also performed the same test manually and could not reproduce the issue.
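For context, this style of cross-version simulation typically pins the released provider in one step and switches to the local build for a plan-only step. A sketch — testAccProviderFactories and the config string are placeholders, not the provider's actual test code:

```go
package xoa

import (
	"testing"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

func TestAccVm_upgradeFrom_0_24_2(t *testing.T) {
	config := `resource "xenorchestra_vm" "instance" { /* ... */ }`

	resource.Test(t, resource.TestCase{
		Steps: []resource.TestStep{
			{
				// Step 1: create the VM with the released v0.24.2 provider.
				ExternalProviders: map[string]resource.ExternalProvider{
					"xenorchestra": {
						Source:            "terra-farm/xenorchestra",
						VersionConstraint: "0.24.2",
					},
				},
				Config: config,
			},
			{
				// Step 2: re-plan with the local build; an empty plan
				// means the upgrade is non-breaking.
				ProviderFactories: testAccProviderFactories, // defined in the provider's test setup (placeholder here)
				Config:            config,
				PlanOnly:          true,
			},
		},
	})
}
```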
Hi! Thanks for diving into this so quickly! This is a bit of a long post, but here we go.

Current versions: Terraform 1.5.7, provider v0.25.0.

Steps

1- Terraform setup with v0.24.2

Here's the Terraform code for it (full code):

terraform {
required_providers {
xenorchestra = {
source = "terra-farm/xenorchestra"
version = "0.24.2"
}
}
}
provider "xenorchestra" {}
variable "instance_os_disk_xoa_sr_uuid" {
default = "b1280ccc-2d73-6d34-8285-78ad87a5c4d1"
}
variable "instance_xoa_template_uuid" {
default = "c1a888dc-8cc7-9444-f995-391a92a9af07"
}
variable "instance_xoa_network" {
default = "7c2a2531-1298-495c-edd2-fad16e7c2226"
}
variable "public_ssh_key" {
default = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCvUQ/2WaIYub7Ns8psnOPoYaaArZcoRrfTtDDHXruSZfOnbPrvfFInuIdI11AxwodzKILv8oeUOqmFSpmGOBZn4Hy1An2eM39WG8025JKNE9UainAfKlpX8HgMeSyqdT7X50HI7LsgUYvrbW4tPnLt9Dh+Wsgn9+ErQsE0Hj8IExZv9O/YDLJ6Lin3nD775ncXvHbI1nFfcTmJ/kW9NXvyP+AJYVrbP1hxC72BNQfbJWvhYymyDAhEhzFudCjz420ajqrWwsNzJIAV4P3gVWHUNVntllqJtf60EoQhKTAPZxl3Pm+OgneG8zLMC4PkSeXG4nw26kmusH7CLxd/BX3DrlXLpdvL7RMbDuwl/b183HoKsCfx9kAID6KVB1qCLRw/E5g/F6EeIhK4n2Tr/82PIi3Iw3N93PyfLAjn9HmAgQnXW/uQCqwR2+s5uPflysOTRExxIEIaZsWSaTrgte1+33dIQMpYK7YgpYNncuQGYGZH1cxYYbs0Y8UheQ5i0mgSzsTQWY+VPnZRgAGZ2Wmz+1Ndr8AaHvzL81DLGl8355wfXiuK06eTqRzAaepIUZGAanVwllCm4XFVVzeIPIFcnfcVTnsCJ0xcDFQxdUlsrRGdD04fQu1ioX1lhT0P03VA1thUtKRkmo+thT2bOwZV+eZeGzaHxQe21WQmKU/WDQ== cloud-user@localhost"
}
data "local_file" "cloud_network_config" {
filename = "templates/cloud_network_config.yaml"
}
resource "xenorchestra_cloud_config" "test" {
name = "repro-test"
template = <<EOF
#cloud-config
hostname: "repro-test"
users:
- name: cloud-user
gecos: cloud-user
shell: /bin/bash
sudo: ALL=(ALL) NOPASSWD:ALL
ssh_authorized_keys:
- ${var.public_ssh_key}
packages:
- ca_root_nss
EOF
}
resource "xenorchestra_vm" "instance" {
name_label = "test-repro"
cloud_config = xenorchestra_cloud_config.test.template
cloud_network_config = data.local_file.cloud_network_config.content
template = var.instance_xoa_template_uuid
auto_poweron = true
network {
network_id = var.instance_xoa_network
}
disk {
sr_id = var.instance_os_disk_xoa_sr_uuid
name_label = "test-repro"
size = 10 * 1024 * 1024 * 1024 # GB to B
}
cpus = 2
memory_max = 2 * 1024 * 1024 * 1024 # GB to B
wait_for_ip = false
}

2- Terraform init

terraform init
Initializing the backend...
Initializing provider plugins...
- Finding terra-farm/xenorchestra versions matching "0.24.2"...
- Finding latest version of hashicorp/local...
- Installing terra-farm/xenorchestra v0.24.2...
- Installed terra-farm/xenorchestra v0.24.2 (self-signed, key ID 6A6E2EACF91F3875)
- Installing hashicorp/local v2.4.0...
- Installed hashicorp/local v2.4.0 (signed by HashiCorp)

3- Terraform plan

Terraform plan in v0.24.2:

terraform plan
data.local_file.cloud_network_config: Reading...
data.local_file.cloud_network_config: Read complete after 0s [id=a0f68fd5efd854626b32e9305902189d3f626b1f]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# xenorchestra_cloud_config.test will be created
+ resource "xenorchestra_cloud_config" "test" {
+ id = (known after apply)
+ name = "repro-test"
+ template = <<-EOT
#cloud-config
hostname: "repro-test"
users:
- name: cloud-user
gecos: cloud-user
shell: /bin/bash
sudo: ALL=(ALL) NOPASSWD:ALL
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCvUQ/2WaIYub7Ns8psnOPoYaaArZcoRrfTtDDHXruSZfOnbPrvfFInuIdI11AxwodzKILv8oeUOqmFSpmGOBZn4Hy1An2eM39WG8025JKNE9UainAfKlpX8HgMeSyqdT7X50HI7LsgUYvrbW4tPnLt9Dh+Wsgn9+ErQsE0Hj8IExZv9O/YDLJ6Lin3nD775ncXvHbI1nFfcTmJ/kW9NXvyP+AJYVrbP1hxC72BNQfbJWvhYymyDAhEhzFudCjz420ajqrWwsNzJIAV4P3gVWHUNVntllqJtf60EoQhKTAPZxl3Pm+OgneG8zLMC4PkSeXG4nw26kmusH7CLxd/BX3DrlXLpdvL7RMbDuwl/b183HoKsCfx9kAID6KVB1qCLRw/E5g/F6EeIhK4n2Tr/82PIi3Iw3N93PyfLAjn9HmAgQnXW/uQCqwR2+s5uPflysOTRExxIEIaZsWSaTrgte1+33dIQMpYK7YgpYNncuQGYGZH1cxYYbs0Y8UheQ5i0mgSzsTQWY+VPnZRgAGZ2Wmz+1Ndr8AaHvzL81DLGl8355wfXiuK06eTqRzAaepIUZGAanVwllCm4XFVVzeIPIFcnfcVTnsCJ0xcDFQxdUlsrRGdD04fQu1ioX1lhT0P03VA1thUtKRkmo+thT2bOwZV+eZeGzaHxQe21WQmKU/WDQ== cloud-user@localhost
packages:
- ca_root_nss
EOT
}
# xenorchestra_vm.instance will be created
+ resource "xenorchestra_vm" "instance" {
+ auto_poweron = true
+ cloud_config = <<-EOT
#cloud-config
hostname: "repro-test"
users:
- name: cloud-user
gecos: cloud-user
shell: /bin/bash
sudo: ALL=(ALL) NOPASSWD:ALL
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCvUQ/2WaIYub7Ns8psnOPoYaaArZcoRrfTtDDHXruSZfOnbPrvfFInuIdI11AxwodzKILv8oeUOqmFSpmGOBZn4Hy1An2eM39WG8025JKNE9UainAfKlpX8HgMeSyqdT7X50HI7LsgUYvrbW4tPnLt9Dh+Wsgn9+ErQsE0Hj8IExZv9O/YDLJ6Lin3nD775ncXvHbI1nFfcTmJ/kW9NXvyP+AJYVrbP1hxC72BNQfbJWvhYymyDAhEhzFudCjz420ajqrWwsNzJIAV4P3gVWHUNVntllqJtf60EoQhKTAPZxl3Pm+OgneG8zLMC4PkSeXG4nw26kmusH7CLxd/BX3DrlXLpdvL7RMbDuwl/b183HoKsCfx9kAID6KVB1qCLRw/E5g/F6EeIhK4n2Tr/82PIi3Iw3N93PyfLAjn9HmAgQnXW/uQCqwR2+s5uPflysOTRExxIEIaZsWSaTrgte1+33dIQMpYK7YgpYNncuQGYGZH1cxYYbs0Y8UheQ5i0mgSzsTQWY+VPnZRgAGZ2Wmz+1Ndr8AaHvzL81DLGl8355wfXiuK06eTqRzAaepIUZGAanVwllCm4XFVVzeIPIFcnfcVTnsCJ0xcDFQxdUlsrRGdD04fQu1ioX1lhT0P03VA1thUtKRkmo+thT2bOwZV+eZeGzaHxQe21WQmKU/WDQ== cloud-user@localhost
packages:
- ca_root_nss
EOT
+ cloud_network_config = <<-EOT
network:
version: 1
config:
- type: physical
name: xn0
subnets:
- type: dhcp
EOT
+ core_os = false
+ cpu_cap = 0
+ cpu_weight = 0
+ cpus = 2
+ exp_nested_hvm = false
+ hvm_boot_firmware = "bios"
+ id = (known after apply)
+ ipv4_addresses = (known after apply)
+ ipv6_addresses = (known after apply)
+ memory_max = 2147483648
+ name_label = "test-repro"
+ power_state = (known after apply)
+ start_delay = 0
+ template = "c1a888dc-8cc7-9444-f995-391a92a9af07"
+ vga = "std"
+ videoram = 8
+ wait_for_ip = false
+ disk {
+ name_label = "test-repro"
+ position = (known after apply)
+ size = 10737418240
+ sr_id = "b1280ccc-2d73-6d34-8285-78ad87a5c4d1"
+ vbd_id = (known after apply)
+ vdi_id = (known after apply)
}
+ network {
+ device = (known after apply)
+ ipv4_addresses = (known after apply)
+ ipv6_addresses = (known after apply)
+ mac_address = (known after apply)
+ network_id = "7c2a2531-1298-495c-edd2-fad16e7c2226"
}
}
Plan: 2 to add, 0 to change, 0 to destroy.

4- Terraform apply

Terraform apply in v0.24.2.

5- Set the provider version to 0.25.0 and terraform init -upgrade

terraform {
required_providers {
xenorchestra = {
source = "terra-farm/xenorchestra"
version = "0.25.0"
}
}
}

Then upgrade the provider:

terraform init -upgrade
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/local...
- Finding terra-farm/xenorchestra versions matching "0.25.0"...
- Using previously-installed hashicorp/local v2.4.0
- Installing terra-farm/xenorchestra v0.25.0...
- Installed terra-farm/xenorchestra v0.25.0 (self-signed, key ID 6A6E2EACF91F3875)

6- Terraform plan again

Terraform plan in v0.25.0:

terraform plan
data.local_file.cloud_network_config: Reading...
data.local_file.cloud_network_config: Read complete after 0s [id=a0f68fd5efd854626b32e9305902189d3f626b1f]
xenorchestra_cloud_config.test: Refreshing state... [id=d7b437bc-969c-4bc1-83d0-65b1cd01ba8e]
xenorchestra_vm.instance: Refreshing state... [id=33b5ddb6-c126-7de9-cd58-329d29da5c48]
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

It seems to be hit or miss as to when Terraform decides to see the added property, but change literally anything (in my case I set name_description = "test") and Terraform will decide it now sees the parameter and destroys the VM.

resource "xenorchestra_vm" "instance" {
[...]
name_description = "test"
}

Then if we plan again, the change is detected, and even though name_description is not something that usually requires the instance to be destroyed, the plan forces a full replacement.

Terraform plan in v0.25.0 with any other change:

terraform plan
data.local_file.cloud_network_config: Reading...
data.local_file.cloud_network_config: Read complete after 0s [id=a0f68fd5efd854626b32e9305902189d3f626b1f]
xenorchestra_cloud_config.test: Refreshing state... [id=d7b437bc-969c-4bc1-83d0-65b1cd01ba8e]
xenorchestra_vm.instance: Refreshing state... [id=33b5ddb6-c126-7de9-cd58-329d29da5c48]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# xenorchestra_vm.instance must be replaced
-/+ resource "xenorchestra_vm" "instance" {
- blocked_operations = [] -> null
+ destroy_cloud_config_vdi_after_boot = false # forces replacement
~ id = "33b5ddb6-c126-7de9-cd58-329d29da5c48" -> (known after apply)
~ ipv4_addresses = [] -> (known after apply)
~ ipv6_addresses = [] -> (known after apply)
+ name_description = "test"
~ power_state = "Running" -> (known after apply)
- tags = [] -> null
# (16 unchanged attributes hidden)
~ disk {
- attached = true -> null
~ position = "0" -> (known after apply)
~ vbd_id = "952f2645-e014-1975-0a2e-b0ceac2facda" -> (known after apply)
~ vdi_id = "ce5d24d5-eb8e-491a-b033-b03308211006" -> (known after apply)
# (3 unchanged attributes hidden)
}
~ network {
- attached = true -> null
~ device = "0" -> (known after apply)
~ ipv4_addresses = [] -> (known after apply)
~ ipv6_addresses = [] -> (known after apply)
~ mac_address = "4a:38:dc:3a:a2:d5" -> (known after apply)
# (1 unchanged attribute hidden)
}
}
Plan: 1 to add, 0 to change, 1 to destroy.
@InputObject2 thanks for the detailed information. I'm able to reproduce it in my acceptance test now 👍
Unfortunately, the state migration testing capability requires terraform-plugin-sdk v2.23.0 or later and a newer Go version. Since it's been a while since the SDK was upgraded, I'm going to upgrade it to the latest version (v2.29.0) as a prerequisite. I don't believe there will be any complications with upgrading, but our nightly CI that was recently put into place hard-codes the Go version. So in addition to the SDK and Go upgrades, I need to enhance CI to support parameterizing the Go version before making that change. This should pave the way for testing the provider against different Terraform versions, and OpenTofu as well.
I decided to hold off on the build work mentioned above. I was able to get the terraform-plugin-sdk and Go upgrades tested without too much trouble. I'll try to get the fix for this merged this week and make a release around the same time frame.
This will be fixed in v0.25.1, which will be released shortly.
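In the meantime, affected configurations can avoid the broken release by constraining the provider version once the fix ships (a sketch; adjust the constraint to taste):

```hcl
terraform {
  required_providers {
    xenorchestra = {
      source = "terra-farm/xenorchestra"
      # Skip 0.25.0, which forces replacement of existing VMs.
      version = ">= 0.25.1"
    }
  }
}
```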