
[Question] How to use new disk definition format in variable e.g. module #907

Open
hagaram opened this issue Jan 22, 2024 · 33 comments
Labels: resource/qemu, type/question

Comments

@hagaram

hagaram commented Jan 22, 2024

Hi, I have a Terraform Proxmox module based on this provider, and I'm having trouble modifying the module's VM resource definition to work correctly with the new disks layout --> #892

How should I define the disks block in a variable so that I can pass it verbatim, or at least provide a structure from which the correct disks block can be built in the resource definition inside the module?

@hagaram
Author

hagaram commented Jan 23, 2024

When using the resource directly it's fine, but when trying to pass the configuration from a variable into a module and build the disks block, the new structure is horrible to work with. It would be nice to cover this in the documentation. Yes, it's a little out of scope for a provider, but I believe it would really help some people, including me.

@Korney95

Hi, today I tried to write a Terraform module for the new version of the provider, but unfortunately I could not cover all the disk types with dynamic blocks, since the schema doesn't work well with dynamic variables. Is there any way to move this mess into variables? As it stands, this can't be used to build Terraform modules.

@clincha

clincha commented Jan 29, 2024

You can take a look at the vm_qemu.md file in that merge request you linked (#892). The diff shows the new ways to configure disks against the old. Is that the kind of thing you're looking for?

@hagaram
Author

hagaram commented Jan 30, 2024

@clincha Unfortunately, that's not the solution I'm looking for, and I don't think @Korney95 is either.

When you use the resource directly, as shown in the example you posted, it works as intended, without issues.
But if you want to wrap the provider in a module - for whatever reason (I personally create DNS entries and Netbox entries along with the VM) - it is absolutely impossible to pass or somehow construct the disks block from some kind of variable.

disks is a block, not an argument to which you could assign a variable from the module instance. That's what makes this hard to do.
What I'm trying to do is something like this (non-working example, one of many):

Define dynamic disks block

  dynamic "disks" {
    for_each = var.vm_disk
    iterator = item
    content {
      dynamic "ide" {
        for_each = item.value.ide[*]
        content {
          disk {
            # ... argument assignments from ide.value
          }
        }
      }
      dynamic "sata" {
        for_each = item.value.sata[*]
        content {
          disk {
            # ... argument assignments from sata.value
          }
        }
      }
      dynamic "scsi" {
        for_each = item.value.scsi[*]
        content {
          disk {
            # ... argument assignments from scsi.value
          }
        }
      }
      dynamic "virtio" {
        for_each = item.value.virtio[*]
        content {
          disk {
            # ... argument assignments from virtio.value
          }
        }
      }
    }
  }

Define variable:

variable "vm_disk" {
type = map(object({
  ide = optional(object({
    # (whatever attributes are needed for ide disks)
  }))
  sata = optional(object({
    # (whatever attributes are needed for sata disks)
  }))
  scsi = optional(object({
    # (whatever attributes are needed for scsi disks)
  }))
  virtio = optional(object({
    # (whatever attributes are needed for virtio disks)
  }))
}))
}

And then use the variable in the module:

vm_disk = {
  # some object
}

Which would then be "parsed" into the correct disks block. There is currently no other way to pass it to the resource in the module, and it's impossible for me to do it correctly. I've spent quite a lot of hours trying to make it work somehow.

@Korney95

Korney95 commented Feb 1, 2024

@hagaram did you get anywhere? Unfortunately, I don't have working examples either. I've tried a lot of things.

@Courtlane

Courtlane commented Feb 5, 2024

Same problem here.
Unfortunately, I also couldn't manage to create a module with this new nested disks block design.
In older versions of the provider there was a nice way to use a dynamic disk block, because disk could be specified multiple times and you could pass multiple disks in a variable as a map, like this:

TF Module

variables.tf:

variable "disks" {
  type        = map(any)
  default     = {}
}

main.tf:

resource "proxmox_vm_qemu" "proxmox_vm" {
....
  dynamic "disk" {
    for_each = var.disks
    content {
      size    = disk.value.size
      type    = disk.value.type
      storage = disk.value.storage
      backup  = disk.value.backup
      cache   = disk.value.cache
    }
  } 
} 

Module Input

module "test-vm" {
  source = "git::https://....."

  disks = {
    "disk1" = {
      size         = "20G",
      type         = "virtio",
      storage      = "datastore_name",
      backup       = 1,
      cache        = "directsync",
    }
    "disk2" = {
      size         = "30G"
      type         = "virtio"
      storage      = "vdatastore_name"
      backup       = 1
      cache        = "directsync"
    }
  }
}

@comminutus

I have the same problem. I tried to upgrade my code to use the new disks block, but my disks are specified dynamically through a config variable. Having to spell out disk1, disk2, etc. won't work. I'll have to stay on the old version of this provider until this is fixed.

@Tinyblargon
Collaborator

I thought #794 had some insight on how to do this in the comments.

@comminutus

@Tinyblargon Could you point me to where exactly? From my understanding, the overhaul of the disks block doesn't support dynamic disks based on your comment here: #794 (comment)

@Tinyblargon
Collaborator

Tinyblargon commented Feb 9, 2024

@comminutus it doesn't support dynamic blocks in the traditional sense, but this #794 (comment) explains how you could do it when you use it in a module.
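
In short, the pattern is a conditional for_each that emits either zero or one copies of each fixed slot block. A minimal sketch, assuming a hypothetical var.data_disk input (the same pattern appears in fuller examples later in this thread):

variable "data_disk" {
  type = object({
    size    = string
    storage = string
  })
  default = null
}

resource "proxmox_vm_qemu" "example" {
  # ... other VM arguments omitted ...
  disks {
    scsi {
      dynamic "scsi0" {
        # One iteration when a disk is configured, none otherwise.
        for_each = var.data_disk != null ? [var.data_disk] : []
        content {
          disk {
            size    = scsi0.value.size
            storage = scsi0.value.storage
          }
        }
      }
    }
  }
}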

@gbrackley

Thanks for everyone's efforts to progress the Proxmox provider. I'm keen to use the newer v3 release. I've been using the provider to deploy Flatcar Linux VMs with Butane/Ignition configurations.

I can see the provider can be used directly in a Terraform configuration, but I (and others) seem to be using it from a Terraform module. I suspect the design has been driven by the underlying QEMU/Proxmox Go API rather than by a combination of that and typical uses of the provider. It would be great if there were a test case/example of this (I was trying to get my module deploying as an example and got stuck).

IMHO the new disk configuration is too complex. Looking at the code base, there is a large and deep schema definition for the disks parameter. That said, the design is highly orthogonal. I tried to get ChatGPT to write a module variable definition matching the schema, but it all got too big and repetitive.

I know there are design constraints with Proxmox (and QEMU), but it would be great if disks were a separate resource from the VM. Provisioning disks shouldn't be so tightly coupled to compute resources. I would really like disks to have a lifecycle independent of the VM, so I can mark them as 'prevent_destroy'.

I'm stuck providing an interface to my module to allow the usage of the Proxmox provider. How are people getting this working today? Has anyone written a huge definition for a disks variable?

@jvalskis

jvalskis commented Feb 11, 2024

it doesn't support dynamic blocks in the traditional sense, but this #794 (comment) explains how you could do it when you use it in a module.

It's technically possible, but you have to write every single possible block with a for_each and a filter that narrows it down to that one specific value. So in reality it can't be made generic without a massive amount of code duplication in the module. I'm not sure what drove this structure, but this use case was certainly not accounted for.

A friendlier way would have been to flatten it out to an extent: for example, instead of disks > scsi > scsi[0-N] blocks, a single disks block with a flat list of configurations where the type and index are attributes, as sketched below.
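
For illustration, something like this (purely hypothetical - not what the provider actually accepts):

# Hypothetical flattened schema, NOT the provider's real interface:
disks {
  disk {
    slot    = "scsi0"   # type and index as a plain attribute
    size    = "20G"
    storage = "local-lvm"
  }
  disk {
    slot    = "virtio1"
    size    = "30G"
    storage = "local-lvm"
  }
}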

I'm stuck providing an interface to my module to allow the usage of the Proxmox provider. How are people getting this working today? Has anyone written a huge definition for a disks variable?

@gbrackley I just hacked around it by copy-pasting the configuration for the slots I'm actually using. It's ugly, but it works... Let's hope this change gets reverted.

@dxrayz

dxrayz commented Feb 12, 2024

Hi. I also ran into this problem when I decided to upgrade my home lab to Proxmox 8, and with it the Terraform provider. I spent several hours experimenting with dynamic blocks, but I couldn't come up with anything better than adding an if condition to each block. It forces code repetition and looks inelegant - I'm almost ashamed to show it =) - but it works. Here is an example with 4 disks.

  disks {
    virtio {
      virtio0 {
        disk {
          storage = "${var.datastore_name["${var.main_disk.datastore}"]}"
          size    = "${var.main_disk_size}G"
        }
      }
      dynamic "virtio1" {
        for_each = { for i, disk in var.disks: i => disk if i == 0 }
        content {
          disk {
            storage = var.datastore_name["${virtio1.value["datastore"]}"]
            size    = "${virtio1.value["size_gb"]}G"
          }
        }
      }
      dynamic "virtio2" {
        for_each = { for i, disk in var.disks: i => disk if i == 1 }
        content {
          disk {
            storage = var.datastore_name["${virtio2.value["datastore"]}"]
            size    = "${virtio2.value["size_gb"]}G"
          }
        }
      }
      dynamic "virtio3" {
        for_each = { for i, disk in var.disks: i => disk if i == 2 }
        content {
          disk {
            storage = var.datastore_name["${virtio3.value["datastore"]}"]
            size    = "${virtio3.value["size_gb"]}G"
          }
        }
      }
      dynamic "virtio4" {
        for_each = { for i, disk in var.disks: i => disk if i == 3 }
        content {
          disk {
            storage = var.datastore_name["${virtio4.value["datastore"]}"]
            size    = "${virtio4.value["size_gb"]}G"
          }
        }
      }
    }
  }

That is, in the module you need to describe all 15 potentially used disks, or as many as you plan to use in the VM.

@comminutus

Yuck - I'd much rather stick with the older version and its much simpler code.

@thdonatello

I tried to create a few VMs and I wanted to have vars like:

vm_configs = {
  group1 = {
    instances = 2
    cores = 1
    ....
    disks = {
      disk1 = {...}
      disk2 = {...}
    }
  }
  group2 = {
    instances = 4
    cores = 2
    ....
    disks = {
      disk1 = {...}
      disk2 = {...}
    }
  }

}

and it's not possible with the new structure because of the disks block.

@myplixa

myplixa commented Mar 25, 2024

I have spent 4 days on this problem and have not arrived at a decent solution. I can't create disks the way I did before without writing a large piece of code, which kills the flexibility of the module. Maybe bring back disk handling the way it was before?
The current implementation is very inconvenient for the many people who write modules for this provider.
Also, since the release with these changes, I have not found a single updated module anywhere online.

@myplixa

myplixa commented Mar 25, 2024

@Tinyblargon As I understand it from the pull requests, you initiated this change to disk handling. And judging by your bio, you should be able to suggest the right solution to this problem, which would help a lot of people.
#969

@sach3000

Any update, guys?

@Tinyblargon
Collaborator

@sach3000 a base implementation has been done but it's waiting on more feedback as we don't want to have a breaking change again in the future. #986

@sach3000

@sach3000 a base implementation has been done but it's waiting on more feedback as we don't want to have a breaking change again in the future. #986

Ok. Thanks.
Can you show an example of how to use disk descriptions through a module?

@maksimsamt
Contributor

My two cents, I do not agree with this statement:

Maybe to return the work with disks as it was before?

Please don't do this!
The latest disks block design is more flexible, dynamic and comprehensive than the previous one. It can handle disks, CDs, cloud-init drives, etc., allows switching between interfaces (ide/sata/scsi, among others), and handles all of this within one block!

And as proof of concept, below is my example of how dynamic disk blocks can be handled with the latest version of the provider.

Implemented and tested on:

  • Proxmox VE 8.2.2
  • Terraform v1.8.4
  • terraform-provider-proxmox v3.0.1-rc2

Terraform code examples with snippets:

  • variables.tf

    ...
    variable "proxmox_vm_qemu" {
      type = object({
        disks = optional(object({
          scsi = optional(object({
            # disk0 (system disk)
            scsi0 = optional(object({
              disk = optional(object({
                backup     = optional(bool, true)
                cache      = optional(string, "")
                emulatessd = optional(bool, false)
                format     = optional(string, "raw")
                iothread   = optional(bool, true)
                replicate  = optional(bool, true)
                size       = optional(string, "10G")
                storage    = optional(string, "local-lvm")
              }), {})
            }), {})
            # disk1 (optional)
            scsi1 = optional(object({
              disk = optional(object({
                backup     = optional(bool)
                cache      = optional(string)
                emulatessd = optional(bool)
                format     = optional(string)
                iothread   = optional(bool)
                replicate  = optional(bool)
                size       = optional(string)
                storage    = optional(string)
              }), {})
            }), {})
            # disk2 (optional)
            scsi2 = optional(object({
              disk = optional(object({
                backup     = optional(bool)
                cache      = optional(string)
                emulatessd = optional(bool)
                format     = optional(string)
                iothread   = optional(bool)
                replicate  = optional(bool)
                size       = optional(string)
                storage    = optional(string)
              }), {})
            }), {})
            # init-cloud drive (optional)
            scsi10 = optional(object({
              cloudinit = optional(object({
                storage = optional(string)
              }), {})
            }), {})
          }), {})
        }), {})
        # efi disk (optional)
        efidisk = optional(object({
          efitype = optional(string)
          storage = optional(string)
        }), {})
      })
    }
    ...
  • terraform.tfvars

    ...
    proxmox_vm_qemu = {
      disks = {
        scsi = {
          # disk0 (system disk)
          scsi0 = {
            disk = {
              backup     = null
              cache      = null
              emulatessd = null
              format     = null
              iothread   = null
              replicate  = null
              size       = null
              storage    = null
            }
          }
          # disk1 (optional)
          scsi1 = {
            disk = {
              backup     = null
              cache      = null
              emulatessd = null
              format     = null
              iothread   = null
              replicate  = null
              size       = null
              storage    = null
            }
          }
          # disk2 (optional)
          scsi2 = {
            disk = {
              backup     = null
              cache      = null
              emulatessd = null
              format     = null
              iothread   = null
              replicate  = null
              size       = null
              storage    = null
            }
          }
          # init-cloud drive (optional)
          scsi10 = {
            cloudinit = {
              storage = null
            }
          }
        }
      }
      # efi disk (optional)
      efidisk = {
        efitype = null
        storage = null
      }
    }
    ...
  • main.tf

    ...
    resource "proxmox_vm_qemu" "cloud_vm_from_packer_template" {
    ...
      disks {
        scsi {
          # disk0 (system disk)
          dynamic "scsi0" {
            for_each = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.size != null ? [0] : []
            content {
              dynamic "disk" {
                for_each = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.size != null ? [0] : []
                content {
                  backup     = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.backup
                  cache      = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.cache
                  emulatessd = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.emulatessd
                  format     = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.format
                  iothread   = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.iothread
                  replicate  = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.replicate
                  size       = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.size
                  storage    = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.storage
                }
              }
            }
          }
          # disk1 (optional)
          dynamic "scsi1" {
            for_each = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.size != null ? [0] : []
            content {
              dynamic "disk" {
                for_each = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.size != null ? [0] : []
                content {
                  backup     = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.backup
                  cache      = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.cache
                  emulatessd = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.emulatessd
                  format     = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.format
                  iothread   = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.iothread
                  replicate  = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.replicate
                  size       = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.size
                  storage    = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.storage
                }
              }
            }
          }
          # disk2 (optional)
          dynamic "scsi2" {
            for_each = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.size != null ? [0] : []
            content {
              dynamic "disk" {
                for_each = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.size != null ? [0] : []
                content {
                  backup     = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.backup
                  cache      = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.cache
                  emulatessd = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.emulatessd
                  format     = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.format
                  iothread   = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.iothread
                  replicate  = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.replicate
                  size       = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.size
                  storage    = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.storage
                }
              }
            }
          }
          # init-cloud drive (optional)
          dynamic "scsi10" {
            for_each = var.proxmox_vm_qemu.disks.scsi.scsi10.cloudinit.storage != null ? [0] : []
            content {
              dynamic "cloudinit" {
                for_each = var.proxmox_vm_qemu.disks.scsi.scsi10.cloudinit.storage != null ? [0] : []
                content {
                  storage = var.proxmox_vm_qemu.disks.scsi.scsi10.cloudinit.storage
                }
              }
            }
          }
        }
      }
    
      # efi disk (optional)
      dynamic "efidisk" {
        for_each = var.proxmox_vm_qemu.efidisk.storage != null ? [0] : []
        content {
          efitype = var.proxmox_vm_qemu.efidisk.efitype
          storage = var.proxmox_vm_qemu.efidisk.storage
        }
      }
    ...
    }
    ...

    This example is for:

    • 3x data disks (you can expand to more if needed): 1x is the system disk scsi0, created by default; 2x disks are optional
    • Cloud-init drive is optional
    • Efi disk is optional

    So, if you want to create the 2 additional data disks, you only have to modify terraform.tfvars, for example:

    ...
    proxmox_vm_qemu = {
      disks = {
      ...
          # disk1 (optional)
          scsi1 = {
            disk = {
              backup     = true
              cache      = ""
              emulatessd = false
              format     = "raw"
              iothread   = true
              replicate  = true
              size       = "1G"
              storage    = "local-lvm"
            }
          }
          # disk2 (optional)
          scsi2 = {
            disk = {
              backup     = true
              cache      = ""
              emulatessd = false
              format     = "raw"
              iothread   = true
              replicate  = true
              size       = "1G"
              storage    = "local-lvm"
            }
          }
      ...
      }
    ...
    }
    ...

    For adding the cloud-init drive and EFI disk:

    ...
    proxmox_vm_qemu = {
      disks = {
        scsi = {
          ...
          # init-cloud drive (optional)
          scsi10 = {
            cloudinit = {
              storage = "local-lvm"
            }
          }
        }
      }
      # efi disk (optional)
      efidisk = {
        efitype = "4m"
        storage = "local-lvm"
      }
    }
    ...

    To delete/detach these optional disks, set the parameter values back to null.

Hope these examples help you achieve your goals.

@mathieuHa

Hey @maksimsamt,

I ended up writing a module using dynamic blocks and conditionals; it works indeed, but it adds (I believe) a lot of duplication in the module.
Instead of defining one disk variable, we have to define all of them (scsi0, scsi1, virtio1, ...).
Even though this isn't really the place, I wanted to say that I found the new version works really well and is pretty fast. Good job.

@elg0ch0

elg0ch0 commented May 29, 2024

@maksimsamt, would you mind sharing your Packer configuration? (I'm stuck in my deployment and I suspect I'm missing something in Packer.)

@maksimsamt
Contributor

@maksimsamt, would you mind sharing your Packer configuration? (I'm stuck in my deployment and I suspect I'm missing something in Packer.)

Unfortunately, I don't think this is the right place to share information about the Packer code/configuration. This is a completely different topic and requires a separate discussion, probably in the HashiCorp Packer discussion hub, not here.

@maksimsamt
Contributor

A slightly different approach from #907 (comment), to cover the dynamic functionality of for_each:

  • variables.tf

    ...
    variable "proxmox_vm_qemu" {
      type = object({
        # disks (required)
        disks = object({
          # scsi disks (required)
          scsi = object({
            # disk0 (required)
            scsi0 = object({
              disk = list(object({
                backup     = bool
                cache      = string
                emulatessd = bool
                format     = string
                iothread   = bool
                replicate  = bool
                size       = string
                storage    = string
              }))
            })
            # disk1 (optional)
            scsi1 = optional(object({
              disk = optional(list(object({
                backup     = optional(bool)
                cache      = optional(string)
                emulatessd = optional(bool)
                format     = optional(string)
                iothread   = optional(bool)
                replicate  = optional(bool)
                size       = optional(string)
                storage    = optional(string)
              })), [])
            }), {})
            # disk2 (optional)
            scsi2 = optional(object({
              disk = optional(list(object({
                backup     = optional(bool)
                cache      = optional(string)
                emulatessd = optional(bool)
                format     = optional(string)
                iothread   = optional(bool)
                replicate  = optional(bool)
                size       = optional(string)
                storage    = optional(string)
              })), [])
            }), {})
            # init-cloud drive (optional)
            scsi10 = optional(object({
              cloudinit = optional(list(object({
                storage = optional(string)
              })), [])
            }), {})
          })
        })
      })
    }
    ...
  • terraform.tfvars

    ...
    proxmox_vm_qemu = {
      # disks (required)
      disks = {
        # scsi (disks: 1xrequired, 2xoptional; cloudinit drive: 1xoptional)
        scsi = {
          # disk0 (required)
          scsi0 = {
            disk = [{
              backup     = true
              cache      = ""
              emulatessd = false
              format     = "raw"
              iothread   = true
              replicate  = true
              size       = "20G"
              storage    = "local-lvm"
            }]
          }
          # disk1 (optional)
          scsi1 = {
            disk = []
          }
          # disk2 (optional)
          scsi2 = {
            disk = []
          }
          # init-cloud drive (optional)
          scsi10 = {
            cloudinit = []
          }
        }
      }
    }
    ...
  • main.tf

    ...
    resource "proxmox_vm_qemu" "cloud_vm_from_packer_template" {
    ...
      # disks (required)
      disks {
        # scsi disks (required)
        scsi {
          # disk0 (required)
          dynamic "scsi0" {
            for_each = var.proxmox_vm_qemu.disks.scsi.scsi0.disk[*]
            content {
              dynamic "disk" {
                for_each = var.proxmox_vm_qemu.disks.scsi.scsi0.disk[*]
                iterator = disk_scsi0
                content {
                  backup     = disk_scsi0.value.backup
                  cache      = disk_scsi0.value.cache
                  emulatessd = disk_scsi0.value.emulatessd
                  format     = disk_scsi0.value.format
                  iothread   = disk_scsi0.value.iothread
                  replicate  = disk_scsi0.value.replicate
                  size       = disk_scsi0.value.size
                  storage    = disk_scsi0.value.storage
                }
              }
            }
          }
          # disk1 (optional)
          dynamic "scsi1" {
            for_each = var.proxmox_vm_qemu.disks.scsi.scsi1.disk[*]
            content {
              dynamic "disk" {
                for_each = var.proxmox_vm_qemu.disks.scsi.scsi1.disk[*]
                iterator = disk_scsi1
                content {
                  backup     = disk_scsi1.value.backup
                  cache      = disk_scsi1.value.cache
                  emulatessd = disk_scsi1.value.emulatessd
                  format     = disk_scsi1.value.format
                  iothread   = disk_scsi1.value.iothread
                  replicate  = disk_scsi1.value.replicate
                  size       = disk_scsi1.value.size
                  storage    = disk_scsi1.value.storage
                }
              }
            }
          }
          # disk2 (optional)
          dynamic "scsi2" {
            for_each = var.proxmox_vm_qemu.disks.scsi.scsi2.disk[*]
            content {
              dynamic "disk" {
                for_each = var.proxmox_vm_qemu.disks.scsi.scsi2.disk[*]
                iterator = disk_scsi2
                content {
                  backup     = disk_scsi2.value.backup
                  cache      = disk_scsi2.value.cache
                  emulatessd = disk_scsi2.value.emulatessd
                  format     = disk_scsi2.value.format
                  iothread   = disk_scsi2.value.iothread
                  replicate  = disk_scsi2.value.replicate
                  size       = disk_scsi2.value.size
                  storage    = disk_scsi2.value.storage
                }
              }
            }
          }
          # init-cloud drive (optional)
          dynamic "scsi10" {
            for_each = var.proxmox_vm_qemu.disks.scsi.scsi10.cloudinit[*]
            content {
              dynamic "cloudinit" {
                for_each = var.proxmox_vm_qemu.disks.scsi.scsi10.cloudinit[*]
                content {
                  storage = cloudinit.value.storage
                }
              }
            }
          }
        }
      }
    ...
    }
    ...

@arsensimonyanpicsart

up

@Tinyblargon
Collaborator

@arsensimonyanpicsart
#986 has the information regarding this.

@lsampaioweb

Hi @Tinyblargon and @maksimsamt,

I've been using the Proxmox Terraform provider for a while (I have several GitHub projects that use it), and I appreciate the work that goes into maintaining it. However, the recent changes to how disks are defined have significantly complicated my workflow.

Previously, I could define disks in a much simpler and more concise way:

disk {
  type    = "scsi"
  size    = "18G"
  storage = "vg_nvme"
  ssd     = 1
}

Or when using dynamic blocks:

dynamic "disk" {
  for_each = var.disks
  content {
    type    = disk.value.type
    size    = disk.value.size
    storage = disk.value.storage
    ssd     = disk.value.ssd
  }
}

Now, the new structure requires something like this:

disks {
  scsi {
    scsi0 {
      disk {
        size       = 18
        storage    = "vg_nvme"
        emulatessd = true
      }
    }
  }
}

When using dynamic, as shown in #907 (comment), the code becomes even longer and more cumbersome, which not only increases complexity but also introduces unnecessary verbosity.

These changes feel like a step backward in terms of usability. The new approach is more verbose, less intuitive, and makes managing multiple disks a real challenge. It almost feels like the provider isn't being tested in real-world scenarios where simplicity and maintainability are key.

I'd like to request a reconsideration of this change or, at the very least, some discussion around making the provider more dynamic-friendly again. The previous method of defining disks was much more straightforward and aligned better with Terraform's philosophy of simplicity and readability.

Thank you for your hard work, and I hope we can find a middle ground that satisfies both usability and the new features.

Best regards,
Luciano Sampaio

@Tinyblargon
Collaborator

@lsampaioweb #986 addresses this and how we are planning to change it. A lot of the code for it is done. Hopefully, I can get it ready before the end of the month.


This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.


This issue was closed because it has been inactive for 5 days since being marked as stale.

@github-actions github-actions bot closed this as not planned Oct 23, 2024
@hagaram
Author

hagaram commented Dec 11, 2024

Reopen

@Tinyblargon
Collaborator

@hagaram for the use of disks in modules, we reintroduced the disk setting. Use disks for direct definitions and disk for modules.
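
A rough sketch of driving the flat disk block from a variable again (attribute names follow the examples in #986 - check the current docs for the exact schema):

variable "disks" {
  type = map(object({
    slot    = string # e.g. "scsi0", "virtio1"
    size    = string
    storage = string
  }))
  default = {}
}

resource "proxmox_vm_qemu" "example" {
  # ... other VM arguments omitted ...
  dynamic "disk" {
    for_each = var.disks
    content {
      slot    = disk.value.slot
      size    = disk.value.size
      storage = disk.value.storage
    }
  }
}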

@Tinyblargon Tinyblargon reopened this Dec 11, 2024
@Tinyblargon Tinyblargon added the type/question and resource/qemu labels and removed the issue/stale label Dec 11, 2024