fix: enable secure boot #54

Merged · 1 commit into harvester:master · Mar 15, 2024
Conversation

@FrankYang0529 (Member) commented on Mar 8, 2024

issue: harvester/harvester#4777

Test plan

Setup

1. Clone rancher2.

   ```bash
   git clone https://github.com/FrankYang0529/terraform-provider-rancher2 -b HARV-4777
   ```

2. Build the binary.

   ```bash
   CGO_ENABLED=0 go build -ldflags="-w -s -X main.VERSION=test -extldflags -static" -o bin/terraform-provider-rancher2
   ```

3. Move the binary to the local Terraform plugin folder.

   ```bash
   mkdir -p ~/.terraform.d/plugins/registry.terraform.io/rancher/rancher2/0.0.0-dev/linux_amd64
   cp bin/terraform-provider-rancher2 ~/.terraform.d/plugins/registry.terraform.io/rancher/rancher2/0.0.0-dev/linux_amd64/terraform-provider-rancher2_v0.0.0-dev
   ```

4. Create an image and a VLAN network in Harvester (see the example manifests after this list).

5. Create a folder for the guest cluster Terraform configuration.

   ```hcl
   # provider.tf
   terraform {
     required_providers {
       rancher2 = {
         source  = "rancher/rancher2"
         version = "0.0.0-dev"
       }
     }
   }

   # Configure the Rancher2 provider as admin
   provider "rancher2" {
     api_url    = "<rancher api url>"
     access_key = "<rancher access key>"
     secret_key = "<rancher secret key>"
     insecure   = true
   }
   ```
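
Step 4 is normally done through the Harvester UI; as a rough sketch of the equivalent manifests, something like the following could be applied instead (all names, the namespace, the image URL, and the VLAN ID are illustrative, and the NetworkAttachmentDefinition assumes the default `mgmt` cluster network bridge):

```yaml
# Hypothetical VM image; display name and URL are placeholders.
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
  name: ubuntu-focal
  namespace: default
spec:
  displayName: ubuntu-focal
  sourceType: download
  url: https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
---
# Hypothetical VLAN network; bridge name and VLAN ID are assumptions.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan1
  namespace: default
  labels:
    network.harvesterhci.io/type: L2VlanNetwork
spec:
  config: '{"cniVersion":"0.3.1","type":"bridge","bridge":"mgmt-br","promiscMode":true,"vlan":1,"ipam":{}}'
```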

Case 1: `enable_efi = true`

1. Put the guest cluster Terraform configuration into the folder.

   ```hcl
   # main.tf
   # Get the imported Harvester cluster info
   data "rancher2_cluster_v2" "harv" {
     name = "<harvester cluster name in rancher>"
   }

   # Create a new Cloud Credential for the imported Harvester cluster
   resource "rancher2_cloud_credential" "harv-cred" {
     name = "harv-cred"
     harvester_credential_config {
       cluster_id         = data.rancher2_cluster_v2.harv.cluster_v1_id
       cluster_type       = "imported"
       kubeconfig_content = data.rancher2_cluster_v2.harv.kube_config
     }
   }

   # Create a new rancher2 machine config v2 using the harvester node driver
   resource "rancher2_machine_config_v2" "rke2-machine" {
     generate_name = "rke2-machine"
     harvester_config {
       vm_namespace = "default"
       cpu_count    = "2"
       memory_size  = "4"
       disk_info    = <<EOF
       {
           "disks": [{
               "imageName": "<image-namespace>/<image-name>",
               "size": 20,
               "bootOrder": 1
           }]
       }
       EOF
       network_info = <<EOF
       {
           "interfaces": [{
               "networkName": "<network-namespace>/<network-name>"
           }]
       }
       EOF
       ssh_user     = "ubuntu"
       user_data    = <<EOF
       package_update: true
       packages:
         - qemu-guest-agent
         - iptables
       runcmd:
         - - systemctl
           - enable
           - '--now'
           - qemu-guest-agent.service
       password: test
       chpasswd:
         expire: false
       ssh_pwauth: true
       EOF
       enable_efi   = true
     }
   }

   resource "rancher2_cluster_v2" "rke2-1" {
     name               = "rke2-1"
     kubernetes_version = "v1.26.11+rke2r1"
     rke_config {
       machine_pools {
         name                         = "pool1"
         cloud_credential_secret_name = rancher2_cloud_credential.harv-cred.id
         control_plane_role           = true
         etcd_role                    = true
         worker_role                  = true
         quantity                     = 1
         machine_config {
           kind = rancher2_machine_config_v2.rke2-machine.kind
           name = rancher2_machine_config_v2.rke2-machine.name
         }
       }
     }
   }
   ```
2. Run `terraform init` and `terraform apply`.
3. Check whether the VM in Harvester has EFI (see the inspection sketch after this list):

   ```yaml
   firmware:
     bootloader:
       efi:
         secureBoot: false
   ```

4. Clean up.

   ```bash
   terraform destroy
   ```
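
To verify step 3 from the command line, a minimal sketch (the machine pool generates the VM name, so list the VMs first; `<vm-name>` is a placeholder):

```bash
# List the KubeVirt VMs created by the machine pool.
kubectl get vm -n default

# Dump the firmware stanza of the new VM; expect efi with secureBoot: false.
kubectl get vm <vm-name> -n default -o yaml | grep -A 3 'firmware:'
```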

Case 2: `enable_efi = true` & `enable_secure_boot = true`

1. Add `enable_secure_boot = true` to main.tf.
2. Run `terraform init` and `terraform apply`.
3. Check whether the VM in Harvester has SMM, EFI, and secure boot (see the inspection sketch after this list).
   Ref: https://kubevirt.io/user-guide/virtual_machines/virtual_hardware/#biosuefi

   ```yaml
   features:
     smm:
       enabled: true
   firmware:
     bootloader:
       efi:
         secureBoot: true
   ```

4. Clean up.

   ```bash
   terraform destroy
   ```
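
A rough command-line check for step 3, reading the KubeVirt VM spec paths directly (`<vm-name>` is again a placeholder):

```bash
# Both commands should print "true" when secure boot is enabled.
kubectl get vm <vm-name> -n default \
  -o jsonpath='{.spec.template.spec.domain.features.smm.enabled}'
kubectl get vm <vm-name> -n default \
  -o jsonpath='{.spec.template.spec.domain.firmware.bootloader.efi.secureBoot}'
```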

Case 3: `enable_efi = false` & `enable_secure_boot = false`

1. Set `enable_efi = false` and `enable_secure_boot = false` in main.tf.
2. Run `terraform init` and `terraform apply`.
3. Check that there is no EFI in the VM (see the sketch after this list).
4. Clean up.

   ```bash
   terraform destroy
   ```
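
One way to confirm step 3, as a sketch (`<vm-name>` is a placeholder; the VM should fall back to BIOS boot, so the firmware stanza should be absent):

```bash
# Expect no output: there should be no firmware/bootloader section.
kubectl get vm <vm-name> -n default -o yaml | grep -A 3 'firmware:'
```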

Signed-off-by: PoAn Yang <[email protected]>
@bk201 (Member) left a comment:

lgtm!

@ibrokethecloud (Contributor) left a comment:

lgtm. thanks.

@bk201 merged commit adc96de into harvester:master on Mar 15, 2024 · 3 checks passed
@FrankYang0529 deleted the HARV-4777 branch on April 1, 2024