[ISSUE] Can not update zone_id from non-auto to auto-az #1492

Open
tejas-angelone opened this issue Jun 13, 2024 · 4 comments
Labels
Bug Something isn't working DABs DABs related issues

Comments


tejas-angelone commented Jun 13, 2024

Describe the issue

The zone_id for the job cluster isn't updated from a non-auto availability zone to auto AZ. Updates between non-auto AZs work, but once a cluster is created with a non-auto AZ you cannot switch it to auto.

The issue started with CLI v0.220.0.

Steps to reproduce the behavior

  1. Create a resource config as below and deploy the bundle:

     ```yaml
     job_clusters:
       - job_cluster_key: job_cluster
         new_cluster:
           spark_version: 13.3.x-scala2.12
           node_type_id: i3.xlarge
           aws_attributes:
             zone_id: ap-south-1a
           autoscale:
             min_workers: 1
             max_workers: 4
     ```

  2. Change `zone_id` to `auto` and deploy the bundle again.
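For reference, the updated config in step 2 would be the same block with only the `zone_id` value changed:

```yaml
job_clusters:
  - job_cluster_key: job_cluster
    new_cluster:
      spark_version: 13.3.x-scala2.12
      node_type_id: i3.xlarge
      aws_attributes:
        zone_id: auto  # changed from ap-south-1a
      autoscale:
        min_workers: 1
        max_workers: 4
```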

Expected Behavior

The zone_id should be updated to auto.

Actual Behavior

The zone_id doesn't get updated from ap-south-1a (any non-auto AZ) to auto

OS and CLI version

Databricks CLI v0.220.0

Is this a regression?

Works in Databricks CLI v0.219.0

@tejas-angelone tejas-angelone added the DABs DABs related issues label Jun 13, 2024
@tejas-angelone tejas-angelone changed the title Can not update zone_id from non-auto to auto-az [ISSUE] Can not update zone_id from non-auto to auto-az Jun 13, 2024
@andrewnester andrewnester added the Bug Something isn't working label Jun 17, 2024
@dmarcus30

Having the same issue here.


ribugent commented Oct 9, 2024

In our case we noticed this regression today. We could update the definitions and then fix each cluster manually in the web UI, but the number of job clusters we have is huge and that would cost us too much time.

Unfortunately, downgrading to 0.219 doesn't work for us because we use complex variables.

EDIT: After adjusting the configuration files, I ended up spending about 20 minutes modifying all the job clusters one by one.


rpinzon commented Jan 10, 2025

Having the same issue here.

I also tried using a policy with the auto zone_id, without specifying a value in the bundle, but it ended up creating a cluster with the first available non-auto zone_id.

@FernandoOLI

I found an alternative.

Before deploying, run this command:

```shell
databricks bundle destroy
```

This deletes all the jobs and their permissions. Then, in the AWS attributes settings, make `zone_id: auto` explicit.

Run the deploy command again:

```shell
databricks bundle deploy
```

The clusters were created with `zone_id: auto`.
