
PowerVS CI: unable to connect to api-server on the first control plane machine #2002

Open
Amulyam24 opened this issue on Oct 11, 2024 · 0 comments
Labels: area/provider/ibmcloud, kind/bug
Milestone: Next

Amulyam24 commented Oct 11, 2024

/kind bug
/area provider/ibmcloud

Workload cluster creation Creating a highly available control-plane cluster Should create a cluster with 3 control-plane nodes and 1 worker node
/home/prow/go/src/github.com/kubernetes-sigs/cluster-api-provider-ibmcloud/test/e2e/e2e_test.go:137
  STEP: Creating a namespace for hosting the "create-workload-cluster" test spec @ 10/10/24 21:45:44.154
  INFO: Creating namespace create-workload-cluster-hsot1d
  INFO: Creating event watcher for namespace "create-workload-cluster-hsot1d"
  STEP: Creating a high available cluster @ 10/10/24 21:45:44.164
  INFO: Creating the workload cluster with name "capibm-e2e-n0ewfx" using the "powervs-md-remediation" template (Kubernetes v1.29.3, 3 control-plane machines, 1 worker machines)
  INFO: Getting the cluster template yaml
  INFO: clusterctl config cluster capibm-e2e-n0ewfx --infrastructure (default) --kubernetes-version v1.29.3 --control-plane-machine-count 3 --worker-machine-count 1 --flavor powervs-md-remediation
  INFO: Creating the workload cluster with name "capibm-e2e-n0ewfx" from the provided yaml
  INFO: Applying the cluster template yaml of cluster create-workload-cluster-hsot1d/capibm-e2e-n0ewfx
  INFO: Waiting for the cluster infrastructure of cluster create-workload-cluster-hsot1d/capibm-e2e-n0ewfx to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 10/10/24 21:45:44.543
  INFO: Waiting for control plane of cluster create-workload-cluster-hsot1d/capibm-e2e-n0ewfx to be initialized
  INFO: Waiting for the first control plane machine managed by create-workload-cluster-hsot1d/capibm-e2e-n0ewfx-control-plane to be provisioned
  STEP: Waiting for one control plane node to exist @ 10/10/24 21:46:24.572
  INFO: Installing a CNI plugin to the workload cluster create-workload-cluster-hsot1d/capibm-e2e-n0ewfx
  [FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/[email protected]/framework/clusterctl/clusterctl_helpers.go:451 @ 10/10/24 21:56:05.611
  STEP: Dumping logs from the "capibm-e2e-n0ewfx" workload cluster @ 10/10/24 21:56:05.611
Unable to get logs for workload Cluster create-workload-cluster-hsot1d/capibm-e2e-n0ewfx: log collector is nil.
  STEP: Dumping all the Cluster API resources in the "create-workload-cluster-hsot1d" namespace @ 10/10/24 21:56:05.611
  STEP: Deleting all clusters in the create-workload-cluster-hsot1d namespace @ 10/10/24 21:56:05.801
  STEP: Deleting cluster create-workload-cluster-hsot1d/capibm-e2e-n0ewfx @ 10/10/24 21:56:05.805
  INFO: Waiting for the Cluster create-workload-cluster-hsot1d/capibm-e2e-n0ewfx to be deleted
  STEP: Waiting for cluster create-workload-cluster-hsot1d/capibm-e2e-n0ewfx to be deleted @ 10/10/24 21:56:05.818
  STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec @ 10/10/24 21:56:25.828
  INFO: Deleting namespace create-workload-cluster-hsot1d
• [FAILED] [641.690 seconds]

Workload cluster creation Creating a highly available control-plane cluster [It] Should create a cluster with 3 control-plane nodes and 1 worker node
/home/prow/go/src/github.com/kubernetes-sigs/cluster-api-provider-ibmcloud/test/e2e/e2e_test.go:137
   [FAILED] Unexpected error:
      <errors.aggregate | len:1, cap:1>: 
      failed to get API group resources: unable to retrieve the complete list of server APIs: policy/v1: Get "https://130.198.103.219:6443/apis/policy/v1": dial tcp 130.198.103.219:6443: i/o timeout
      [
          <*fmt.wrapError | 0xc0004b6060>{
              msg: "failed to get API group resources: unable to retrieve the complete list of server APIs: policy/v1: Get \"https://130.198.103.219:6443/apis/policy/v1\": dial tcp 130.198.103.219:6443: i/o timeout",
              err: <*apiutil.ErrResourceDiscoveryFailed | 0xc0000885a0>{
                  {Group: "policy", Version: "v1"}: <*url.Error | 0xc0012c0030>{
                      Op: "Get",
                      URL: "https://130.198.103.219:6443/apis/policy/v1",
                      Err: <*net.OpError | 0xc002598000>{
                          Op: "dial",
                          Net: "tcp",
                          Source: nil,
                          Addr: <*net.TCPAddr | 0xc000b9eab0>{
                              IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 130, 198, 103, 219],
                              Port: 6443,
                              Zone: "",
                          },
                          Err: <*poll.DeadlineExceededError | 0x4432820>{},
                      },
                  },
              },
          },
      ]
  occurred
  In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/[email protected]/framework/clusterctl/clusterctl_helpers.go:451 @ 10/10/24 21:56:05.611
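
For anyone triaging: the failure is a plain TCP dial timeout during controller-runtime's API group discovery, not an API error from a live server. A minimal sketch like the one below (the endpoint is the one from this trace; substitute the endpoint of the cluster under test) can help separate a network-reachability problem from an unhealthy kube-apiserver:

```go
// Minimal reachability check, mirroring the "dial tcp ... i/o timeout" above.
// The address is taken from this failure and is not a stable endpoint.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "130.198.103.219:6443" // API server endpoint from the error trace
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		// An i/o timeout here points at the network path (load balancer,
		// security groups, PowerVS networking) rather than the API server itself.
		fmt.Printf("dial %s failed: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("tcp connect to %s succeeded\n", addr)
}
```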

We have been hitting this issue quite frequently in the PowerVS CI for the past couple of weeks.

Job: https://prow.ppc64le-cloud.cis.ibm.net/view/gs/ppc64le-kubernetes/logs/periodic-capi-provider-ibmcloud-e2e-powervs/1844483197404975104
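
If it helps with triage, the discovery call that fails inside the test framework can presumably be reproduced standalone with client-go against the workload cluster's kubeconfig (the path below is a placeholder for the one written by the e2e run):

```go
// Sketch of the API group/resource discovery that controller-runtime's
// apiutil performs. ServerGroupsAndResources fetches /apis and then each
// group-version's resources (e.g. /apis/policy/v1), so it should hit the
// same timeout when the endpoint is unreachable.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "capibm-e2e.kubeconfig") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// May return partial results plus an aggregated error, which is how the
	// "unable to retrieve the complete list of server APIs" message surfaces.
	_, resources, err := dc.ServerGroupsAndResources()
	if err != nil {
		fmt.Fprintf(os.Stderr, "discovery failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("discovered %d group/version resource lists\n", len(resources))
}
```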

k8s-ci-robot added the kind/bug and area/provider/ibmcloud labels on Oct 11, 2024
mkumatag added this to the Next milestone on Oct 29, 2024