helm: pod: ceph-csi-rbd-provisioner: CrashLoopBackOff; container: csi-snapshotter: flag provided but not defined: -enable-volume-group-snapshots #5180
Comments
@tw-yshuang What version of the Helm chart are you using? And what is the csi-snapshotter version?
- helm version
version.BuildInfo{Version:"v3.16.2", GitCommit:"13654a52f7c70a143b1dd51416d633e1071faffb", GitTreeState:"clean", GoVersion:"go1.22.7"}
- csi-snapshotter version
registry.k8s.io/sig-storage/csi-snapshotter:v8.2.0
Sorry, I am looking for the cephcsi Helm chart version you are using.
I tried the same method on the current
You should not have a problem with the devel or release 3.13 branch, because we don't set
In the latest testing environment, the chart version is |
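For background on the error itself: if I am reading the external-snapshotter release notes correctly, the `--enable-volume-group-snapshots` flag was removed from the csi-snapshotter sidecar around its v8.0 release in favor of a feature gate, which would explain why the v8.2.0 image rejects it. A sketch of the difference in the sidecar's container spec (the exact flag spellings should be checked against the release notes for the sidecar version in use):

```yaml
# csi-snapshotter sidecar args -- sketch, assuming the v8.0 change.
# Pre-v8.0 sidecars accepted:
#   - "--enable-volume-group-snapshots=true"
# v8.x sidecars instead use a feature gate:
containers:
  - name: csi-snapshotter
    image: registry.k8s.io/sig-storage/csi-snapshotter:v8.2.0
    args:
      - "--feature-gates=CSIVolumeGroupSnapshot=true"
```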
Describe the bug

ceph-csi-rbd-provisioner encounters a CrashLoopBackOff issue when using a custom values.yaml. The values.yaml only modifies the csiConfig parameter; all other settings remain at their default values.

Environment details
- Image/version of Ceph CSI driver: quay.io/cephcsi/cephcsi:canary
- Helm chart version: v3.16.2
- Kernel version: Linux 6.8.0-49-generic
- Mounter used for mounting PVC (for cephFS its fuse or kernel; for rbd its krbd or rbd-nbd): krbd
- Kubernetes cluster version: v1.32.0
- Ceph cluster version: 18.2.2
Steps to reproduce

Steps to reproduce the behavior:

Setup details:
i. Use a containerized or host installation method to set up a Ceph cluster (mon, osd, etc. related services).
ii. Build a Kubernetes cluster.
iii. Download the ceph-csi repo and edit csiConfig in charts/ceph-csi-rbd/values.yaml.

Deployment to trigger the issue, execute:

```
helm install --namespace "default" "ceph-csi-rbd" ceph-csi/ceph-csi-rbd -f values.yaml
```
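Not part of the original report: one way to rule out a chart/sidecar mismatch is to pin the chart to an explicit release with helm's `--version` flag. A small wrapper sketch (the version number `3.13.0` is an assumption; substitute a release known to match your sidecar images):

```shell
# Hypothetical wrapper: install the chart pinned to an explicit chart
# version, so the bundled sidecar images come from a known release.
install_rbd_chart() {
  chart_version="$1"
  # `echo` makes this a dry run that prints the command; drop it to run.
  echo helm install --namespace default ceph-csi-rbd ceph-csi/ceph-csi-rbd \
    -f values.yaml --version "$chart_version"
}

# Usage (prints the command that would be run):
install_rbd_chart 3.13.0
```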
See error:
i. Check the pods:

```
kubectl get pod
# ========================== output ==========================
NAME                                        READY   STATUS             RESTARTS     AGE
ceph-csi-rbd-nodeplugin-8nwl7               3/3     Running            0            5s
ceph-csi-rbd-nodeplugin-dwkz8               3/3     Running            0            5s
ceph-csi-rbd-nodeplugin-hz5vp               3/3     Running            0            5s
ceph-csi-rbd-nodeplugin-pj2v2               3/3     Running            0            5s
ceph-csi-rbd-provisioner-85996b84cb-f24cx   6/7     CrashLoopBackOff   1 (3s ago)   5s
ceph-csi-rbd-provisioner-85996b84cb-xqldq   6/7     CrashLoopBackOff   1 (3s ago)   5s
ceph-csi-rbd-provisioner-85996b84cb-zsxxr   6/7     CrashLoopBackOff   1 (3s ago)   5s
```
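As a convenience (my addition, not from the report), the crashing pods can be filtered out of that output with a small awk helper; column 3 of `kubectl get pod` is STATUS:

```shell
# Hypothetical helper: print the names of pods stuck in
# CrashLoopBackOff from `kubectl get pod` output on stdin.
crashlooping() {
  # NR > 1 skips the header row; $3 is the STATUS column.
  awk 'NR > 1 && $3 == "CrashLoopBackOff" { print $1 }'
}

# Usage:
#   kubectl get pod | crashlooping
```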
ii. The describe output of the ceph-csi-rbd-provisioner:

Actual results
The ceph-csi-rbd-provisioner pods keep restarting and end up in the CrashLoopBackOff status. The static-PVC method is not affected; it works normally. Only the snapshot function cannot be enabled.

Expected behavior
The snapshotter-related parameters use the default settings, so I expect the snapshotter to be created successfully and to work.
Logs
If the issue is in snapshot creation and deletion please attach complete logs
of below containers.
provisioner pod:

```
kubectl logs pods/ceph-csi-rbd-provisioner-xxxxxxxx csi-snapshotter
# ========================== output ==========================
flag provided but not defined: -enable-volume-group-snapshots
Usage of /csi-snapshotter:
  -add_dir_header
...
```
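A small helper (my addition, not from the report) to pull the offending flag name out of sidecar log text; pair it with `kubectl logs --previous -c csi-snapshotter <pod>` to read the logs of the crashed container instance:

```shell
# Hypothetical helper: extract unknown-flag names from Go flag-parse
# errors in log text on stdin.
unknown_flags() {
  # Go's flag package reports: "flag provided but not defined: -name"
  sed -n 's/^flag provided but not defined: //p'
}

# Usage:
#   kubectl logs --previous -c csi-snapshotter <pod> | unknown_flags
```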
Additional context

I noticed that charts/ceph-csi-rbd/values.yaml has a csi-snapshotter related default setting: enableVolumeGroupSnapshots: false. I am not sure what the relationship is between enable-volume-group-snapshots and enableVolumeGroupSnapshots: false; I hope this information can help.
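For reference, the naming difference is most likely just the usual Helm convention: a camelCase value in values.yaml is rendered by the chart templates into a kebab-case CLI flag on the sidecar. A sketch, assuming the chart wires the value through like this (only enableVolumeGroupSnapshots itself appears in the report; its position in the tree is an assumption):

```yaml
# values.yaml (sketch; key nesting is assumed, not confirmed)
provisioner:
  snapshotter:
    args:
      enableVolumeGroupSnapshots: false
# A chart template would then render this into the sidecar args as:
#   --enable-volume-group-snapshots=false
```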