Check if transition from unordered-upgrade to ordered upgrade is possible #744
Comments
There are 2 possible ways to do it:
@qbarrand @ybettan @mresvanis WDYT?
I think that if we wish to continue supporting both versioned and non-versioned modules, then we should have an easy way for customers to move from one to the other; therefore, I would go with that approach. Should we have a way to go back from a versioned module to a non-versioned module as well?
I don't see a use case for it right now, but if it is requested, the solution will probably be identical, if not even simpler.
I see option 2 as very complex for little benefit. Given a Module selector such as:

```yaml
spec:
  selector:
    some-label: some-value
```

labeling nodes with the version label can be as simple as:

```shell
kubectl label node \
  -l some-label==some-value \
  kmm.node.kubernetes.io/version-module.${ns}.${name}=${version}
```

Adding the version label to already targeted nodes sounds very straightforward to me, hence I would go with option 1.
We can start with option 1 for simplicity and, if usage grows over time, re-discuss option 2 for a better UX.
ExtendedResources are not updated all the time, but in intervals of 1 minute, I think. I hope that is enough time for the new DevicePlugin pod to come up. Of course, new workloads cannot be scheduled during the downtime, since scheduling may involve the DevicePlugin's pods, but I think that customers can handle this requirement. On the other hand, we can update the GC code to delete the "unversioned" DevicePlugin DaemonSet only after the "versioned" one is running, but in that case the customer must make sure that two device-plugin pods can run on the same node.
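To illustrate the coexistence concern, here is a minimal sketch of a versioned device-plugin DaemonSet restricted to nodes that already carry a version label. The label key `kmm.node.kubernetes.io/version-device-plugin.<ns>.<name>` is an assumption, modeled on the `version-module` label shown in the earlier comment, and all names and images are hypothetical:

```yaml
# Hedged sketch: a versioned device-plugin DaemonSet that only targets nodes
# already labeled with the (assumed) per-module device-plugin version label.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-module-device-plugin-1.2.0   # hypothetical name
  namespace: my-namespace               # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: my-module-device-plugin
  template:
    metadata:
      labels:
        app: my-module-device-plugin
    spec:
      nodeSelector:
        some-label: some-value
        # Assumed label key, mirroring the version-module label quoted above:
        kmm.node.kubernetes.io/version-device-plugin.my-namespace.my-module: "1.2.0"
      containers:
        - name: device-plugin
          image: example.com/my-device-plugin:1.2.0  # hypothetical image
```

With a node selector like this, the unversioned DaemonSet's pod and the versioned one would only overlap on a node if GC deliberately keeps the former until the latter is running, which is the trade-off described above.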
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Currently, if customers wish to use ordered upgrade, they need to create a Module with a Version field and set the appropriate labels on the nodes. We don't allow adding the Version field to the Module if the Module already exists in the cluster.
We need to check whether it is possible to allow it, since in v2 we don't have a running ModuleLoader DaemonSet.
If we can support this scenario, it will ease the adoption of ordered upgrade for existing customers and will help with KMM upgrade support for day0/day1 modules.
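As a rough illustration of the transition being discussed, the change on the Module side could look like the sketch below. The exact field path (`spec.moduleLoader.container.version`) and the API version are assumptions and may differ from the real KMM API; names and images are hypothetical:

```yaml
# Hedged sketch: adding a version to an existing Module that previously had none.
# The field path spec.moduleLoader.container.version and apiVersion are assumptions.
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: my-module          # hypothetical name
  namespace: my-namespace  # hypothetical namespace
spec:
  selector:
    some-label: some-value
  moduleLoader:
    container:
      version: "1.2.0"        # newly added field enabling ordered upgrade
      modprobe:
        moduleName: my_module # hypothetical kernel module name
      kernelMappings:
        - regexp: '^.*$'
          containerImage: example.com/my-module:${KERNEL_FULL_VERSION}  # hypothetical image
```

The nodes already matched by the selector would then receive the corresponding version label, for example with the `kubectl label` command quoted in the comments above.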