Disable container move by setting a flag, instead of using the pmvmAccessCommodity #91
Comments
The scope is broader than the VMPM_ACCESS commodity. For DaemonSet and static pods, we currently make the pod unmonitored to prevent it from moving. We want to address those cases with the flag as well.
For pods created by a Job, should we also disable move and resize? These kinds of pods are expected to live a short time. "Jobs are complementary to Replication Controllers. A Replication Controller manages pods which are not expected to terminate (e.g. web servers), and a Job manages pods that are expected to terminate (e.g. batch jobs)."
I agree. For pods created by Jobs, we can provide value only once we have initial placement support.
I assume this flag is not configurable, right? It always disables move and resize for pods created by a Job or DaemonSet. Also, take a look at StatefulSet: does it make sense to move or resize those pods, and are such actions disruptive?
Yes, we need to run more tests on StatefulSets to better understand their behavior.
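The policy discussed above can be sketched as a simple check on the kind of controller that owns a pod. This is a hypothetical helper, not kubeturbo's actual implementation; the `isMovable` name is an assumption, though the controller kinds (`Job`, `DaemonSet`, and a `Node` owner for static/mirror pods) are standard Kubernetes kinds.

```go
package main

import "fmt"

// isMovable reports whether a pod should be marked movable, based on the
// kind of controller that owns it. Pods owned by Jobs or DaemonSets, and
// static (mirror) pods whose owner is the Node itself, are pinned to their
// node, so move and resize actions should be disabled for them.
// NOTE: hypothetical sketch, not kubeturbo's real code.
func isMovable(ownerKind string) bool {
	switch ownerKind {
	case "Job", "DaemonSet", "Node": // "Node" owner indicates a static/mirror pod
		return false
	default:
		return true
	}
}

func main() {
	for _, kind := range []string{"ReplicationController", "Job", "DaemonSet"} {
		fmt.Printf("owner=%s movable=%v\n", kind, isMovable(kind))
	}
}
```

A flag-based design like this keeps the pod monitored (unlike the current workaround of making DaemonSet and static pods unmonitored) while still suppressing move actions.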
Referenced in commit (turbonomic#91):
- [TRB-54994]: Fixed a few issues that could cause resize actions to fail in namespaces with a LimitRange.
- Added comments indicating that we have updated code taken from upstream Kubernetes.
In the current supply chain (vm -> pod -> container -> app -> vapp), Container entities are bound to their hosting pod and should not be moved. We implement this constraint with a CommodityDTO of type VMPM_ACCESS.
TODO: There should be another way, by setting a flag, to disable the move action for container entities.
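The two approaches can be contrasted in a short sketch. The type and field names below (`EntityDTO`, `Commodity`, `Movable`, `pinWithCommodity`, `pinWithFlag`) are illustrative assumptions, not the actual SDK types: today the container buys a pod-keyed VMPM_ACCESS commodity that only its own pod sells, so the market can never place it elsewhere; the proposed alternative simply marks the container entity as not movable.

```go
package main

import "fmt"

// Commodity models a sold/bought commodity such as VMPM_ACCESS, keyed so
// that a container can only buy it from its own hosting pod.
// NOTE: simplified, hypothetical types; not the real SDK's EntityDTO.
type Commodity struct {
	Type string
	Key  string
}

// EntityDTO is a simplified entity description.
type EntityDTO struct {
	ID      string
	Bought  []Commodity
	Movable bool
}

// pinWithCommodity implements the current approach: the container buys a
// pod-specific VMPM_ACCESS commodity, which only its own pod sells.
func pinWithCommodity(container *EntityDTO, podID string) {
	container.Bought = append(container.Bought,
		Commodity{Type: "VMPM_ACCESS", Key: podID})
	container.Movable = true // movability is constrained by the commodity, not the flag
}

// pinWithFlag implements the proposed approach: mark the container as not
// movable directly, with no extra access commodity.
func pinWithFlag(container *EntityDTO) {
	container.Movable = false
}

func main() {
	a := &EntityDTO{ID: "container-1"}
	pinWithCommodity(a, "pod-1")
	b := &EntityDTO{ID: "container-2"}
	pinWithFlag(b)
	fmt.Println(len(a.Bought), a.Movable, b.Movable)
}
```

The flag variant avoids fabricating a commodity whose only purpose is to encode a placement constraint, which keeps the supply chain cleaner and makes the "do not move" intent explicit on the entity itself.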