Commit

s|github.com/GoogleCloudPlatform/kubernetes|github.com/kubernetes/kubernetes|
eparis committed Sep 3, 2015
1 parent d4145fb commit a7118ba
Showing 4 changed files with 9 additions and 9 deletions.
2 changes: 1 addition & 1 deletion autoscaling.md
@@ -47,7 +47,7 @@ done automatically based on statistical analysis and thresholds.
* Provide a concrete proposal for implementing auto-scaling pods within Kubernetes
* Implementation proposal should be in line with current discussions in existing issues:
* Scale verb - [1629](http://issue.k8s.io/1629)
-* Config conflicts - [Config](https://github.com/GoogleCloudPlatform/kubernetes/blob/c7cb991987193d4ca33544137a5cb7d0292cf7df/docs/config.md#automated-re-configuration-processes)
+* Config conflicts - [Config](https://github.com/kubernetes/kubernetes/blob/c7cb991987193d4ca33544137a5cb7d0292cf7df/docs/config.md#automated-re-configuration-processes)
* Rolling updates - [1353](http://issue.k8s.io/1353)
* Multiple scalable types - [1624](http://issue.k8s.io/1624)

2 changes: 1 addition & 1 deletion deployment.md
@@ -260,7 +260,7 @@ Apart from the above, we want to add support for the following:

## References

-- https://github.com/GoogleCloudPlatform/kubernetes/issues/1743 has most of the
+- https://github.com/kubernetes/kubernetes/issues/1743 has most of the
discussion that resulted in this proposal.


8 changes: 4 additions & 4 deletions horizontal-pod-autoscaler.md
@@ -61,7 +61,7 @@ HorizontalPodAutoscaler object will be bound with exactly one Scale subresource
autoscaling associated replication controller/deployment through it.
The main advantage of such approach is that whenever we introduce another type we want to auto-scale,
we just need to implement Scale subresource for it (w/o modifying autoscaler code or API).
-The wider discussion regarding Scale took place in [#1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629).
+The wider discussion regarding Scale took place in [#1629](https://github.com/kubernetes/kubernetes/issues/1629).

Scale subresource will be present in API for replication controller or deployment under the following paths:
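As a rough illustration of the Scale subresource idea described above, here is a minimal sketch of what such an object could look like in Go; the field names (Spec.Replicas, Status.Replicas, Status.PodSelector) are assumptions drawn from the surrounding text, not the exact API definition from the proposal.

```go
// Illustrative sketch only (not the proposal's exact API): a minimal
// Scale-style subresource. Whatever object type is being autoscaled
// (replication controller, deployment, ...) only needs to serve an object
// of this shape; the autoscaler itself never has to know about the type.
package autoscaling

type ScaleSpec struct {
	// Desired number of replicas for the scaled object.
	Replicas int `json:"replicas"`
}

type ScaleStatus struct {
	// Most recently observed number of replicas.
	Replicas int `json:"replicas"`
	// Label selector for the pods managed by the scaled object; the
	// autoscaler queries usage metrics for exactly these pods.
	PodSelector map[string]string `json:"podSelector,omitempty"`
}

type Scale struct {
	Spec   ScaleSpec   `json:"spec"`
	Status ScaleStatus `json:"status"`
}
```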

@@ -192,7 +192,7 @@ The autoscaler will be implemented as a control loop.
It will periodically (e.g.: every 1 minute) query pods described by ```Status.PodSelector``` of Scale subresource,
and check their average CPU or memory usage from the last 1 minute
(there will be API on master for this purpose, see
-[#11951](https://github.com/GoogleCloudPlatform/kubernetes/issues/11951).
+[#11951](https://github.com/kubernetes/kubernetes/issues/11951).
Then, it will compare the current CPU or memory consumption with the Target,
and adjust the count of the Scale if needed to match the target
(preserving condition: MinCount <= Count <= MaxCount).
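A minimal sketch of that control loop follows, assuming a simple proportional rule (desired count = ceil(observed usage / target × current count), clamped to [MinCount, MaxCount]); the MetricsClient and ScaleTarget interfaces are hypothetical stand-ins for the master metrics API and the Scale subresource client, not the actual implementation.

```go
// Hypothetical sketch of the control loop described above; interface and
// field names are assumptions, not the proposal's final API.
package autoscaler

import (
	"math"
	"time"
)

// MetricsClient stands in for the master API that reports average CPU or
// memory usage of a set of pods over the last minute.
type MetricsClient interface {
	// AverageUtilizationRatio returns observed usage divided by the per-pod
	// Target (1.0 means the pods are exactly on target).
	AverageUtilizationRatio(podSelector map[string]string) (float64, error)
}

// ScaleTarget stands in for the Scale subresource of the autoscaled object.
type ScaleTarget interface {
	PodSelector() map[string]string
	Count() int
	SetCount(n int) error
}

// Run reconciles the Scale's count toward the target utilization every
// period (e.g. one minute), preserving MinCount <= Count <= MaxCount.
func Run(metrics MetricsClient, target ScaleTarget, minCount, maxCount int, period time.Duration) {
	for range time.Tick(period) {
		ratio, err := metrics.AverageUtilizationRatio(target.PodSelector())
		if err != nil {
			continue // skip this iteration; try again next period
		}
		// Assumed proportional rule: scale the replica count by how far the
		// observed usage is from the target.
		desired := int(math.Ceil(ratio * float64(target.Count())))
		if desired < minCount {
			desired = minCount
		}
		if desired > maxCount {
			desired = maxCount
		}
		if desired != target.Count() {
			_ = target.SetCount(desired) // errors are retried next iteration
		}
	}
}
```

Because the loop only talks to the Scale subresource, the same code would work for any scalable type, which is the decoupling argument made earlier in this proposal.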
@@ -265,9 +265,9 @@ Our design is in general compatible with them.
and then turned-on when there is a demand for them.
When a request to service with no pods arrives, kube-proxy will generate an event for autoscaler
to create a new pod.
-Discussed in [#3247](https://github.com/GoogleCloudPlatform/kubernetes/issues/3247).
+Discussed in [#3247](https://github.com/kubernetes/kubernetes/issues/3247).
* When scaling down, make more educated decision which pods to kill (e.g.: if two or more pods are on the same node, kill one of them).
-Discussed in [#4301](https://github.com/GoogleCloudPlatform/kubernetes/issues/4301).
+Discussed in [#4301](https://github.com/kubernetes/kubernetes/issues/4301).
* Allow rule based autoscaling: instead of specifying the target value for metric,
specify a rule, e.g.: “if average CPU consumption of pod is higher than 80% add two more replicas”.
This approach was initially suggested in
6 changes: 3 additions & 3 deletions job.md
@@ -40,8 +40,8 @@ for managing pod(s) that require running once to completion even if the machine
the pod is running on fails, in contrast to what ReplicationController currently offers.

Several existing issues and PRs were already created regarding that particular subject:
-* Job Controller [#1624](https://github.com/GoogleCloudPlatform/kubernetes/issues/1624)
-* New Job resource [#7380](https://github.com/GoogleCloudPlatform/kubernetes/pull/7380)
+* Job Controller [#1624](https://github.com/kubernetes/kubernetes/issues/1624)
+* New Job resource [#7380](https://github.com/kubernetes/kubernetes/pull/7380)


## Use Cases
@@ -181,7 +181,7 @@ Below are the possible future extensions to the Job controller:
* Be able to limit the execution time for a job, similarly to ActiveDeadlineSeconds for Pods.
* Be able to create a chain of jobs dependent one on another.
* Be able to specify the work each of the workers should execute (see type 1 from
-[this comment](https://github.com/GoogleCloudPlatform/kubernetes/issues/1624#issuecomment-97622142))
+[this comment](https://github.com/kubernetes/kubernetes/issues/1624#issuecomment-97622142))
* Be able to inspect Pods running a Job, especially after a Job has finished, e.g.
by providing pointers to Pods in the JobStatus ([see comment](https://github.com/kubernetes/kubernetes/pull/11746/files#r37142628)).

