Merge pull request kubernetes#130 from jberkus/jberkus_summit_notes
Added Developer notes in a subfolder
Showing 6 changed files with 107 additions and 0 deletions.

File renamed without changes.
File renamed without changes.
48 changes: 48 additions & 0 deletions
community/developer-summit-2016/application_service_definition_notes.md

@@ -0,0 +1,48 @@

# Service/Application Definition

We think we need to help developers with how to organize their services, define them cleanly, and deploy them on the orchestrator of choice. Writing the Kube files is a steep learning curve, so can we have something that is a little bit easier?

Helm solves one piece of this.

Helm contrib: one of the things folks ask us for is a workflow where they start from a Dockerfile and go from Dockerfile --> image build --> registry --> resource definition.

There are different ways to package applications. There's the potential for a lot of fragmentation in multi-pod application definitions. Can we create standards here?

We want to build and generate manifests with one tool. We want "fun in five", that is, have it up and running in five minutes or less.

Another issue is testing mode; currently production-quality Helm charts don't really work on minikube. There are some known issues around this: we need dummy PVCs, LoadBalancer, etc. Also DNS and Ingress.

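One common workaround shape, sketched here purely as an assumption (it presumes a chart that exposes the service type and persistence as values, which many charts do but none are required to):

```yaml
# values-minikube.yaml -- hypothetical override values for local testing
service:
  type: NodePort      # minikube has no cloud LoadBalancer to provision
persistence:
  enabled: false      # fall back to emptyDir instead of a real PVC
```
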
We need the 80% case; Fabric8 is a good example of this. We need a good set of boundary conditions so that the new definition doesn't get bigger than the Kube implementation. Affinity/placement is a good example of the "other 20%".

We also need to look at how to get developer feedback on this so that we're building what they need. Pradeepto did a comparison of Kompose vs. Docker Compose for simplicity/usability.

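For reference, the kind of input being compared: a minimal compose file (service names illustrative) of the sort Kompose can translate into Kubernetes objects with `kompose convert`:

```yaml
# docker-compose.yml
version: "2"
services:
  web:
    image: nginx
    ports:
      - "80:80"
  db:
    image: postgres
```
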
One of the things we're discussing is the Kompose API. We want to get rid of this and supply something which people can use directly with Kubernetes. A bunch of shops only have developers. Someone asked, though, what's so complicated about Kube definitions. Have we identified what gives people trouble with this? We push too many concepts on developers too quickly. We want some high-level abstract types which represent the 95% use case. Then we could decompose these to the real types.

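No such type exists; purely as a hypothetical illustration of "one high-level object that decomposes to real types":

```yaml
# Hypothetical high-level type -- NOT a real Kubernetes kind.
kind: SimpleWebApp
name: guestbook       # illustrative
image: guestbook:1.0  # illustrative
replicas: 2
expose: 80            # would decompose to a Deployment, a Service, and an Ingress
```
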
What's the gap between compose files and the goal? As an example, say you want to run a webserver pod. You have to deal with an Ingress, and a Service, and a replication controller, and a bunch of other things. What's the equivalent of "docker run", which is easy to get? The critical thing is how fast you can learn it.

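To make the gap concrete, a minimal sketch (names illustrative, written against the current apps/v1 API group) of what the one-liner `docker run -d -p 80:80 nginx` expands to in Kubernetes objects:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative
spec:
  replicas: 1
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: {app: web}
  ports:
  - port: 80                 # an Ingress would still be needed for external HTTP routing
```
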
We also need reversibility, so that if you use compose you don't have to edit the Kube config after deployment; you can still use the simple concepts. The context of the chart must not be lost.

There was discussion of templating applications. One person argued that it's really a type system. Erin suggested that it's more like a personal template, like a car-seat configuration.

There's a need to let developers work on "their machine" using the same spec. Looking through docker-compose, it's about what developers want, not what Kubernetes wants. This needs to focus on what developers know, not the Kube objects.

Someone argued that if we use Deployments it's really not that complex. We probably use too much complexity in our examples. But if we want to do better than docker-compose, what does it look like? It's hard to imagine what that is.

Maybe the best approach is to create a list of what we need for "what is my app" and compare it with current deployment files.

There was a lot of discussion of what this looks like.

Is this different from what the PaaSes already do? It's not that different; we want something that works with core Kubernetes, and PaaSes are opinionated in different ways.

Being able to view an application as a single unifying concept is a major desire. We want to click "my app" and see all of the objects associated with it. It would be an overlay on top of Kubernetes, not something in core.

One pending feature is that you can't look up different types of controllers in the API; that's going to be fixed. Another one is that we can't trace the dependencies; Helm doesn't label all of the components deployed with the app.

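One plausible shape for such labelling, sketched as an assumption (the `app.kubernetes.io/*` keys shown are the convention Kubernetes later recommended, not something decided in this session):

```yaml
# Stamp every object deployed with the app with a common label...
apiVersion: v1
kind: Service
metadata:
  name: web                             # illustrative
  labels:
    app.kubernetes.io/name: web
    app.kubernetes.io/part-of: myapp    # the "my app" grouping key
spec:
  selector: {app: web}
  ports:
  - port: 80
# ...so tooling can render the whole application with one selector:
#   kubectl get all -l app.kubernetes.io/part-of=myapp
```
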
Need to identify things which are missing in core Kubernetes, if there are any.

## Action Items:

* Reduce the verbosity of injecting ConfigMaps. We want to simplify the main Kubernetes API; for example, there should be a way to map all variables to ENV in one statement (see the sketch after this list).
* Document where things are hard to understand with Deployments.
* Document where things don't work with minikube and Deployments.
* Document the path from minecraft.jar to running it on a Kubernetes cluster.
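On the first item: Kubernetes did later grow exactly such a one-statement form (`envFrom`); a minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo             # illustrative
spec:
  containers:
  - name: app
    image: myapp:1.0     # illustrative
    envFrom:             # one statement: every key in the ConfigMap becomes an ENV var
    - configMapRef:
        name: app-config
```
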
21 changes: 21 additions & 0 deletions
community/developer-summit-2016/cluster_federation_notes.md

@@ -0,0 +1,21 @@

# Cluster Federation

There's a whole bunch of reasons why federation is interesting. There's HA, there's geographic locality, there's just managing very large clusters. Use cases:

* HA
* Hybrid Cloud
* Geo/latency
* Scalability (many large clusters instead of one gigantic one)
* Visibility of multiple clusters

You don't actually need federation for geo-location now, but it helps. The mental model for this is kind of like Amazon AZs or Google zones. Sometimes we don't care where a resource is, but sometimes we do. Sometimes you want specific policy control, like regulatory constraints about what can run where.

From the enterprise point of view, central IT wants control and knowledge of where stuff gets deployed. Bob thinks it would be a very bad idea for us to try to solve complex policy ideas and enable them; it's a tar pit. We should just have the primitives of having different regions and being able to say what goes where.

Currently, you either do node labelling, which ends up being complex and dependent on discipline, or you have different clusters and you don't have common namespaces. There was some discussion of the Intel proposal for cluster metadata.

Bob's mental model is AWS regions and AZs. For example, if you're building a big Cassandra cluster, you want to make sure that the nodes aren't all in the same zone.

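As a sketch of that zone-spreading idea within a single cluster (an assumption, not something presented in the session), pod anti-affinity over the zone topology label:

```yaml
# Fragment of a pod template: no two cassandra pods land in the same zone.
# The label key is the one Kubernetes later standardized; older clusters
# used failure-domain.beta.kubernetes.io/zone.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: cassandra                        # illustrative label
      topologyKey: topology.kubernetes.io/zone
```
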
Quinton went over a WIP implementation for applying policies, with a tool which applies policy before resource requests go to the scheduler. It uses an open-source policy language, and labels on the request.

Notes interrupted here, hopefully other members will fill in.
File renamed without changes.
@@ -0,0 +1,38 @@

# StatefulSets Session

Topics to talk about:

* Local volumes
* Requests for the storage SIG
* Reclaim policies
* Filtering APIs for scheduler
* Data locality
* State of the StatefulSet
* Portable IPs
* Sticky Regions
* Renaming Pods

## State of the StatefulSet

1.5 will come out soon; we'll go beta for StatefulSets in that one. One of the questions is: what are the next steps for StatefulSets? One thing is a long beta, so that we know we can trust StatefulSets and that they're safe.

Missed some discussion here about force deletion.

The pod isn't done until the kubelet says it's done. The issue is what happens when we have a netsplit, because the master doesn't know what's happening with the pods. In the future we may add some kind of fencer to make sure that they can't rejoin. Fencing is probably a topic for the Bare-Metal SIG.

Are we going to sacrifice availability for consistency? We won't explicitly take actions which aren't safe automatically. Question: should the kubelet delete automatically if it can't contact the master? No, because it can't contact the master to say it did it.

When are we going to finish the rename from PetSet to StatefulSet? The PR is merged for renaming, but the documentation changes aren't.

Storage provisioning? The assumption is that you will be able to preallocate a lot of storage for dynamic provisioning, so that you can stamp out PVCs. If dynamic volumes aren't simple to use, this is a lot more annoying.

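A minimal sketch of the "stamp out PVCs" mechanism via a StatefulSet's `volumeClaimTemplates` (names and storage class are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                    # illustrative
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels: {app: db}
  template:
    metadata:
      labels: {app: db}
    spec:
      containers:
      - name: db
        image: cassandra      # illustrative
        volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumeClaimTemplates:       # one PVC stamped out per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast  # assumes a dynamic provisioner backs this class
      resources:
        requests:
          storage: 10Gi
```
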
There's also the issue of building initial quorums.

It would be great to have a developer storage class which ties back to a fake NFS, for testing and dev. The idea behind local volumes is that it should be easy to create throwaway storage on local disk, so that you can write things which run on every Kube cluster.

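A sketch of what such throwaway dev storage could look like (an assumption, using a plain hostPath PersistentVolume; everything here is illustrative):

```yaml
# Dev-only throwaway storage: a PersistentVolume backed by local disk,
# so charts that require a PVC can run on any single-node cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-pv-0              # illustrative
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /tmp/dev-pv-0       # wiped with the node; fine for dev, never prod
```
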
Will there be an API for the application, to communicate members joining and leaving? The answer today is that's what the Kube API is for.

The hard problem is config change. You can't do config change unless you bootstrap it correctly. If Kube is changing things under me, I can't maintain quorum (as an app). This happens when expanding the set of nodes. You need to figure out who's in and who's out.

Where does the glue software live which relates the StatefulSet to the application? Different applications handle things like consensus and quorum very differently. What about notifying the service that you're available for traffic? An example of this is etcd, with readiness vs. a membership service. You can have two states: one where the node is ready, and one where the application is ready. Could the readiness vs. liveness check differentiate these?

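A sketch of that two-state idea using the existing probes (paths and port are illustrative assumptions):

```yaml
# Fragment of a container spec: liveness says "the process is up",
# readiness says "send me traffic" -- e.g. only true once this member
# has joined the quorum.
livenessProbe:
  httpGet:
    path: /healthz            # illustrative endpoint
    port: 8080
readinessProbe:
  httpGet:
    path: /ready              # illustrative endpoint
    port: 8080
```
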
Is rapid spin-up a real issue? Nobody thinks so.