├── contributors
│   ├── design-proposals
│   │   ├── images
│   │   │   └── .gitignore
│   │   ├── clustering
│   │   │   ├── .gitignore
│   │   │   ├── static.png
│   │   │   ├── dynamic.png
│   │   │   ├── static.seqdiag
│   │   │   ├── Dockerfile
│   │   │   ├── dynamic.seqdiag
│   │   │   ├── README.md
│   │   │   └── Makefile
│   │   ├── pleg.png
│   │   ├── pod-cache.png
│   │   ├── architecture.dia
│   │   ├── architecture.png
│   │   ├── node-allocatable.png
│   │   ├── ubernetes-design.png
│   │   ├── volume-snapshotting.png
│   │   ├── Kubemark_architecture.png
│   │   ├── ubernetes-scheduling.png
│   │   ├── monitoring_architecture.png
│   │   ├── ubernetes-cluster-state.png
│   │   ├── federation-high-level-arch.png
│   │   ├── high-availability.md
│   │   ├── kubelet-hypercontainer-runtime.md
│   │   ├── runtimeconfig.md
│   │   ├── README.md
│   │   ├── scalability-testing.md
│   │   ├── architecture.md
│   │   ├── admission_control.md
│   │   ├── kubelet-auth.md
│   │   ├── volume-ownership-management.md
│   │   ├── scheduler_extender.md
│   │   ├── initial-resources.md
│   │   ├── service-discovery.md
│   │   └── principles.md
│   └── devel
│       ├── git_workflow.png
│       ├── pr_workflow.dia
│       ├── pr_workflow.png
│       ├── gubernator-images
│       │   ├── filterpage.png
│       │   ├── skipping1.png
│       │   ├── skipping2.png
│       │   ├── filterpage1.png
│       │   ├── filterpage2.png
│       │   ├── filterpage3.png
│       │   └── testfailures.png
│       ├── local-cluster
│       │   └── k8s-singlenode-docker.png
│       ├── cli-roadmap.md
│       ├── bazel.md
│       ├── client-libraries.md
│       ├── logging.md
│       ├── getting-builds.md
│       ├── go-code.md
│       ├── on-call-rotations.md
│       ├── collab.md
│       ├── instrumentation.md
│       ├── profiling.md
│       ├── cherry-picks.md
│       ├── generating-clientset.md
│       ├── issues.md
│       ├── README.md
│       ├── community-expectations.md
│       ├── on-call-user-support.md
│       ├── updating-docs-for-feature-changes.md
│       ├── kubelet-cri-networking.md
│       ├── adding-an-APIGroup.md
│       ├── godep.md
│       ├── pull-requests.md
│       ├── automation.md
│       └── owners.md
├── archive
│   ├── README.md
│   └── sig-configuration
│       └── README.md
├── sig-apps
│   ├── minutes
│   │   ├── README.md
│   │   ├── 2016-05-25.md
│   │   ├── 2016-07-06.md
│   │   ├── 2016-07-20.md
│   │   ├── 2016-05-18.md
│   │   ├── 2016-06-29.md
│   │   ├── 2016-06-22.md
│   │   ├── 2016-06-15.md
│   │   ├── 2016-07-13.md
│   │   ├── 2016-08-03.md
│   │   ├── 2016-06-08.md
│   │   └── 2016-07-27.md
│   ├── README.md
│   └── agenda.md
├── community
│   ├── developer-summit-2016
│   │   ├── k8sDevSummitSchedule.pdf
│   │   ├── cluster_federation_notes.md
│   │   ├── KubDevSummitVoting.md
│   │   ├── statefulset_notes.md
│   │   ├── application_service_definition_notes.md
│   │   └── Kubernetes_Dev_Summit.md
│   ├── README.md
│   ├── Kubernetes_1st_Bday.md
│   └── fixit201606.md
├── sig-storage
│   ├── 1.3-retrospective
│   │   └── 2016-03-28_Storage-SIG-F2F_Notes.pdf
│   ├── README.md
│   └── contributing.md
├── sig-node
│   └── README.md
├── sig-big-data
│   └── README.md
├── sig-windows
│   └── README.md
├── sig-contribx
│   └── README.md
├── sig-instrumentation
│   └── README.md
├── sig-autoscaling
│   └── README.md
├── sig-cluster-lifecycle
│   └── README.md
├── sig-api-machinery
│   └── README.md
├── sig-service-catalog
│   └── README.md
├── sig-rktnetes
│   └── README.md
├── sig-scheduling
│   └── README.md
├── sig-cluster-ops
│   └── README.md
├── CONTRIBUTING.md
├── sig-docs
│   └── README.md
├── sig-ui
│   └── README.md
├── sig-openstack
│   ├── SIG-members.md
│   └── README.md
├── sig-federation
│   └── README.md
├── sig-aws
│   └── README.md
├── sig-scalability
│   └── README.md
├── sig-auth
│   └── README.md
├── sig-cli
│   └── README.md
├── CLA.md
├── sig-network
│   └── README.md
├── sig-testing
│   └── README.md
└── project-managers
    └── README.md

--------------------------------------------------------------------------------
/contributors/design-proposals/images/.gitignore:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/contributors/design-proposals/clustering/.gitignore:
--------------------------------------------------------------------------------
1 | DroidSansMono.ttf
2 |
--------------------------------------------------------------------------------
/archive/README.md:
--------------------------------------------------------------------------------
1 | # Archive
2 |
3 | The archive contains information for SIGs that are no longer active.
4 |
--------------------------------------------------------------------------------
/contributors/devel/git_workflow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/devel/git_workflow.png
--------------------------------------------------------------------------------
/contributors/devel/pr_workflow.dia:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/devel/pr_workflow.dia
--------------------------------------------------------------------------------
/contributors/devel/pr_workflow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/devel/pr_workflow.png
--------------------------------------------------------------------------------
/contributors/design-proposals/pleg.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/pleg.png
--------------------------------------------------------------------------------
/contributors/design-proposals/pod-cache.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/pod-cache.png
--------------------------------------------------------------------------------
/contributors/design-proposals/architecture.dia:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/architecture.dia
--------------------------------------------------------------------------------
/contributors/design-proposals/architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/architecture.png
--------------------------------------------------------------------------------
/contributors/design-proposals/clustering/static.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/clustering/static.png
--------------------------------------------------------------------------------
/contributors/design-proposals/node-allocatable.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/node-allocatable.png
--------------------------------------------------------------------------------
/contributors/design-proposals/ubernetes-design.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/ubernetes-design.png
--------------------------------------------------------------------------------
/contributors/devel/gubernator-images/filterpage.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/devel/gubernator-images/filterpage.png
--------------------------------------------------------------------------------
/contributors/devel/gubernator-images/skipping1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/devel/gubernator-images/skipping1.png
--------------------------------------------------------------------------------
/contributors/devel/gubernator-images/skipping2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/devel/gubernator-images/skipping2.png
--------------------------------------------------------------------------------
/contributors/design-proposals/clustering/dynamic.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/clustering/dynamic.png
--------------------------------------------------------------------------------
/contributors/design-proposals/volume-snapshotting.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/volume-snapshotting.png
--------------------------------------------------------------------------------
/contributors/devel/gubernator-images/filterpage1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/devel/gubernator-images/filterpage1.png
--------------------------------------------------------------------------------
/contributors/devel/gubernator-images/filterpage2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/devel/gubernator-images/filterpage2.png
--------------------------------------------------------------------------------
/contributors/devel/gubernator-images/filterpage3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/devel/gubernator-images/filterpage3.png
--------------------------------------------------------------------------------
/contributors/devel/gubernator-images/testfailures.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/devel/gubernator-images/testfailures.png
--------------------------------------------------------------------------------
/sig-apps/minutes/README.md:
--------------------------------------------------------------------------------
1 | [Minutes have moved to Google Docs](https://docs.google.com/document/d/1LZLBGW2wRDwAfdBNHJjFfk9CFoyZPcIYGWU7R1PQ3ng/edit#).
2 |
--------------------------------------------------------------------------------
/community/developer-summit-2016/k8sDevSummitSchedule.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/community/developer-summit-2016/k8sDevSummitSchedule.pdf
--------------------------------------------------------------------------------
/contributors/design-proposals/Kubemark_architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/Kubemark_architecture.png
--------------------------------------------------------------------------------
/contributors/design-proposals/ubernetes-scheduling.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/ubernetes-scheduling.png
--------------------------------------------------------------------------------
/contributors/design-proposals/monitoring_architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/monitoring_architecture.png
--------------------------------------------------------------------------------
/contributors/design-proposals/ubernetes-cluster-state.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/ubernetes-cluster-state.png
--------------------------------------------------------------------------------
/contributors/devel/local-cluster/k8s-singlenode-docker.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/devel/local-cluster/k8s-singlenode-docker.png
--------------------------------------------------------------------------------
/contributors/design-proposals/federation-high-level-arch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/contributors/design-proposals/federation-high-level-arch.png
--------------------------------------------------------------------------------
/sig-storage/1.3-retrospective/2016-03-28_Storage-SIG-F2F_Notes.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zen/community/master/sig-storage/1.3-retrospective/2016-03-28_Storage-SIG-F2F_Notes.pdf
--------------------------------------------------------------------------------
/sig-node/README.md:
--------------------------------------------------------------------------------
1 | [Meeting Notes](https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit?usp=sharing)
2 |
3 | [Hangout](https://plus.google.com/hangouts/_/google.com/sig-node-meetup?authuser=0)
--------------------------------------------------------------------------------
/sig-big-data/README.md:
--------------------------------------------------------------------------------
1 | # NOTE: THE BIG DATA SIG IS INDEFINITELY SUSPENDED, IN FAVOR OF THE ["APPS" SIG](https://github.com/kubernetes/community/blob/master/sig-apps/README.md).
2 |
3 | [Old Meeting Notes](https://docs.google.com/document/d/1YhNLN39f5oZ4AHn_g7vBp0LQd7k37azL7FkWG8CEDrE/edit)
4 |
--------------------------------------------------------------------------------
/contributors/design-proposals/clustering/static.seqdiag:
--------------------------------------------------------------------------------
1 | seqdiag {
2 |   activation = none;
3 |
4 |   admin[label = "Manual Admin"];
5 |   ca[label = "Manual CA"]
6 |   master;
7 |   kubelet[stacked];
8 |
9 |   admin => ca [label="create\n- master-cert"];
10 |   admin ->> master [label="start\n- ca-root\n- master-cert"];
11 |
12 |   admin => ca [label="create\n- kubelet-cert"];
13 |   admin ->> kubelet [label="start\n- ca-root\n- kubelet-cert\n- master-location"];
14 |
15 |   kubelet => master [label="register\n- kubelet-location"];
16 | }
17 |
--------------------------------------------------------------------------------
/sig-windows/README.md:
--------------------------------------------------------------------------------
1 | # SIG Windows
2 |
3 | A special interest group for bringing Kubernetes support to Windows.
4 |
5 | ## Meeting
6 | * Bi-weekly: Tuesday 1:00 PM EST (10:00 AM PST)
7 | * Zoom link: [https://zoom.us/my/sigwindows](https://zoom.us/my/sigwindows)
8 | * Meeting Notes: https://docs.google.com/document/d/1Tjxzjjuy4SQsFSUVXZbvqVb64hjNAG5CQX8bK7Yda9w/edit#heading=h.kbz22d1yc431
9 |
10 | The meeting agenda and notes can be found [here](https://docs.google.com/document/d/1Tjxzjjuy4SQsFSUVXZbvqVb64hjNAG5CQX8bK7Yda9w/edit)
11 |
12 |
--------------------------------------------------------------------------------
/sig-contribx/README.md:
--------------------------------------------------------------------------------
1 | # SIG Contribx
2 | ## Meeting:
3 | * Meetings: Wednesdays 9:30AM PST (biweekly and changing soon)
4 | * Zoom Link: https://zoom.us/j/7658488911
5 | * Check out the [Agenda and Minutes](https://docs.google.com/document/d/1qf-02B7EOrItQgwXFxgqZ5qjW0mtfu5qkYIF1Hl4ZLI/)!
6 |
7 | ## Goals:
8 | * Improve the experience for contributors working on Kubernetes
9 | * Help people get involved in the kubernetes community
10 |
11 | ## Organizers:
12 | * Phillip Wittrock pwittroc@google.com, Google
13 | * Garrett Rodrigues grod@google.com, Google
14 |
--------------------------------------------------------------------------------
/contributors/design-proposals/high-availability.md:
--------------------------------------------------------------------------------
1 | # High Availability of Scheduling and Controller Components in Kubernetes
2 |
3 | This document is deprecated. For more details about running a highly available
4 | cluster master, please see the [admin instructions document](https://github.com/kubernetes/kubernetes/blob/master/docs/admin/high-availability.md).
5 |
6 |
7 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/high-availability.md?pixel)]()
8 |
--------------------------------------------------------------------------------
/sig-apps/minutes/2016-05-25.md:
--------------------------------------------------------------------------------
1 | # May 25, 2016
2 |
3 | * Intro by Michelle
4 | * Mike Metral from Rackspace demo
5 | * Recursively process configuration files with the -R flag
6 | * Discuss the survey
7 | * Gabe from Deis, Mike from Red Hat, and Doug Davis from IBM are willing to chip in.
8 | * How can we provide Prashanth and pet sets feedback?
9 | * We need some docs on how to use it with some examples (2) that you can copy and paste.
10 | * Lack of examples and docs is a blocker for new k8s feature adoption.
11 |
12 | Watch the [recording](https://vimeo.com/168816241)
13 |
--------------------------------------------------------------------------------
/sig-apps/minutes/2016-07-06.md:
--------------------------------------------------------------------------------
1 | # July 6, 2016
2 |
3 | - Matt Farina gave an introduction and overview of the agenda
4 | - Chris Love gave a demo of his Cassandra cluster
5 | - It was a great example of PetSet in action
6 | - His demo application is located at https://github.com/k8s-for-greeks/gpmr
7 | - Matt Farina gave an update on the analysis of the sig-apps survey
8 | - He will reach out to those interested in helping analyze the data
9 | - It's not too late to volunteer to help. Contact @mattfarina for more.
10 |
11 |
12 | Watch the [recording](https://youtu.be/J-HWkEp8GcA).
13 |
--------------------------------------------------------------------------------
/sig-instrumentation/README.md:
--------------------------------------------------------------------------------
1 | # SIG Instrumentation
2 |
3 | **Leads:** [@piosz (Piotr Szczesniak, Google)](https://github.com/piosz), [@fabxc (Fabian Reinartz, CoreOS)](https://github.com/fabxc)
4 |
5 | **Mailing List:** [kubernetes-sig-instrumentation](https://groups.google.com/forum/#!forum/kubernetes-sig-instrumentation)
6 | **Slack Channel:** [#sig-instrumentation](https://kubernetes.slack.com/messages/sig-instrumentation)
7 | **Meeting Notes:** [https://docs.google.com/document/d/1gWuAATtlmI7XJILXd31nA4kMq6U9u63L70382Y3xcbM/edit](https://docs.google.com/document/d/1gWuAATtlmI7XJILXd31nA4kMq6U9u63L70382Y3xcbM/edit)
8 |
9 |
--------------------------------------------------------------------------------
/contributors/devel/cli-roadmap.md:
--------------------------------------------------------------------------------
1 | # Kubernetes CLI/Configuration Roadmap
2 |
3 | See github issues with the following labels:
4 | * [area/app-config-deployment](https://github.com/kubernetes/kubernetes/labels/area/app-config-deployment)
5 | * [component/kubectl](https://github.com/kubernetes/kubernetes/labels/component/kubectl)
6 | * [component/clientlib](https://github.com/kubernetes/kubernetes/labels/component/clientlib)
7 |
8 |
9 |
10 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cli-roadmap.md?pixel)]()
11 |
12 |
--------------------------------------------------------------------------------
/sig-autoscaling/README.md:
--------------------------------------------------------------------------------
1 | This is the wiki page of the Kubernetes Autoscaling SIG: a special interest group for autoscaling-related topics.
The scope of our SIG includes (but is not limited to):
2 |
3 | * autoscaling of clusters,
4 | * horizontal and vertical autoscaling of pods,
5 | * setting initial resources for pods,
6 | * topics related to monitoring pods and gathering their metrics (e.g.: Heapster)
7 |
8 | [Meeting Notes](https://docs.google.com/document/d/1RvhQAEIrVLHbyNnuaT99-6u9ZUMp7BfkPupT2LAZK7w/edit)
9 |
10 | [Hangout](https://plus.google.com/hangouts/_/google.com/k8s-autoscaling)
11 |
12 | Mailing list: kubernetes-sig-autoscaling@googlegroups.com
--------------------------------------------------------------------------------
/archive/sig-configuration/README.md:
--------------------------------------------------------------------------------
1 | This group is for discussion about application configuration and deployment in Kubernetes. We use the label `area/app-config-deployment` to tag related issues and pull requests on Github.
2 |
3 | For Kubernetes cluster configuration and deployment, see [SIG-Cluster-Ops](SIG-Cluster-Ops).
4 |
5 | [Google group](https://groups.google.com/forum/#!forum/kubernetes-sig-config)
6 |
7 | [Slack](https://kubernetes.slack.com/messages/sig-configuration/)
8 |
9 | For access to the meeting agenda and notes, please join the Google group.
10 | * [Meeting Agenda and Notes](https://docs.google.com/document/d/1FZ_jiOdBZ_bfPD5Y3QMcfs2SCx956QLc9_E0QdR_l20/edit)
11 |
12 |
--------------------------------------------------------------------------------
/community/README.md:
--------------------------------------------------------------------------------
1 | # Weekly Community Video Conference
2 |
3 | We have PUBLIC and RECORDED [weekly video meetings](https://zoom.us/my/kubernetescommunity) every Thursday at 10am US Pacific Time. You can [find the time in your timezone with this table](https://www.google.com/search?q=1000+am+in+pst).
4 |
5 | To be added to the calendar items, join this [google group](https://groups.google.com/forum/#!forum/kubernetes-community-video-chat) for further instructions.
6 |
7 | If you have a topic you'd like to present or would like to see discussed, please propose a specific date on the Kubernetes Community Meeting [Working Document](https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit#).
8 |
--------------------------------------------------------------------------------
/sig-cluster-lifecycle/README.md:
--------------------------------------------------------------------------------
1 | # SIG Cluster Lifecycle
2 |
3 | **Leads:** [@lukemarsden (Luke Marsden, Weave)](https://github.com/lukemarsden)
4 |
5 | **Slack Channel:** [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
6 |
7 | **Mailing List:** [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
8 |
9 | **Meetings:**
10 | - [Zoom Meeting](https://zoom.us/j/166836624) Tuesdays 9am (PDT)
11 | - [Notes](https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit)
12 |
13 |
14 |
15 | Mission
16 | -------
17 |
18 | [Work-in-progress](https://docs.google.com/document/d/1E4cspBTKfzXGxlfMJRIpk-zC4CzoDz8VF_DyofcBNeo/edit)
19 |
20 |
--------------------------------------------------------------------------------
/sig-apps/minutes/2016-07-20.md:
--------------------------------------------------------------------------------
1 | # July 20, 2016
2 |
3 | - Michelle Noorali gave an introduction and overview of the agenda
4 | - Janet Kuo gave an overview of Deployment features
5 | - See her [blog post](http://blog.kubernetes.io/2016/04/using-deployment-objects-with.html)
6 | - She used [minikube](https://github.com/kubernetes/minikube) for the local cluster setup during her demo
7 | - Saad Ali gave an overview of Volume features and things to look forward to around Volumes
8 | - Check out his [presentation](https://docs.google.com/presentation/d/17w7GqwGE8kO9WPNAO1qC8NyS7dRw_oLBwqKznD9WqUs)
9 | - We are looking forward to the sig-apps survey results next week
10 |
11 | Watch the [recording](https://youtu.be/DrLGxkFdDNc?list=PLI1CvzwXi1cUVsxm2QBIyJgeCkf62ylun).
12 |
--------------------------------------------------------------------------------
/sig-api-machinery/README.md:
--------------------------------------------------------------------------------
1 | Kubernetes Special Interest Group for API Machinery (SIG API Machinery)
2 |
3 | Links/Info:
4 | * Github team: https://github.com/orgs/kubernetes/teams/sig-api-machinery-misc (Use ‘@kubernetes/sig-api-machinery-misc’ on github to notify team members.)
5 | * Mailing list: https://groups.google.com/forum/#!forum/kubernetes-sig-api-machinery
6 | * Slack channel: https://kubernetes.slack.com/messages/sig-api-machinery/
7 | * [Agenda/Mission Doc](https://goo.gl/x5nWrF)
8 | * [Hangout Link](https://staging.talkgadget.google.com/hangouts/_/google.com/kubernetes-sig)
9 | * Areas: apiserver, api registration and discovery, generic API CRUD semantics, admission control, encoding/decoding, conversion, defaulting, persistence layer (etcd), OpenAPI, third-party resource, garbage collection, client libraries
10 |
--------------------------------------------------------------------------------
/sig-service-catalog/README.md:
--------------------------------------------------------------------------------
1 | # SIG Service Catalog
2 |
3 | **Leads:** Paul Morie (@pmorie, Red Hat), Chris Dutra (@dutronlabs, Apprenda)
4 |
5 | **Slack Channel:** [#sig-service-catalog](https://kubernetes.slack.com/messages/sig-service-catalog/). [Archive](http://kubernetes.slackarchive.io/sig-service-catalog/)
6 |
7 | **Mailing List:** [kubernetes-sig-service-catalog](https://groups.google.com/forum/#!forum/kubernetes-sig-service-catalog)
8 |
9 | **Meetings:** [Mondays, 4pm Eastern / 1pm Pacific time](https://zoom.us/j/7201225346)
10 |
11 | [**Meeting Agenda**](http://goo.gl/A0m24V)
12 |
13 | ### SIG Mission
14 |
15 | Mission: to develop a Kubernetes API for the CNCF service broker and Kubernetes broker implementation.
16 |
17 | Incubator repo at [https://github.com/kubernetes-incubator/service-catalog](https://github.com/kubernetes-incubator/service-catalog).
--------------------------------------------------------------------------------
/sig-rktnetes/README.md:
--------------------------------------------------------------------------------
1 | # The Mission
2 | `sig-rktnetes` is focused on topics related to the integration between Kubernetes and the rkt container runtime, more succinctly referred to as "rktnetes".
3 |
4 | This includes:
5 | - Kubernetes and rkt integration
6 | - Performance and scalability
7 | - Images, package management
8 | - Rktnetes deployments
9 | - Issues related to monitoring (cAdvisor, Heapster, etc.)
10 |
11 | The main objective of this SIG is to establish rkt as the go-to/default runtime choice for a Kubernetes cluster.
12 |
13 | # Resources
14 |
15 | [Meeting Notes](https://docs.google.com/document/d/1otDQ2LSubtBUaDfdM8ZcSdWkqRHup4Hqt1VX1jSxh6A/edit?usp=sharing)
16 |
17 | [Zoom Meeting](https://zoom.us/j/830298957)
18 |
19 | [Google Group](https://groups.google.com/forum/#!forum/kubernetes-sig-rktnetes)
20 |
21 | Mailing List: kubernetes-sig-rktnetes@googlegroups.com
22 |
--------------------------------------------------------------------------------
/sig-apps/minutes/2016-05-18.md:
--------------------------------------------------------------------------------
1 | # May 18, 2016
2 |
3 | * The first SIG Apps meeting
4 | * Intro from Michelle
5 | * Discussion on the future of the SIG:
6 | * Mike from Rackspace offered to do a demo of the recursive functionality ([issue](https://github.com/kubernetes/kubernetes/pull/25110))
7 | * Idea: solicit the community for cases where their use cases aren't met.
8 | * Demo from Prashanth B on PetSets ([issue](https://github.com/kubernetes/kubernetes/issues/260))
9 | * Supposed to make deploying and managing stateful apps easier. Will be alpha in 1.3.
10 | * Zookeeper, mysql, cassandra are example apps to run in this
11 | * A PetSet groups the elements that make up the identity of a thing. Initially storage and networking (including name)
12 | * Scale up and down is handled differently as well.
13 | * Init containers just went in. Right now most containers start at same time. Init containers let you do one first to initialize the environment.
14 |
--------------------------------------------------------------------------------
/sig-scheduling/README.md:
--------------------------------------------------------------------------------
1 | # sig-scheduling
2 |
3 | ## Organizers
4 |
5 | - [David Oppenheimer](https://github.com/davidopp) / Google
6 | - [Tim St. Clair](https://github.com/timothysc) / RedHat
7 |
8 | ## Meetings
9 |
10 | - [Meeting agenda](https://docs.google.com/document/d/13mwye7nvrmV11q9_Eg77z-1w3X7Q1GTbslpml4J7F3A/edit)
11 | Add your topics here. This doc lists regular meeting times and contact details
12 | for joining the video conference. _NOTE: Since meetings are occasionally
13 | canceled due to lack of content, if you have topics to discuss please also
14 | write a message to the list._
15 | - [Meeting link](https://zoom.us/zoomconference?m=rN2RrBUYxXgXY4EMiWWgQP6Vslgcsn86)
16 |
17 | ## Communication
18 |
19 | - [Slack](https://kubernetes.slack.com/messages/sig-scheduling) ([archive](http://kubernetes.slackarchive.io/sig-scheduling))
20 | - [Google Group](https://groups.google.com/forum/#!forum/kubernetes-sig-scheduling)
21 |
--------------------------------------------------------------------------------
/sig-cluster-ops/README.md:
--------------------------------------------------------------------------------
1 | SIG Cluster Ops
2 | ===============
3 |
4 | [Meeting Notes](https://docs.google.com/document/d/1IhN5v6MjcAUrvLd9dAWtKcGWBWSaRU8DNyPiof3gYMY/edit#)
5 |
6 | Meeting Time: Thursday @ 1 PM PT (so we can end the week with Ops)
7 |
8 | [Google Group](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-ops)
9 |
10 | [Slack](https://kubernetes.slack.com/messages/sig-cluster-ops/)
11 |
12 | Co-leaders
13 | * Rob Hirschfeld (@zehicle, rob@rackn.com)
14 | * Mike Danese (@mikedanese)
15 |
16 | Mission
17 | -------
18 |
19 | The SIG Cluster Ops mission is to promote operability and interoperability of Kubernetes clusters. We focus on shared operations practices for Kubernetes clusters with a goal to make kubernetes broadly accessible with a common baseline reference. We also organize operators as a sounding board and advocacy group.
20 |
21 |
22 | [Hangouts](https://plus.google.com/hangouts/_/google.com/sig-cluster-ops), every Thursday at 1PM PST
23 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing guidelines
2 |
3 | This project is for documentation about the community. To contribute to one of
4 | the Kubernetes projects please see the contribution guide for that project.
5 |
6 | ## How To Contribute
7 |
8 | The contributions here follow a [pull request](https://help.github.com/articles/using-pull-requests/) model with some additional process.
9 | The process is as follows:
10 |
11 | 1. Submit a pull request with the requested change.
12 | 2. Another person, other than a Special Interest Group (SIG) owner, can mark it Looks Good To Me (LGTM) upon successful review. Otherwise feedback can be given.
13 | 3. A SIG owner can merge someone else's change into their SIG documentation immediately.
14 | 4. Someone cannot immediately merge their own change. To merge your own change, wait 24 hours during the week or 72 hours over a weekend. This allows others the opportunity to review a change.
15 |
16 | _Note, the SIG Owners decide on the layout for their own sub-directory structure._
17 |
--------------------------------------------------------------------------------
/contributors/design-proposals/clustering/Dockerfile:
--------------------------------------------------------------------------------
1 | # Copyright 2016 The Kubernetes Authors.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | #     http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | FROM debian:jessie
16 |
17 | RUN apt-get update
18 | RUN apt-get -qy install python-seqdiag make curl
19 |
20 | WORKDIR /diagrams
21 |
22 | RUN curl -sLo DroidSansMono.ttf https://googlefontdirectory.googlecode.com/hg/apache/droidsansmono/DroidSansMono.ttf
23 |
24 | ADD . /diagrams
25 |
26 | CMD bash -c 'make >/dev/stderr && tar cf - *.png'
--------------------------------------------------------------------------------
/contributors/design-proposals/clustering/dynamic.seqdiag:
--------------------------------------------------------------------------------
1 | seqdiag {
2 |   activation = none;
3 |
4 |
5 |   user[label = "Admin User"];
6 |   bootstrap[label = "Bootstrap API\nEndpoint"];
7 |   master;
8 |   kubelet[stacked];
9 |
10 |   user -> bootstrap [label="createCluster", return="cluster ID"];
11 |   user <-- bootstrap [label="returns\n- bootstrap-cluster-uri"];
12 |
13 |   user ->> master [label="start\n- bootstrap-cluster-uri"];
14 |   master => bootstrap [label="setMaster\n- master-location\n- master-ca"];
15 |
16 |   user ->> kubelet [label="start\n- bootstrap-cluster-uri"];
17 |   kubelet => bootstrap [label="get-master", return="returns\n- master-location\n- master-ca"];
18 |   kubelet ->> master [label="signCert\n- unsigned-kubelet-cert", return="returns\n- kubelet-cert"];
19 |   user => master [label="getSignRequests"];
20 |   user => master [label="approveSignRequests"];
21 |   kubelet <<-- master [label="returns\n- kubelet-cert"];
22 |
23 |   kubelet => master [label="register\n- kubelet-location"]
24 | }
25 |
--------------------------------------------------------------------------------
/sig-apps/minutes/2016-06-29.md:
--------------------------------------------------------------------------------
1 | # June 29, 2016
2 |
3 | - Bart Spaans gave a demo and talked about [KubeFuse](https://github.com/opencredo/kubefuse)
4 | - Work with your Kubernetes cluster like it's a file system
5 | - You can also edit resources and update them in-cluster on writes
6 | - Find more details in this [post](https://opencredo.com/introducing-kubefuse-file-system-kubernetes/)
7 | - Contact Bart on [Twitter](https://twitter.com/Work_of_Bart): @Work_of_Bart
8 | - The DRUD team showed off how they use Kubernetes Jobs
9 | - DRUD is a Drupal and WordPress management platform built on Kubernetes
10 | - KubeJobWatcher can be found [here](https://github.com/drud/KubeJobWatcher)
11 | - For help or questions, reach out to Erin at erin@newmediadenver.com
12 | - Matt Farina is looking for people to help analyze the data from the sig-apps survey. Volunteers are welcome to reach out to him on [Twitter](http://twitter.com/mattfarina) or Slack: @mattfarina.
13 |
14 | Watch the [recording](https://youtu.be/6wA_YdgWL8o).
15 |
--------------------------------------------------------------------------------
/sig-docs/README.md:
--------------------------------------------------------------------------------
1 | # SIG Docs
2 |
3 | A Special Interest Group for documentation, doc processes, and doc publishing for Kubernetes.
4 |
5 | ## Meeting:
6 | * Meetings: Tuesdays @ 10:30AM PST
7 | * Zoom Link: https://zoom.us/j/4730809290
8 | * Check out the [Agenda and Minutes](https://docs.google.com/document/d/1Ds87eRiNZeXwRBEbFr6Z7ukjbTow5RQcNZLaSvWWQsE/edit)
9 |
10 | ## Comms:
11 | * Slack: [kubernetes.slack.com #sig-docs](https://kubernetes.slack.com/messages/sig-docs/)
12 | * Google Group: [kubernetes-sig-docs@googlegroups.com](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
13 |
14 | ## Goals:
15 | * Discuss documentation and docs issues for kubernetes
16 | * Plan docs releases for k8s
17 | * Suggest improvements to developer onboarding where we see friction
18 | * Identify and implement ways to get documentation feedback and metrics
19 | * Help people get involved in the kubernetes community
20 |
21 | ## Organizers:
22 | * Jared Bhatti , Google
23 | * Devin Donnelly , Google
24 |
--------------------------------------------------------------------------------
/sig-ui/README.md:
--------------------------------------------------------------------------------
1 | # SIG UI
2 | All things Kubernetes UI related.
3 |
4 | Efforts are centered around [Kubernetes Dashboard](https://github.com/kubernetes/dashboard): a general purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself.
5 |
6 |
7 | #### Leads:
8 | * Piotr Bryk (@bryk, _Google_) and Dan Romlein (@danielromlein, _Apprenda_)
9 |
10 | #### Meetings:
11 | * Wednesdays 4PM CEST
12 | * [Google Group / mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)
13 | * [Meeting Notes](https://docs.google.com/document/d/1PwHFvqiShLIq8ZpoXvE3dSUnOv1ts5BTtZ7aATuKd-E/edit?usp=sharing)
14 | * [Calendar](https://calendar.google.com/calendar/embed?src=google.com_52lm43hc2kur57dgkibltqc6kc%40group.calendar.google.com&ctz=Europe/Warsaw)
15 | * [Zoom Video Call](https://zoom.us/j/863303289)
16 |
17 | #### Slack:
18 | * [#sig-ui](https://kubernetes.slack.com/messages/sig-ui/)
19 | * [Archive](http://kubernetes.slackarchive.io/sig-ui/)
20 |
--------------------------------------------------------------------------------
/sig-apps/minutes/2016-06-22.md:
--------------------------------------------------------------------------------
1 | # June 22, 2016
2 |
3 | - Michelle Noorali gave a quick introduction
4 | - Antoine Legrand did a demo of [KPM](https://github.com/kubespray/kpm)
5 | - Matt Farina gave an update on the SIG-apps survey
6 | - The survey ends this Friday, so there are only two more days to collect data.
7 | - The survey can be found [here](http://goo.gl/forms/VFbJ0fjJYYSEgwND2).
8 | - Please help us share the survey and get as much feedback on running apps in kubernetes as possible!
9 | - If you'd like to help analyze the data, please contact @mattfarina.
10 | - We discussed topics of interest with the group.
11 | - The sig-apps members are interested in seeing some standardization when it comes to deploying, managing, and operating applications in Kubernetes. There was discussion around putting together a post or some documentation about common trends to try to bring them to light.
12 | - There was also interest specifically around how people are actually using Kubernetes Jobs and seeing real-world examples of that.
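For reference, a minimal Job of the kind discussed might look like the sketch below (hypothetical name, image, and command; not an example shown in the meeting):

```sh
# Create a minimal batch/v1 Job that runs one container to completion.
# All names and the image here are placeholders.
kubectl create -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    metadata:
      name: example-job
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one work item"]
      restartPolicy: Never
EOF
# Inspect it afterwards with: kubectl get jobs example-job
```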
13 |
--------------------------------------------------------------------------------
/sig-openstack/SIG-members.md:
--------------------------------------------------------------------------------
1 | List of the OpenStack Special Interest Group team members.
2 |
3 | Use @kubernetes/sig-openstack to mention this team in comments.
4 |
5 | * [David Oppenheimer](https://github.com/davidopp) (owner)
6 | * [Steve Gordon](https://github.com/xsgordon) (SIG lead)
7 | * [Ihor Dvoretskyi](https://github.com/idvoretskyi) (SIG lead)
8 | * [Angus Lees](https://github.com/anguslees)
9 | * [Pengfei Ni](https://github.com/feiskyer)
10 | * [Joshua Harlow](https://github.com/harlowja)
11 | * [Stephen McQuaid](https://github.com/stevemcquaid)
12 | * [Huamin Chen](https://github.com/rootfs)
13 | * [David F. Flanders](https://github.com/DFFlanders)
14 | * [Davanum Srinivas](https://github.com/dims)
15 | * [Egor Guz](https://github.com/eghobo)
16 | * [Flavio Percoco Premoli](https://github.com/flaper87)
17 | * [Hongbin Lu](https://github.com/hongbin)
18 | * [Louis Taylor](https://github.com/kragniz)
19 | * [Jędrzej Nowak](https://github.com/pigmej)
20 | * [rohitagarwalla](https://github.com/rohitagarwalla)
21 | * [Russell Bryant](https://github.com/russellb)
22 |
--------------------------------------------------------------------------------
/sig-federation/README.md:
--------------------------------------------------------------------------------
1 | This is a SIG focused on Federation of Kubernetes Clusters ("Ubernetes") and related topics. This includes:
2 |
3 | * Application resiliency against availability zone outages
4 | * Hybrid clouds
5 | * Spanning of multiple cloud providers
6 | * Application migration from private to public clouds (and vice versa)
7 | * ... and other similar subjects.
8 |
9 | ## Meetings:
10 | * Bi-weekly on Mondays @ 9am [America/Los_Angeles](http://time.is/Los_Angeles) (check [the calendar](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles))
11 | * Hangouts link:
12 | * [Working Group Notes](https://docs.google.com/document/d/1r0YElcXt6G5mOWxwZiXgGu_X6he3F--wKwg-9UBc29I/edit?usp=sharing)
13 |
14 | ## Communication:
15 | * Slack: ([archive](http://kubernetes.slackarchive.io/sig-federation))
16 | * Google Group:
17 |
--------------------------------------------------------------------------------
/sig-openstack/README.md:
--------------------------------------------------------------------------------
1 | # OpenStack SIG
2 |
3 | This is the wiki page of the Kubernetes OpenStack SIG: a special interest group
4 | co-ordinating contributions of OpenStack-related changes to Kubernetes.
5 |
6 | Relevant [Issues](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen%20label%3Asig%2Fopenstack%20is%3Aissue)
7 | and [Pull Requests](https://github.com/kubernetes/kubernetes/pulls?q=is%3Aopen%20is%3Apr%20label%3Asig%2Fopenstack)
8 | are tagged with the **sig-openstack** label.
9 |
10 | **Leads:** Steve Gordon (@xsgordon) and Ihor Dvoretskyi (@idvoretskyi)
11 |
12 | **Slack Channel:** [#sig-openstack](https://kubernetes.slack.com/messages/sig-openstack/). [Archive](http://kubernetes.slackarchive.io/sig-openstack/)
13 |
14 | **Mailing List:** [kubernetes-sig-openstack](https://groups.google.com/forum/#!forum/kubernetes-sig-openstack)
15 |
16 | **Meetings:** Every second Wednesday at 2100 UTC (2 PM PDT / 5 PM EDT) - [Connect using Zoom](https://zoom.us/j/417251241), [Agenda/Notes](https://docs.google.com/document/d/1iAQ3LSF_Ky6uZdFtEZPD_8i6HXeFxIeW4XtGcUJtPyU/edit#)
17 |
18 |
--------------------------------------------------------------------------------
/sig-apps/minutes/2016-06-15.md:
--------------------------------------------------------------------------------
1 | # June 15, 2016
2 |
3 | - Michelle Noorali gave an introduction
4 | - Ben Hall, founder of katacoda, did a demo of [katacoda](https://www.katacoda.com/)
5 | - Katacoda is an interactive learning platform for software engineers.
6 | - Ben showed off the current lessons on Kubernetes which are available now.
7 | - Ben also talked about how companies/people can make their own interactive lessons and embed them.
8 | - Contact Ben on [Twitter](https://twitter.com/Ben_Hall): @Ben_Hall
9 | - Eric Chiang demoed new Role Based Access Control features and capabilities in Kubernetes 1.3.
10 | - This demo was extensive and very helpful! Check out the [recording](https://www.youtube.com/watch?v=97VMYjfjWyg&list=PLI1CvzwXi1cUVsxm2QBIyJgeCkf62ylun&index=2) for the full scoop.
11 | - You can find the demo script/setup/example from the meeting in Eric's github repo [here](https://github.com/ericchiang/coreos-kubernetes/tree/sig-apps-demo/single-node).
12 | - Michelle Noorali shared the sig-apps survey which can be found [here](http://goo.gl/forms/n7vcisBv3l9IuTQr1).
13 |
14 | Watch the [recording](https://www.youtube.com/watch?v=97VMYjfjWyg&list=PLI1CvzwXi1cUVsxm2QBIyJgeCkf62ylun&index=2).
15 |
--------------------------------------------------------------------------------
/contributors/design-proposals/clustering/README.md:
--------------------------------------------------------------------------------
1 | This directory contains diagrams for the clustering design doc.
2 |
3 | This depends on the `seqdiag` [utility](http://blockdiag.com/en/seqdiag/index.html).
4 | Assuming you have a non-borked python install, this should be installable with:
5 |
6 | ```sh
7 | pip install seqdiag
8 | ```
9 |
10 | Just call `make` to regenerate the diagrams.
11 |
12 | ## Building with Docker
13 |
14 | If you are on a Mac or your pip install is messed up, you can easily build with
15 | docker:
16 |
17 | ```sh
18 | make docker
19 | ```
20 |
21 | The first run will be slow but things should be fast after that.
22 |
23 | To clean up the docker containers that are created (and other cruft that is left
24 | around) you can run `make docker-clean`.
25 |
26 | ## Automatically rebuild on file changes
27 |
28 | If you have the fswatch utility installed, you can have it monitor the file
29 | system and automatically rebuild when files have changed. Just do a
30 | `make watch`.
31 |
32 |
33 |
34 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/clustering/README.md?pixel)]()
35 |
--------------------------------------------------------------------------------
/sig-aws/README.md:
--------------------------------------------------------------------------------
1 | # SIG AWS
2 |
3 | A Special Interest Group for maintaining, supporting, and using Kubernetes on AWS.
4 | All things K8s on AWS. We meet twice a month.
5 |
6 | ## Meeting:
7 | - Meetings: First and Third Friday of the month at 9:00AM PST for one hour
8 | - Zoom Link: [SIG AWS](https://zoom.us/my/k8ssigaws)
9 |
10 | ## Organizers:
11 |
12 | | Name | Contact |
13 | | ------------- | ------------- |
14 | | Justin Santa Barbara | [justinsb](https://github.com/justinsb) |
15 | | Chris Love | [chrislovecnm](https://github.com/chrislovecnm) |
16 | | Kris Childress | [kris-nova](https://github.com/kris-nova) |
17 | | Mackenzie Burnett | [mfburnett](https://github.com/mfburnett) |
18 |
19 | The meeting is open to all and we encourage you to join. Feel free to dial into the zoom call at your convenience.
20 |
21 | If you would like to be on the official calendar invite, ping one of the organizers.
22 |
23 | ## Agenda
24 |
25 | ### October 21st, 2016
26 |
27 | - Introductions
28 | - Future of Kubernetes on AWS: a quick overview of where K8s is going
29 | - HA Kubernetes on AWS: a quick demo and roundtable
30 | - Setting our future: what do we want to talk about
31 | - Kubernetes Blockers on AWS: what is blocking you today on AWS
32 |
--------------------------------------------------------------------------------
/sig-scalability/README.md:
--------------------------------------------------------------------------------
1 | # Scalability SIG
2 |
3 | **Leads:** Bob Wise (@countspongebob) and Joe Beda (@jbeda)
4 |
5 | **Slack Channel:** [#sig-scale](https://kubernetes.slack.com/messages/sig-scale/). [Archive](http://kubernetes.slackarchive.io/sig-scale/)
6 |
7 | **Mailing List:** [kubernetes-sig-scale](https://groups.google.com/forum/#!forum/kubernetes-sig-scale)
8 |
9 | **Meetings:** Thursdays at 9am Pacific. Contact Joe or Bob for invite. [Notes](https://docs.google.com/a/bobsplanet.com/document/d/1hEpf25qifVWztaeZPFmjNiJvPo-5JX1z0LSvvVY5G2g/edit?usp=drive_web)
10 |
11 |
12 | **Docs:**
13 | [Scaling And Performance Goals](goals.md)
14 |
15 | ### Scalability SLAs
16 |
17 | We officially support two different SLAs:
18 |
19 | 1. "API-responsiveness":
20 |    99% of all API calls return in less than 1s
21 |
22 | 1. "Pod startup time":
23 |    99% of pods (with pre-pulled images) start within 5s
24 |
25 | This should be valid on appropriate hardware up to a 1000-node cluster with 30 pods/node. We eventually want to expand that to 100 pods/node.
26 |
27 | For more details on how we measure those, see: http://blog.kubernetes.io/2015_09_01_archive.html
28 |
29 | In the future we may want to add more SLAs (e.g. scheduler throughput), but we are not there yet.
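As a rough client-side sanity check of the API-responsiveness SLA, something like the following can be used (a sketch only, assuming `kubectl proxy` is listening on localhost:8001; this is not the official measurement methodology from the blog post above):

```sh
# Sample 100 list calls against the apiserver and print the (approximate)
# 99th-percentile latency in seconds.
for i in $(seq 1 100); do
  curl -o /dev/null -s -w '%{time_total}\n' http://localhost:8001/api/v1/pods
done | sort -n | awk 'NR == 99'
```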
30 |
--------------------------------------------------------------------------------
/sig-apps/minutes/2016-07-13.md:
--------------------------------------------------------------------------------
1 | # July 13, 2016
2 |
3 | - Michelle Noorali gave an introduction and overview of the agenda
4 | - Christian Simon from JetStack gave a demo of [Kube-Lego](https://github.com/jetstack/kube-lego)
5 | - Kube-Lego automatically requests certificates for Kubernetes Ingress resources from Let's Encrypt
6 | - Here are the [slides](https://docs.google.com/presentation/d/1t9FhmkGjchBmVzIlbGdfj_aSHcIRdg9qxBtIWIWdbnY/edit?usp=sharing)
7 | - Matt Farina gave an update on the analysis of the sig-apps survey
8 | - He has formed a team and should have results from the survey soon
9 | - A blog post with the results will follow
10 | - Clayton Coleman called attention to the fact that there are many open discussions on PetSet
11 | - Now is the time to get involved in key decision-making conversations, especially if you have already implemented a PetSet
12 | - Follow along in the ["Pet Set in Beta" issue (#28718)](https://github.com/kubernetes/kubernetes/issues/28718#issuecomment-233291118) which also points to other relevant PetSet topics.
13 | - This is the issue for [Pet Set upgrades (#28706)](https://github.com/kubernetes/kubernetes/issues/28706#issuecomment-232831791)
14 |
15 |
16 | Watch the [recording](https://youtu.be/Cktqa_44DHs?list=PLI1CvzwXi1cUVsxm2QBIyJgeCkf62ylun).
17 |
--------------------------------------------------------------------------------
/contributors/devel/bazel.md:
--------------------------------------------------------------------------------
1 | # Build with Bazel
2 |
3 | Building with bazel is currently experimental. Automanaged BUILD rules have the
4 | tag "automanaged" and are maintained by
5 | [gazel](https://github.com/mikedanese/gazel). Instructions for installing bazel
6 | can be found [here](https://www.bazel.io/versions/master/docs/install.html).
7 |
8 | To build docker images for the components, run:
9 |
10 | ```
11 | $ bazel build //build/...
12 | ```
13 |
14 | To run many of the unit tests, run:
15 |
16 | ```
17 | $ bazel test //cmd/... //build/... //pkg/... //federation/... //plugin/...
18 | ```
19 |
20 | To update automanaged build files, run:
21 |
22 | ```
23 | $ ./hack/update-bazel.sh
24 | ```
25 |
26 | **NOTES**: `update-bazel.sh` only works if the checkout directory of Kubernetes is "$GOPATH/src/k8s.io/kubernetes".
27 |
28 | To update a single build file, run:
29 |
30 | ```
31 | $ # get gazel
32 | $ go get -u github.com/mikedanese/gazel
33 | $ # e.g. ./pkg/kubectl/BUILD
34 | $ gazel -root="${YOUR_KUBE_ROOT_PATH}" ./pkg/kubectl
35 | ```
36 |
37 | Updating the BUILD file for a package will be required when:
38 | * Files are added to or removed from a package
39 | * Import dependencies change for a package
40 |
41 |
42 |
43 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/bazel.md?pixel)]()
44 |
--------------------------------------------------------------------------------
/sig-auth/README.md:
--------------------------------------------------------------------------------
1 | Kubernetes SIG-Auth
2 | Kubernetes Special Interest Group for Authentication and Authorization
3 |
4 |
5 | Goals for this SIG:
6 | Discuss improvements to Kubernetes Authorization and Authentication, and cluster security policy.
7 | Not in scope for this SIG:
8 | To report specific vulnerabilities in Kubernetes, please report them using these instructions: http://kubernetes.io/v1.1/docs/reporting-security-issues.html
9 |
10 | General discussion of Linux security, or of containers, is better directed to a non-Kubernetes mailing list.
11 |
12 | Proactive or general security discussion about Kubelet should go to kubernetes-sig-node@googlegroups.com.
13 |
14 | Proactive or general security discussion about the API server should go to kubernetes-sig-api-machinery@googlegroups.com.
15 |
16 |
17 |
18 | Links/info:
19 | * Mailing list: https://groups.google.com/forum/#!pendingmember/kubernetes-sig-auth/apply
20 | * Github team: https://github.com/orgs/kubernetes/teams/sig-auth (Use ‘@kubernetes/sig-auth’ on github to notify team members.)
21 | * Slack channel: https://kubernetes.slack.com/messages/sig-auth/
22 | * Meeting frequency & time: Biweekly on Wednesdays, 11am Pacific (see agenda for which Wednesdays)
23 | * Agenda & meeting notes: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/edit#
24 | * Hangout link: [https://zoom.us/my/k8s.sig.auth](https://zoom.us/my/k8s.sig.auth)
--------------------------------------------------------------------------------
/sig-cli/README.md:
--------------------------------------------------------------------------------
1 | # SIG CLI
2 |
3 | The Kubernetes Command Line Interface SIG (sig-cli) is a working group for the contributor community interested in `kubectl` and related tools.
4 |
5 | We focus on the development and standardization of the CLI [framework](https://github.com/kubernetes/kubernetes/tree/master/pkg/kubectl) and its dependencies, the establishment of [conventions](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/kubectl-conventions.md) for writing CLI commands, POSIX compliance, and improving the command line tools from a developer and devops user experience and usability perspective.
6 |
7 | ## Meetings:
8 | * Bi-weekly on Wednesdays @ 9am [America/Los_Angeles](http://time.is/Los_Angeles) (check [the calendar](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles))
9 | * Zoom Link:
10 | * Meeting [Agenda and Minutes](https://docs.google.com/document/d/1r0YElcXt6G5mOWxwZiXgGu_X6he3F--wKwg-9UBc29I/edit?usp=sharing)
11 |
12 | ## Communication:
13 | * Slack: ([archive](http://kubernetes.slackarchive.io/sig-cli))
14 | * Google Group:
15 |
16 | ## Organizers:
17 | * Fabiano Franz , Red Hat
18 | * Phillip Wittrock , Google
19 | * Tony Ado , Alibaba
--------------------------------------------------------------------------------
/contributors/design-proposals/clustering/Makefile:
--------------------------------------------------------------------------------
1 | # Copyright 2016 The Kubernetes Authors.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | #     http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
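#
# Targets:
#   all (default) - render every *.seqdiag file in this directory to a .png
#   watch         - rebuild automatically when a *.seqdiag file changes
#                   (requires fswatch)
#   docker        - run the same build inside the clustering-seqdiag image
#                   built from the local Dockerfile, then untar the PNGs
#   docker-clean  - remove the image and any dangling layers
#
# The DroidSansMono.ttf font is downloaded on demand (and git-ignored).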
14 |
15 | FONT := DroidSansMono.ttf
16 |
17 | PNGS := $(patsubst %.seqdiag,%.png,$(wildcard *.seqdiag))
18 |
19 | .PHONY: all
20 | all: $(PNGS)
21 |
22 | .PHONY: watch
23 | watch:
24 | 	fswatch *.seqdiag | xargs -n 1 sh -c "make || true"
25 |
26 | $(FONT):
27 | 	curl -sLo $@ https://googlefontdirectory.googlecode.com/hg/apache/droidsansmono/$(FONT)
28 |
29 | %.png: %.seqdiag $(FONT)
30 | 	seqdiag --no-transparency -a -f '$(FONT)' $<
31 |
32 | # Build the stuff via a docker image
33 | .PHONY: docker
34 | docker:
35 | 	docker build -t clustering-seqdiag .
36 | 	docker run --rm clustering-seqdiag | tar xvf -
37 |
38 | .PHONY: docker-clean
39 | docker-clean:
40 | 	docker rmi clustering-seqdiag || true
41 | 	docker images -q --filter "dangling=true" | xargs docker rmi
42 |
--------------------------------------------------------------------------------
/contributors/devel/client-libraries.md:
--------------------------------------------------------------------------------
1 | ## Kubernetes API client libraries
2 |
3 | ### Supported by [Kubernetes SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery)
4 |
5 | * [Go](https://github.com/kubernetes/client-go)
6 | * [Python](https://github.com/kubernetes-incubator/client-python)
7 |
8 | ### User Contributed
9 |
10 | *Note: Libraries provided by outside parties are supported by their authors, not
11 | the core Kubernetes team*
12 |
13 | * [Clojure](https://github.com/yanatan16/clj-kubernetes-api)
14 | * [Java (OSGi)](https://bitbucket.org/amdatulabs/amdatu-kubernetes)
15 | * [Java (Fabric8, OSGi)](https://github.com/fabric8io/kubernetes-client)
16 | * [Node.js](https://github.com/tenxcloud/node-kubernetes-client)
17 | * [Node.js](https://github.com/godaddy/kubernetes-client)
18 | * [Perl](https://metacpan.org/pod/Net::Kubernetes)
19 | * [PHP](https://github.com/devstub/kubernetes-api-php-client)
20 | * [PHP](https://github.com/maclof/kubernetes-client)
21 | * [Python](https://github.com/eldarion-gondor/pykube)
22 | * [Ruby](https://github.com/Ch00k/kuber)
23 | * [Ruby](https://github.com/abonas/kubeclient)
24 | * [Scala](https://github.com/doriordan/skuber)
25 |
26 |
27 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/client-libraries.md?pixel)]()
28 |
--------------------------------------------------------------------------------
/sig-apps/minutes/2016-08-03.md:
--------------------------------------------------------------------------------
1 | # August 03, 2016
2 |
3 | * Intro by Michelle
4 | * [Ryan J](https://twitter.com/ryanj?lang=en) talked about "Defining Applications in Kubernetes (and OpenShift)".
5 | * His presentation can be found [here](http://bit.ly/sig-apps-openshift) and includes lots of helpful links.
6 | * He went through some OpenShift primitives that are not in Kubernetes but may show up soon if the community sees a need.
7 | * There was some in-depth discussion about Deployments.
8 | * Kubernetes has some concepts that came out of OpenShift Deployment Config.
9 | * Q: What features are in Deployment Config that are high priority to push into Kubernetes Core at the moment?
10 | A: _(Clayton)_ Yes: handling deployment failures at a high level, a generic idea for a trigger controller that watches another system for changes and makes updates to a Deployment, and hooks.
11 | * Ryan showed off `oc`, a command line tool that is a wrapper for kubectl 12 | * Comment: One of the challenges Kubernetes faces today is that there is not a great way to extensibly pull in new chunks of APIs. 13 | * This is something that is actively being worked on today. This work is being discussed and worked on in [SIG-API-Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery) 14 | * Free O'Reilly EBooks can be found [here](http://gist-reveal.it/4ca683dff6cdb9601c495e27d4bb5289#/oreilly-ebooks) courtesy of Red Hat. 15 | 16 | 17 | Watch the [recording](https://youtu.be/8Gn44O6hSCw) 18 | -------------------------------------------------------------------------------- /CLA.md: -------------------------------------------------------------------------------- 1 | ### How do I sign the CNCF CLA? 2 | 3 | * To sign up as an individual or as an employee of a signed organization, go to https://identity.linuxfoundation.org/projects/cncf 4 | * To sign up as an organization, go to https://identity.linuxfoundation.org/node/285/organization-signup 5 | * To review the CNCF CLA, go to https://github.com/cncf/cla 6 | 7 | *** 8 | 9 | ### After you select one of the options above, please follow the instructions below: 10 | 11 | **Step 1**: You must sign in with GitHub. 12 | 13 | **Step 2**: If you are signing up as an employee, you must use your official person@organization.domain email address in the CNCF account registration page. 14 | 15 | 16 | **Step 3**: The email you use on your commits (https://help.github.com/articles/setting-your-email-in-git/) must match the email address you use when signing up for the CNCF account. 17 | 18 | ![CNCFCLA](http://i.imgur.com/tEk2x3j.png) 19 | ![CNCFCLA2](http://i.imgur.com/t3WAtrz.png) 20 | 21 | 22 | **Step 4**: Once the CLA sent to your email address is signed (or your email address is verified in case your organization has signed the CLA), you should be able to check that you are authorized in any new PR you create. 23 | 24 | ![CNCFCLA3](http://i.imgur.com/C5ZsNN6.png) 25 | 26 | **Step 5**: The status on your old PRs will be updated when any new comment is made on them. 27 | 28 | ### I'm having issues with signing the CLA. 29 | 30 | If you're facing difficulty with signing the CNCF CLA, please explain your case on https://github.com/kubernetes/kubernetes/issues/27796 and we (@sarahnovotny and @foxish), along with the CNCF, will help sort it out. 31 | -------------------------------------------------------------------------------- /sig-apps/README.md: -------------------------------------------------------------------------------- 1 | # SIG Apps 2 | 3 | A Special Interest Group for deploying and operating applications in Kubernetes. 4 | 5 | We focus on the developer and devops experience of running applications in Kubernetes. We discuss how to define and run apps in Kubernetes, demo relevant tools and projects, and discuss areas of friction that can lead to suggesting improvements or feature requests. 6 | 7 | ## Meeting: 8 | * Meetings: Mondays 9:00AM PST 9 | * Zoom Link: [https://zoom.us/my/sig.apps](https://zoom.us/j/4526666954) 10 | * Check out the [Agenda and Minutes](https://docs.google.com/document/d/1LZLBGW2wRDwAfdBNHJjFfk9CFoyZPcIYGWU7R1PQ3ng/edit#)! _Note: for meetings on or prior to August 3, 2016, see the minutes in [their prior location](minutes/)._ 11 | 12 | ## Goals: 13 | * Discuss running applications in k8s 14 | * Discuss how to define and run apps in k8s (APIs, CLIs, SDKs, package management tools, etc.)
15 | * Suggest k8s features where we see friction 16 | * Be the voice of the people running applications (developers and devops) in k8s development 17 | * Help people get involved in the kubernetes community 18 | * Show early features/demos of tools that make running apps easier 19 | 20 | ## Non-goals: 21 | * Our job is not to go implement stacks. We're helping people to help themselves. We will help connect people to the right folks, but we do not want to own a set of examples (as a group) 22 | * Do not endorse one particular tool 23 | * Do not pick which apps to run on top of the platform 24 | * Do not recommend one way to do things 25 | 26 | ## Organizers: 27 | * Michelle Noorali, Deis 28 | * Matt Farina, HPE 29 | -------------------------------------------------------------------------------- /community/developer-summit-2016/cluster_federation_notes.md: -------------------------------------------------------------------------------- 1 | # Cluster Federation 2 | 3 | There's a whole bunch of reasons why federation is interesting. There's HA, there's geographic locality, there's just managing very large clusters. Use cases: 4 | 5 | * HA 6 | * Hybrid Cloud 7 | * Geo/latency 8 | * Scalability (many large clusters instead of one gigantic one) 9 | * Visibility of multiple clusters 10 | 11 | You don't actually need federation for geo-location now, but it helps. The mental model for this is kind of like Amazon AZs or Google zones. Sometimes we don't care where a resource is, but sometimes we do. Sometimes you want specific policy control, like regulatory constraints about what can run where. 12 | 13 | From the enterprise point of view, central IT is in control of, and has knowledge of, where stuff gets deployed. Bob thinks it would be a very bad idea for us to try to solve complex policy ideas and enable them; it's a tar pit. We should just have the primitives of having different regions and be able to say what goes where. 14 | 15 | Currently, you either do node labelling, which ends up being complex and dependent on discipline, or you have different clusters and you don't have common namespaces. There was some discussion of the Intel proposal for cluster metadata. 16 | 17 | Bob's mental model is AWS regions and AZs. For example, if you're building a big Cassandra cluster, you want to make sure that its nodes aren't all in the same zone. 18 | 19 | Quinton went over a WIP implementation for applying policies, with a tool which applies policy before resource requests go to the scheduler. It uses an open-source policy language, and labels on the request. 20 | 21 | Notes interrupted here; hopefully other members will fill in. 22 | -------------------------------------------------------------------------------- /sig-network/README.md: -------------------------------------------------------------------------------- 1 | # SIG Network 2 | 3 | A Special Interest Group for networking in Kubernetes.
4 | 5 | ## Meetings 6 | 7 | * Meetings: Every other Thursday @ 14:00 Pacific time 8 | * [SIG Network Calendar](https://calendar.google.com/calendar/embed?src=ODhmZTFsM3FmbjJiNnIxMWs4dW01YW03NmNAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ) 9 | * Zoom Link: [https://zoom.us/j/5806599998](https://zoom.us/j/5806599998) 10 | * [Agenda and Minutes](https://docs.google.com/document/d/1_w77-zG_Xj0zYvEMfQZTQ-wPP4kXkpGD8smVtW_qqWM/edit) 11 | * Recorded meetings: TODO 12 | 13 | ## Google Group / Mailing List 14 | 15 | Join the [SIG network mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-network) 16 | 17 | Most discussion happens on the Google Group, and the biweekly meetings help resolve issues 18 | that come up in PRs or on the group more quickly. 19 | 20 | ## Areas of Responsibility 21 | 22 | SIG Network is responsible for the following Kubernetes subsystems: 23 | 24 | - DNS 25 | - Ingress 26 | - Network plugins / CNI 27 | - Network Policy 28 | - Services / kube-proxy 29 | 30 | SIG Network is responsible for a number of issues and PRs. A summary can be found through GitHub search: 31 | 32 | * [SIG Network PRs](https://github.com/issues?utf8=%E2%9C%93&q=team%3Akubernetes%2Fsig-network+is%3Aopen+is%3Apr+) 33 | * [SIG Network Issues](https://github.com/issues?utf8=%E2%9C%93&q=team%3A%22kubernetes%2Fsig-network%22+is%3Aopen+is%3Aissue) 34 | 35 | ## Documents 36 | 37 | * [2017 Planning](https://docs.google.com/document/d/1fBxC36UCBnqY_w3m3TjdnXFsIT--GS6HmKb5o0nhkTk/edit#) 38 | 39 | ## Organizers: 40 | 41 | * Casey Davenport, Tigera 42 | * Dan Williams, Red Hat 43 | * Tim Hockin, Google 44 | -------------------------------------------------------------------------------- /sig-storage/README.md: -------------------------------------------------------------------------------- 1 | The Kubernetes Storage Special Interest Group (SIG) is a working group within the Kubernetes contributor community interested in storage and volume plugins. 2 | 3 | ### Meeting 4 | We hold a public meeting every two weeks, on Thursdays at 9 AM (PST). 5 | * The meeting is held on Zoom: https://zoom.us/j/614261834 6 | * Agenda and any notes from each meeting are captured in [this document](https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit?usp=sharing). 7 | * Contact saadali@google.com to be added to the calendar invite. 8 | * Recordings of past meetings: https://www.youtube.com/playlist?list=PLb1_clREIGYVaqvKaUsVyXxjfcSQDBnmB 9 | 10 | ### Contributing 11 | Interested in contributing to storage features in Kubernetes? [Please read our guide for new contributors](https://github.com/kubernetes/community/blob/master/sig-storage/contributing.md) 12 | 13 | ### Links 14 | * Public Slack Channel: https://kubernetes.slack.com/messages/sig-storage/details/ 15 | * Get an invite to join here: http://slack.k8s.io/ 16 | * Google Group: https://groups.google.com/forum/#!forum/kubernetes-sig-storage 17 | * GitHub team: https://github.com/orgs/kubernetes/teams/sig-storage 18 | * GitHub issues: [link](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Aarea%2Fstorage) 19 | * Documentation for currently supported volume plugins: http://kubernetes.io/docs/user-guide/volumes/ 20 | * Code for Volume plugins can be found [here](https://github.com/kubernetes/kubernetes/tree/master/pkg/volume). 21 | * Code for volume controllers can be found [here](https://github.com/kubernetes/kubernetes/tree/master/pkg/controller/volume/).
22 | * Code for the Kubelet volume manager can be found [here](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/volumemanager/). 23 | -------------------------------------------------------------------------------- /contributors/design-proposals/kubelet-hypercontainer-runtime.md: -------------------------------------------------------------------------------- 1 | Kubelet HyperContainer Container Runtime 2 | ======================================= 3 | 4 | Authors: Pengfei Ni (@feiskyer), Harry Zhang (@resouer) 5 | 6 | ## Abstract 7 | 8 | This proposal aims to support the [HyperContainer](http://hypercontainer.io) container 9 | runtime in Kubelet. 10 | 11 | ## Motivation 12 | 13 | HyperContainer is a Hypervisor-agnostic Container Engine that allows you to run Docker images using 14 | hypervisors (KVM, Xen, etc.). By running containers within separate VM instances, it offers 15 | hardware-enforced isolation, which is required in multi-tenant environments. 16 | 17 | ## Goals 18 | 19 | 1. Complete pod/container/image lifecycle management with HyperContainer. 20 | 2. Set up networking via network plugins. 21 | 3. Pass 100% of node e2e tests. 22 | 4. Easy to deploy for both local dev/test and production clusters. 23 | 24 | ## Design 25 | 26 | The HyperContainer runtime will make use of the kubelet Container Runtime Interface. [Frakti](https://github.com/kubernetes/frakti) implements the CRI interface and exposes 27 | a local endpoint to Kubelet. Frakti communicates with [hyperd](https://github.com/hyperhq/hyperd) 28 | with its gRPC API to manage the lifecycle of sandboxes, containers and images. 29 | 30 | ![frakti](https://cloud.githubusercontent.com/assets/676637/18940978/6e3e5384-863f-11e6-9132-b638d862fd09.png) 31 | 32 | ## Limitations 33 | 34 | Since pods run directly inside a hypervisor, host networking is not supported in the HyperContainer 35 | runtime. 36 | 37 | ## Development 38 | 39 | The HyperContainer runtime is maintained by . 40 | 41 | 42 | 43 | 44 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/kubelet-hypercontainer-runtime.md?pixel)]() 45 | 46 | -------------------------------------------------------------------------------- /sig-testing/README.md: -------------------------------------------------------------------------------- 1 | # sig-testing 2 | 3 | The Kubernetes Testing SIG (sig-testing) is a working group within the Kubernetes contributor community interested in how we can most effectively test Kubernetes. We're interested specifically in making it easier for the community to run tests and contribute test results, to ensure Kubernetes is stable across a variety of cluster configurations and cloud providers. 4 | 5 | ## video conference 6 | 7 | We meet weekly on Tuesdays at 9:30am PDT (16:30 UTC) at [this zoom room](https://zoom.us/j/553910341) 8 | 9 | ## agenda 10 | 11 | We use [a public google doc](https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk) to track proposed agenda items, as well as take notes during meetings. 12 | 13 | The agenda is open for comment. Please contact the organizers listed below if you'd like to propose a topic. Typically, in the absence of anything formal, we poll attendees for topics and discuss tactical work.
14 | 15 | ## slack 16 | 17 | [#sig-testing](https://kubernetes.slack.com/messages/sig-testing/) on kubernetes.slack.com 18 | Sign up for access at http://slack.kubernetes.io/ 19 | 20 | ## github 21 | 22 | - [our github team: @kubernetes/sig-testing](https://github.com/orgs/kubernetes/teams/sig-testing) 23 | - [issues mentioning @kubernetes/sig-testing](https://github.com/issues?q=is%3Aopen+team%3Akubernetes%2Fsig-testing) 24 | 25 | We use the @kubernetes/sig-testing team to notify SIG members of particular issues or PRs of interest. If you would like to be added to this team, please contact the organizers listed below. 26 | 27 | ## google group 28 | 29 | https://groups.google.com/forum/#!forum/kubernetes-sig-testing (FWIW this doesn't see a lot of traffic, come find us in slack!) 30 | 31 | ## organizers 32 | 33 | - [Aaron Crickenberger, Samsung SDS](https://github.com/spiffxp), email: spiffxp@gmail.com 34 | - [Jeff Grafton, Google](https://github.com/ixdy), email: jgrafton@google.com 35 | -------------------------------------------------------------------------------- /contributors/devel/logging.md: -------------------------------------------------------------------------------- 1 | ## Logging Conventions 2 | 3 | The following are the conventions for which glog levels to use. 4 | [glog](http://godoc.org/github.com/golang/glog) is globally preferred to 5 | [log](http://golang.org/pkg/log/) for better runtime control. 6 | 7 | * glog.Errorf() - Always an error 8 | 9 | * glog.Warningf() - Something unexpected, but probably not an error 10 | 11 | * glog.Infof() has multiple levels: 12 | * glog.V(0) - Generally useful for this to ALWAYS be visible to an operator 13 | * Programmer errors 14 | * Logging extra info about a panic 15 | * CLI argument handling 16 | * glog.V(1) - A reasonable default log level if you don't want verbosity. 17 | * Information about config (listening on X, watching Y) 18 | * Errors that repeat frequently that relate to conditions that can be corrected (pod detected as unhealthy) 19 | * glog.V(2) - Useful steady state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems. 20 | * Logging HTTP requests and their exit code 21 | * System state changing (killing pod) 22 | * Controller state change events (starting pods) 23 | * Scheduler log messages 24 | * glog.V(3) - Extended information about changes 25 | * More info about system state changes 26 | * glog.V(4) - Debug level verbosity (for now) 27 | * Logging in particularly thorny parts of code where you may want to come back later and check it 28 | 29 | As per the comments, the practical default level is V(2). Developers and QE 30 | environments may wish to run at V(3) or V(4). If you wish to change the log 31 | level, you can pass in `-v=X` where X is the desired maximum level to log.
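To make the conventions concrete, here is a minimal sketch of how a component might apply them; `syncPod` and `doSync` are hypothetical names invented for this example, not real Kubernetes functions:

```go
package main

import (
	"flag"
	"fmt"

	"github.com/golang/glog"
)

// doSync is a hypothetical stand-in for real work, used only for this example.
func doSync(podName string) error {
	return fmt.Errorf("container for %q not found", podName)
}

func syncPod(podName string) {
	// V(2): useful steady state information about a significant system change.
	glog.V(2).Infof("Starting sync for pod %q", podName)

	if err := doSync(podName); err != nil {
		// Errorf: always an actual error.
		glog.Errorf("Failed to sync pod %q: %v", podName, err)
		return
	}

	// V(4): debug-level detail for particularly thorny code paths.
	glog.V(4).Infof("Finished sync for pod %q", podName)
}

func main() {
	flag.Parse() // glog registers its flags (including -v) on the default flag set
	defer glog.Flush()
	syncPod("example-pod")
}
```

Run this with `-logtostderr -v=2` and the V(2) line is printed while the V(4) one is suppressed; raising the level to `-v=4` shows both.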
32 | 33 | 34 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/logging.md?pixel)]() 35 | 36 | -------------------------------------------------------------------------------- /sig-apps/minutes/2016-06-08.md: -------------------------------------------------------------------------------- 1 | # June 8, 2016 2 | 3 | - Intro by Michelle Noorali 4 | - Adnan Abdulhussein, Stacksmith lead at Bitnami, did a demo of Stacksmith 5 | - In the container world, updates to your application's stack or environment are rolled out by bringing down outdated containers and replacing them with an updated container image. Tools like Docker and Kubernetes make it incredibly easy to do this; however, knowing when your stack is outdated or vulnerable and starting the upgrade process is still a manual step. Stacksmith is a service that aims to solve this by maintaining your base Dockerfiles and proactively keeping them up-to-date and secure. This demo walked through how you can use Stacksmith with your application on GitHub to provide continuous delivery of your application container images. 6 | - Adnan is available as @prydonius on the Kubernetes slack as well as on [twitter](https://twitter.com/prydonius) for questions and feedback. 7 | - Feel free to leave feedback on the [Stacksmith](https://stacksmith.bitnami.com/) feedback tab. 8 | - Matt Farina gave an update on the SIG-Apps survey. 9 | - You can find a list of the current questions [here](https://docs.google.com/spreadsheets/d/1d4P_-lNGzw4jS9T4iizQBOtWT4O6e5yjDUygINigJnA/edit#gid=0) in a view-only state. 10 | - If you are interested in submitting questions or being a part of the team that collects/analyzes the data, please reach out to @mattfarina on the Kubernetes slack channel. 11 | - Michelle Noorali gave an update on where you can find information and examples on PetSets. 12 | - Here are some links provided by Prashanth from Google. 13 | - [github issue](https://github.com/kubernetes/kubernetes/issues/260#issuecomment-220395798) 14 | - [example pets](https://github.com/kubernetes/contrib/tree/master/pets) 15 | - Feel free to get your hands dirty. We will be discussing the provided examples in the upcoming weeks. 16 | 17 | Watch the [recording](https://youtu.be/wXZAXemhGb0). 18 | -------------------------------------------------------------------------------- /contributors/devel/getting-builds.md: -------------------------------------------------------------------------------- 1 | # Getting Kubernetes Builds 2 | 3 | You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) 4 | to get a build or to use as a reference on how to get the most recent builds 5 | with curl. With `get-build.sh` you can grab the most recent stable build, the 6 | most recent release candidate, or the most recent build to pass our ci and gce 7 | e2e tests (essentially a nightly build). 8 | 9 | Run `./hack/get-build.sh -h` for its usage. 10 | 11 | To get a build at a specific version (v1.1.1) use: 12 | 13 | ```console 14 | ./hack/get-build.sh v1.1.1 15 | ``` 16 | 17 | To get the latest stable release: 18 | 19 | ```console 20 | ./hack/get-build.sh release/stable 21 | ``` 22 | 23 | Use the "-v" option to print the version number of a build without retrieving 24 | it. For example, the following prints the version number for the latest ci 25 | build: 26 | 27 | ```console 28 | ./hack/get-build.sh -v ci/latest 29 | ``` 30 | 31 | You can also use the gsutil tool to explore the Google Cloud Storage release 32 | buckets.
Here are some examples: 33 | 34 | ```sh 35 | gsutil cat gs://kubernetes-release-dev/ci/latest.txt # output the latest ci version number 36 | gsutil cat gs://kubernetes-release-dev/ci/latest-green.txt # output the latest ci version number that passed gce e2e 37 | gsutil ls gs://kubernetes-release-dev/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release 38 | gsutil ls gs://kubernetes-release/release # list all official releases and rcs 39 | ``` 40 | 41 | ## Install `gsutil` 42 | 43 | Example installation: 44 | 45 | ```console 46 | $ curl -sSL https://storage.googleapis.com/pub/gsutil.tar.gz | sudo tar -xz -C /usr/local/src 47 | $ sudo ln -s /usr/local/src/gsutil/gsutil /usr/bin/gsutil 48 | ``` 49 | 50 | 51 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/getting-builds.md?pixel)]() 52 | 53 | -------------------------------------------------------------------------------- /contributors/devel/go-code.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Go Tools and Tips 2 | 3 | Kubernetes is one of the largest open source Go projects, so good tooling and a solid understanding of 4 | Go are critical to Kubernetes development. This document provides a collection of resources, tools 5 | and tips that our developers have found useful. 6 | 7 | ## Recommended Reading 8 | 9 | - [Kubernetes Go development environment](development.md#go-development-environment) 10 | - [The Go Spec](https://golang.org/ref/spec) - The Go Programming Language 11 | Specification. 12 | - [Go Tour](https://tour.golang.org/welcome/2) - Official Go tutorial. 13 | - [Effective Go](https://golang.org/doc/effective_go.html) - A good collection of Go advice. 14 | - [Kubernetes Code conventions](coding-conventions.md) - Style guide for Kubernetes code. 15 | - [Three Go Landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f) - Surprising behavior in the Go language. These have caused real bugs! 16 | 17 | ## Recommended Tools 18 | 19 | - [godep](https://github.com/tools/godep) - Used for Kubernetes dependency management. See also [Kubernetes godep and dependency management](development.md#godep-and-dependency-management) 20 | - [Go Version Manager](https://github.com/moovweb/gvm) - A handy tool for managing Go versions. 21 | - [godepq](https://github.com/google/godepq) - A tool for analyzing go import trees. 22 | 23 | ## Go Tips 24 | 25 | - [Godoc bookmarklet](https://gist.github.com/timstclair/c891fb8aeb24d663026371d91dcdb3fc) - navigate from a github page to the corresponding godoc page. 26 | - Consider making a separate Go tree for each project, which can make overlapping dependency management much easier. Remember to set the `$GOPATH` correctly! Consider [scripting](https://gist.github.com/timstclair/17ca792a20e0d83b06dddef7d77b1ea0) this. 27 | - Emacs users - set up [go-mode](https://github.com/dominikh/go-mode.el) 28 | 29 | 30 | 31 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/go-code.md?pixel)]() 32 | 33 | -------------------------------------------------------------------------------- /sig-apps/minutes/2016-07-27.md: -------------------------------------------------------------------------------- 1 | # July 27, 2016 2 | 3 | * Intro by Matt 4 | * [Gerred Dillon](https://twitter.com/justicefries) presented a demo of [Deis](https://deis.com/) including features in the 2.2 release. 5 | * To turn an application into a container, Deis uses Heroku buildpacks.
6 | * [slugbuilder](https://github.com/deis/slugbuilder) converts an application and buildpack into a container and puts the image into a private registry. Pod manifest, service manifest, etc. are also created. 7 | * Q: Is slugbuilder a separate microservice others can use? 8 | A: Yes. It's a plugin to [builder](https://github.com/deis/builder) that can be used with other builders as well. For example, there is some way to incorporate an object store for storage of images. 9 | * Q: How does Deis use namespaces? 10 | A: There is a separate one for each application. Then other tools can deploy supporting services to the same namespace. 11 | * Q: What languages are used in Deis? 12 | A: Rails, Phoenix, Django, some PHP, and node are examples of apps today. There are currently not many Go applications. 13 | * Q: Will Deis add back-end services? 14 | A: Can't really say. They are getting plenty of requests for them. This may be a place helm will step in and be useful. 15 | * Deis 2.2 adds the ability to use Deployments, which was showcased. This is optional and not the default right now. 16 | * Matt presented the results of the App Operator survey. 17 | * The raw details are available to [view or download](https://docs.google.com/spreadsheets/d/15SUL7QTpR4Flrp5eJ5TR8A5ZAFwbchfX2QL4MEoJFQ8/edit?usp=sharing). 18 | * A roll-up of some interesting points is [available]( 19 | https://docs.google.com/presentation/d/13votPnx9xKVBoQbLHz78I9EWFCosoioFqjTFeb4vB6M/edit?usp=sharing 20 | ). 21 | * The use of Kubernetes secrets was discussed. 47% of respondents use secrets. Secrets' current storage mechanism is unencrypted at rest. This is a known limitation, and we are waiting for someone in the community to step in and provide support for systems like HashiCorp Vault or hardware security modules where appropriate. 22 | 23 | _Note, due to technical difficulties this meeting was not recorded._ 24 | -------------------------------------------------------------------------------- /community/developer-summit-2016/KubDevSummitVoting.md: -------------------------------------------------------------------------------- 1 | ### Kubernetes Developer's Summit Discussion Topics Voting 2 | 3 | A voting poll for discussion topic proposals has been created, and the link to the poll can be found [here][poll]. 4 | 5 | The poll will close on 10/07/16 at 23:59:59 PDT. 6 | 7 | #### How Does it Work? 8 | 9 | The voting uses the Condorcet method, which relies on relative rankings to pick winners. You can read more about the Condorcet method and the voting service we're using on the [CIVS website][civs]. 10 | 11 | There are 27 topics to choose from, and you will rank them from 1 (favorite) to 27 (least favorite). You can also mark "no opinion" on topics that you don't wish to include in the ranking. 12 | 13 | The poll on CIVS has just the topic titles for ease of viewing. For topic descriptions, please see this [spreadsheet][topics]. The topic order on the voting service should mirror the order on the spreadsheet. 14 | 15 | You will note the message saying "*Only the 15 favorite choices will win the poll*". CIVS requires that a number of winners be selected, and we have arbitrarily chosen 15. The final schedule may include more or fewer than 15 of the submitted topics. 16 | 17 | ##### A Small Request 18 | 19 | 20 | In order to make the poll accessible via URL, it has to be made "public". This means that any unique IP address can vote, which can be easily exploited for multiple votes.
21 | 22 | We fully expect the community to behave with sportsmanship and only vote once, and as such we almost didn't bring this concern up to begin with. However, we have chosen to explicitly address it, in order to reiterate the importance of everyone's voice receiving equal weight in a community-driven event. 23 | 24 | #### After the Poll 25 | 26 | A schedule will be made from the winning topics with 27 | some editorial license, and the schedule will be announced to the group 28 | at least a week before the event. 29 | 30 | [//]: # (Reference Links) 31 | [civs]: 32 | [poll]: 33 | [topics]: -------------------------------------------------------------------------------- /community/Kubernetes_1st_Bday.md: -------------------------------------------------------------------------------- 1 | # Kubernetes 1st Birthday Announcement 2 | 3 | Dear Kubernauts, 4 | 5 | Happy birthday! The Kubernetes community is celebrating the first anniversary of the v1.0 release on July 21st, 2016. We are reaching out to you as the chief point(s) of contact for your local Kubernetes meetup to encourage you to throw a Kubernetes birthday party! 6 | 7 | This is very exciting for the Kubernetes community, and we are happy to support your party planning efforts. We have swag we can provide (including awesome Kubernetes party hats!) and we will do our best to support your other endeavors. 8 | 9 | Since the actual anniversary is on July 21st, that date will likely be receiving the majority of the press and social media attention. But, by no means do you need to force your meetup to happen on July 21st; any time in late July or early August works great! We’re just hoping to get more people excited about Kubernetes and get users and developers meeting across the world. 10 | 11 | You may additionally be interested in joining the [Kubernetes Meetup leads mailing list](https://groups.google.com/forum/#!members/kubernetes-meetup-leads) where you can meet other organizers and swap tips. The [CNCF](https://cncf.io/community) is also working with Kubernetes meetups to offer support for things like Meetup.com fees and events, including financial support of up to $250 for some official meetups. For more information on that effort, you can contact [Chris Aniszczyk](mailto:caniszczyk@linuxfoundation.org) and [Brett Preston](mailto:bpreston@linuxfoundation.org). 12 | 13 | Please contact us via [this Google form](https://docs.google.com/forms/d/1B17ckkz-FYEFhkQ2ZD8PZBH1ZnSCdpS366lB3a6MAtE/viewform) with any questions / requests / suggestions for your meetup. Alternatively, you can reach out to us directly (czahedi@google.com and sarahnovotny@google.com). We hope to hear from you soon! 14 | 15 | Lastly, if you know other people working with Kubernetes, please send this invitation their way! It could be a great chance for them to plug into a Meetup.com group or the mailing list above. 16 | 17 | And lest we forget, I’m Cameron Zahedi. I’ll be working with Sarah on her Kubernetes community endeavors, and I’m happy to help (or find the right person to help!) wherever possible. 18 | 19 | Cheers, 20 | 21 | Cameron and Sarah (on behalf of the Kubernetes Community)
22 | 23 | 24 | -------------------------------------------------------------------------------- /contributors/devel/on-call-rotations.md: -------------------------------------------------------------------------------- 1 | ## Kubernetes On-Call Rotations 2 | 3 | ### Kubernetes "first responder" rotations 4 | 5 | Kubernetes has generated a lot of public traffic: email, pull-requests, bugs, 6 | etc. So much traffic that it's becoming impossible to keep up with it all! This 7 | is a fantastic problem to have. In order to be sure that SOMEONE, but not 8 | EVERYONE on the team is paying attention to public traffic, we have instituted 9 | two "first responder" rotations, listed below. Please read this page before 10 | proceeding to the pages linked below, which are specific to each rotation. 11 | 12 | Please also read our [notes on OSS collaboration](collab.md), particularly the 13 | bits about hours. Specifically, each rotation is expected to be active primarily 14 | during work hours, less so off hours. 15 | 16 | During regular workday work hours of your shift, your primary responsibility is 17 | to monitor the traffic sources specific to your rotation. You can check traffic 18 | in the evenings if you feel so inclined, but it is not expected to be as highly 19 | focused as work hours. For weekends, you should check traffic very occasionally 20 | (e.g. once or twice a day). Again, it is not expected to be as highly focused as 21 | workdays. It is assumed that over time, everyone will get weekday and weekend 22 | shifts, so the workload will balance out. 23 | 24 | If you cannot serve your shift, and you know this ahead of time, it is your 25 | responsibility to find someone to cover and to change the rotation. If you have 26 | an emergency, your responsibilities fall on the primary of the other rotation, 27 | who acts as your secondary. If you need help to cover all of the tasks, reach out to partners 28 | with on-call rotations (e.g., 29 | [Redhat](https://github.com/orgs/kubernetes/teams/rh-oncall)). 30 | 31 | If you are not on duty you DO NOT need to do these things. You are free to focus 32 | on "real work". 33 | 34 | Note that Kubernetes will occasionally enter code slush/freeze, prior to 35 | milestones. When it does, there might be changes in the instructions (assigning 36 | milestones, for instance). 37 | 38 | * [Github and Build Cop Rotation](on-call-build-cop.md) 39 | * [User Support Rotation](on-call-user-support.md) 40 | 41 | 42 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-rotations.md?pixel)]() 43 | 44 | -------------------------------------------------------------------------------- /project-managers/README.md: -------------------------------------------------------------------------------- 1 | ## Kubernetes PM 2 | 3 | kubernetes-pm is a GitHub team that will help to manage and maintain the project in ways other than just writing code.
Specifically, members of the kubernetes-pm team are 4 | expected to: 5 | 6 | * Add/change routing labels to issues ([sig/, area/](https://github.com/kubernetes/kubernetes/wiki)) 7 | * Apply release-note labels to PRs (until that is automated or eliminated) 8 | * Set the milestone on a PR or issue 9 | * Assign issues to the correct people for immediate triage/work 10 | * Assign PRs to be reviewed by the correct people 11 | * Close duplicate, stale, or misplaced issues 12 | * Close support issues, with a redirect to stackoverflow 13 | 14 | They should **NOT**: 15 | 16 | * Manually apply merge labels to PRs (lgtm, approved) or press the merge button 17 | * Apply priority labels to PRs 18 | * Apply cherrypick labels to PRs 19 | * Edit text of other people's PRs and issues, including deleting comments 20 | * Modify anyone else's release note 21 | * Create, edit, delete labels 22 | * Create, edit, close, delete milestones 23 | * Create, edit, delete releases 24 | * Create, edit, delete PR statuses 25 | * Edit wiki 26 | 27 | Abuse and misuse will result in prompt removal from the Project Managers group and loss of privileges. 28 | 29 | ## Joining the group 30 | 31 | The following is the provisional process for adding people to the group: 32 | 33 | 1. Join kubernetes-wg-contribex at googlegroups.com. Send a message to that group that states: 34 | 1. Your github id. 35 | 1. What you are working on with respect to the project. 36 | 1. Any official management or leadership responsibilities you may have (e.g., manager of a team that includes the following contributors: ….). 37 | 1. What you plan to do with the powers, including specific labels, SIGs, system components, github teams, etc. 38 | 1. How long you think you need the powers. 39 | 1. Any questions you have about appropriate application of the powers (e.g., policy for assigning priorities to issues, policy for assigning and unassigning issues, which parts of the system are covered by which teams). 40 | 41 | 42 | The Contributor Experience working-group leads or Brian Grant (@bgrant0607) will approve or deny your request. 43 | 44 | Some prerequisites may need to be completed prior to being added to the github team, such as clarifying relevant policies in our contributor documentation. 45 | 46 | Thanks for helping to manage the project! 47 | -------------------------------------------------------------------------------- /contributors/devel/collab.md: -------------------------------------------------------------------------------- 1 | # On Collaborative Development 2 | 3 | Kubernetes is open source, but many of the people working on it do so as their 4 | day job. In order to avoid forcing people to be "at work" effectively 24/7, we 5 | want to establish some semi-formal protocols around development. Hopefully these 6 | rules make things go more smoothly. If you find that this is not the case, 7 | please complain loudly. 8 | 9 | ## Patches welcome 10 | 11 | First and foremost: as a potential contributor, your changes and ideas are 12 | welcome at any hour of the day or night, weekdays, weekends, and holidays. 13 | Please do not ever hesitate to ask a question or send a PR. 14 | 15 | ## Code reviews 16 | 17 | All changes must be code reviewed. For non-maintainers this is obvious, since 18 | you can't commit anyway. But even for maintainers, we want all changes to get at 19 | least one review, preferably (and for non-trivial changes, obligatorily) from someone 20 | who knows the areas the change touches. For non-trivial changes we may want two 21 | reviewers.
The primary reviewer will make this decision and nominate a second 22 | reviewer, if needed. Except for trivial changes, PRs should not be committed 23 | until relevant parties (e.g. owners of the subsystem affected by the PR) have 24 | had a reasonable chance to look at the PR in their local business hours. 25 | 26 | Most PRs will find reviewers organically. If a maintainer intends to be the 27 | primary reviewer of a PR they should set themselves as the assignee on GitHub 28 | and say so in a reply to the PR. Only the primary reviewer of a change should 29 | actually do the merge, except in rare cases (e.g. they are unavailable in a 30 | reasonable timeframe). 31 | 32 | If a PR has gone 2 work days without an owner emerging, please poke the PR 33 | thread and ask for a reviewer to be assigned. 34 | 35 | Except for rare cases, such as trivial changes (e.g. typos, comments) or 36 | emergencies (e.g. broken builds), maintainers should not merge their own 37 | changes. 38 | 39 | Expect reviewers to request that you avoid [common Go style 40 | mistakes](https://github.com/golang/go/wiki/CodeReviewComments) in your PRs. 41 | 42 | ## Assigned reviews 43 | 44 | Maintainers can assign reviews to other maintainers, when appropriate. The 45 | assignee becomes the shepherd for that PR and is responsible for merging the PR 46 | once they are satisfied with it or else closing it. The assignee might request 47 | reviews from non-maintainers. 48 | 49 | 50 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/collab.md?pixel)]() 51 | 52 | -------------------------------------------------------------------------------- /community/developer-summit-2016/statefulset_notes.md: -------------------------------------------------------------------------------- 1 | # StatefulSets Session 2 | 3 | Topics to talk about: 4 | * local volumes 5 | * requests for the storage sig 6 | * reclaim policies 7 | * Filtering APIs for scheduler 8 | * Data locality 9 | * State of the StatefulSet 10 | * Portable IPs 11 | * Sticky Regions 12 | * Renaming Pods 13 | 14 | ## State of the StatefulSet 15 | 16 | 1.5 will come out soon; we'll go beta for StatefulSets in that one. One of the questions is: what are the next steps for StatefulSets? One thing is a long beta, so that we know we can trust StatefulSets and they're safe. 17 | 18 | Missed some discussion here about force deletion. 19 | 20 | The pod isn't done until the kubelet says it's done. The issue is what happens when we have a netsplit, because the master doesn't know what's happening with the pods. In the future we'll maybe add some kind of fencer to make sure that they can't rejoin. Fencing is probably a topic for the Bare-Metal SIG. 21 | 22 | Are we going to sacrifice availability for consistency? We won't explicitly take actions which aren't safe automatically. Question: should the kubelet delete automatically if it can't contact the master? No, because it can't contact the master to say it did it. 23 | 24 | When are we going to finish the rename from PetSet to StatefulSet? The PR is merged for renaming, but the documentation changes aren't. 25 | 26 | Storage provisioning? The assumption is that you will be able to preallocate a lot of storage for dynamic storage so that you can stamp out PVCs. If dynamic volumes aren't simple to use, this is a lot more annoying. 27 | 28 | Building initial quorums issue? 29 | 30 | It would be great to have a developer storage class which ties back to a fake NFS. For testing and dev.
The idea behind local volumes is that it should be easy to create throwaway storage on local disk, so that you can write things which run on every kube cluster. 31 | 32 | Will there be an API for the application? To communicate members joining and leaving. The answer today is that's what the Kube API is for. 33 | 34 | The hard problem is config change. You can't do config change unless you bootstrap it correctly. If kube is changing things under me I can't maintain quorum (as an app). This happens when expanding the set of nodes. You need to figure out who's in and who's out. 35 | 36 | Where does the glue software which relates the StatefulSet to the application live? But different applications handle things like consensus and quorum very differently. What about notifying the service that you're available for traffic? There's an example of this with etcd, with readiness vs. a membership service. You can have two states, one where the node is ready, and one where the application is ready. Readiness vs. liveness checks could differentiate? 37 | 38 | Is rapid spin-up a real issue? Nobody thinks so. 39 | -------------------------------------------------------------------------------- /contributors/devel/instrumentation.md: -------------------------------------------------------------------------------- 1 | ## Instrumenting Kubernetes with a new metric 2 | 3 | The following is a step-by-step guide for adding a new metric to the Kubernetes 4 | code base. 5 | 6 | We use the Prometheus monitoring system's golang client library for 7 | instrumenting our code. Once you've picked out a file that you want to add a 8 | metric to, you should: 9 | 10 | 1. Import "github.com/prometheus/client_golang/prometheus". 11 | 12 | 2. Create a top-level var to define the metric. For this, you have to: 13 | 14 | 1. Pick the type of metric. Use a Gauge for things you want to set to a 15 | particular value, a Counter for things you want to increment, or a Histogram or 16 | Summary for histograms/distributions of values (typically for latency). 17 | Histograms are better if you're going to aggregate the values across jobs, while 18 | summaries are better if you just want the job to give you a useful summary of 19 | the values. 20 | 2. Give the metric a name and description. 21 | 3. Pick whether you want to distinguish different categories of things using 22 | labels on the metric. If so, add "Vec" to the name of the type of metric you 23 | want and add a slice of the label names to the definition. 24 | 25 | https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L53 26 | https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L31 27 | 28 | 3. Register the metric so that Prometheus will know to export it. 29 | 30 | https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L74 31 | https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L78 32 | 33 | 4.
Use the metric by calling the appropriate method for your metric type (Set, 34 | Inc/Add, or Observe, respectively for Gauge, Counter, or Histogram/Summary), 35 | first calling WithLabelValues if your metric has any labels. 36 | 37 | https://github.com/kubernetes/kubernetes/blob/3ce7fe8310ff081dbbd3d95490193e1d5250d2c9/pkg/kubelet/kubelet.go#L1384 38 | https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L87 39 | 40 | 41 | These are the metric type definitions if you're curious to learn about them or 42 | need more information: 43 | 44 | https://github.com/prometheus/client_golang/blob/master/prometheus/gauge.go 45 | https://github.com/prometheus/client_golang/blob/master/prometheus/counter.go 46 | https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go 47 | https://github.com/prometheus/client_golang/blob/master/prometheus/summary.go 48 | 49 | 50 | 51 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/instrumentation.md?pixel)]() 52 | 53 | -------------------------------------------------------------------------------- /contributors/devel/profiling.md: -------------------------------------------------------------------------------- 1 | # Profiling Kubernetes 2 | 3 | This document explains how to plug in the profiler and how to profile Kubernetes services. 4 | 5 | ## Profiling library 6 | 7 | Go comes with the built-in 'net/http/pprof' profiling library and profiling web service. The way the service works is by binding the debug/pprof/ subtree on a running webserver to the profiler. Reading from subpages of debug/pprof returns pprof-formatted profiles of the running binary. The output can be processed offline by the tool of choice, or used as an input to the handy 'go tool pprof', which can graphically represent the result. 8 | 9 | ## Adding profiling to the APIserver 10 | 11 | TL;DR: Add the lines: 12 | 13 | ```go 14 | m.mux.HandleFunc("/debug/pprof/", pprof.Index) 15 | m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile) 16 | m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol) 17 | ``` 18 | 19 | to the init(c *Config) method in 'pkg/master/master.go' and import the 'net/http/pprof' package. 20 | 21 | In most use cases, to use the profiler service it's enough to do 'import _ net/http/pprof', which automatically registers a handler in the default http.Server. A slight inconvenience is that the APIserver uses the default server for intra-cluster communication, so plugging the profiler into it is not really useful. In 'pkg/kubelet/server/server.go' more servers are created and started as separate goroutines. The one that usually serves external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in the Handler variable. It is created from an HTTP multiplexer, so the only thing that needs to be done is adding the profiler handler functions to this multiplexer. This is exactly what the lines after the TL;DR do. 22 | 23 | ## Connecting to the profiler 24 | 25 | Even when running the profiler, I found it not really straightforward to use 'go tool pprof' with it. The problem is that, at least for dev purposes, the certificates generated for the APIserver are not signed by anyone trusted, and because secureServer serves only secure traffic, it isn't straightforward to connect to the service. The best workaround I found is to create an ssh tunnel from the kubernetes_master open unsecured port to some external server, and to use this server as a proxy.
To save everyone looking for the correct ssh flags, it is done by running: 26 | 27 | ```sh 28 | ssh kubernetes_master -L<local_port>:localhost:8080 29 | ``` 30 | 31 | or an analogous one for your cloud provider. Afterwards you can e.g. run 32 | 33 | ```sh 34 | go tool pprof http://localhost:<local_port>/debug/pprof/profile 35 | ``` 36 | 37 | to get a 30 sec. CPU profile. 38 | 39 | ## Contention profiling 40 | 41 | To enable contention profiling you need to add the line `rt.SetBlockProfileRate(1)` in addition to the `m.mux.HandleFunc(...)` lines added before (`rt` stands for `runtime` in `master.go`). This enables the 'debug/pprof/block' subpage, which can be used as an input to `go tool pprof`. 42 | 43 | 44 | 45 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/profiling.md?pixel)]() 46 | 47 | -------------------------------------------------------------------------------- /contributors/devel/cherry-picks.md: -------------------------------------------------------------------------------- 1 | # Overview 2 | 3 | This document explains how cherry picks are managed on release branches within the 4 | Kubernetes projects. Patches are either applied in batches or individually 5 | depending on the point in the release cycle. 6 | 7 | ## Propose a Cherry Pick 8 | 9 | 1. Cherrypicks are [managed with labels and milestones](pull-requests.md#release-notes) 10 | 11 | 1. To get a PR merged to the release branch, first ensure the following labels 12 | are on the original **master** branch PR: 13 | * An appropriate milestone (e.g. v1.3) 14 | * The `cherrypick-candidate` label 15 | 1. If `release-note-none` is set on the master PR, the cherrypick PR will need 16 | to set the same label to confirm that no release note is needed. 17 | 1. `release-note` labeled PRs generate a release note using the PR title by 18 | default OR the release-note block in the PR template if filled in. 19 | * See the [PR template](../../.github/PULL_REQUEST_TEMPLATE.md) for more 20 | details. 21 | * PR titles and body comments are mutable and can be modified at any time 22 | prior to the release to reflect a release note friendly message. 23 | 24 | ### How do cherrypick-candidates make it to the release branch? 25 | 26 | 1. **BATCHING:** After a branch is first created and before the X.Y.0 release 27 | * Branch owners review the list of `cherrypick-candidate` labeled PRs. 28 | * PRs batched up and merged to the release branch get a `cherrypick-approved` 29 | label and lose the `cherrypick-candidate` label. 30 | * PRs that won't be merged to the release branch lose the 31 | `cherrypick-candidate` label. 32 | 33 | 1. **INDIVIDUAL CHERRYPICKS:** After the first X.Y.0 on a branch 34 | * Run the cherry pick script. This example applies a master branch PR #98765 35 | to the remote branch `upstream/release-3.14`: 36 | `hack/cherry_pick_pull.sh upstream/release-3.14 98765` 37 | * Your cherrypick PR (targeted to the branch) will immediately get the 38 | `do-not-merge` label. The branch owner will triage PRs targeted to 39 | the branch and label the ones to be merged by applying the `lgtm` 40 | label. 41 | 42 | There is an [issue](https://github.com/kubernetes/kubernetes/issues/23347) open 43 | tracking the tool to automate the batching procedure. 44 | 45 | ## Cherry Pick Review 46 | 47 | Cherry pick pull requests are reviewed differently than normal pull requests.
In 48 | particular, they may be self-merged by the release branch owner without fanfare, 49 | in case the release branch owner knows the cherry pick was already 50 | requested - this should not be the norm, but it may happen. 51 | 52 | ## Searching for Cherry Picks 53 | 54 | See the [cherrypick queue dashboard](http://cherrypick.k8s.io/#/queue) for 55 | status of PRs labeled as `cherrypick-candidate`. 56 | 57 | [Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) are 58 | considered implicit for all code within cherry-pick pull requests, ***unless 59 | there is a large conflict***. 60 | 61 | 62 | 63 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/cherry-picks.md?pixel)]() 64 | 65 | -------------------------------------------------------------------------------- /contributors/design-proposals/runtimeconfig.md: -------------------------------------------------------------------------------- 1 | # Overview 2 | 3 | Proposes adding a `--feature-config` to core kube system components: 4 | apiserver, scheduler, controller-manager, kube-proxy, and selected addons. 5 | This flag will be used to enable/disable alpha features on a per-component basis. 6 | 7 | ## Motivation 8 | 9 | The motivation is enabling/disabling features that are not tied to 10 | an API group. API groups can be selectively enabled/disabled in the 11 | apiserver via the existing `--runtime-config` flag on the apiserver, but there is 12 | currently no mechanism to toggle alpha features that are controlled by 13 | e.g. annotations. This means the burden of controlling whether such 14 | features are enabled in a particular cluster is on feature implementors; 15 | they must either define some ad hoc mechanism for toggling (e.g. flag 16 | on component binary) or else toggle the feature on/off at compile time. 17 | 18 | By adding a `--feature-config` to all kube-system components, alpha features 19 | can be toggled on a per-component basis by passing `enable<AlphaFeature>=true|false` 20 | to `--feature-config` for each component that the feature touches. 21 | 22 | ## Design 23 | 24 | The following components will all get a `--feature-config` flag, 25 | which loads a `config.ConfigurationMap`: 26 | 27 | - kube-apiserver 28 | - kube-scheduler 29 | - kube-controller-manager 30 | - kube-proxy 31 | - kube-dns 32 | 33 | (Note kubelet is omitted; its dynamic config story is being addressed 34 | by [#29459](https://issues.k8s.io/29459)). Alpha features that are not accessed via an alpha API 35 | group should define an `enable<FeatureName>` flag and use it to toggle 36 | activation of the feature in each system component that the feature 37 | uses. 38 | 39 | ## Suggested conventions 40 | 41 | This proposal only covers adding a mechanism to toggle features in 42 | system components. Implementation details will still depend on the alpha 43 | feature's owner(s). The following are suggested conventions: 44 | 45 | - Naming for feature config entries should follow the pattern 46 | "enable<FeatureName>=true". 47 | - Features that touch multiple components should reserve the same key 48 | in each component to toggle on/off. 49 | - Alpha features should be disabled by default. Beta features may 50 | be enabled by default. Refer to docs/devel/api_changes.md#alpha-beta-and-stable-versions 51 | for more detailed guidance on alpha vs. beta. 52 | 53 | ## Upgrade support 54 | 55 | As the primary motivation for cluster config is toggling alpha 56 | features, upgrade support is not in scope.
Enabling or disabling 57 | a feature is necessarily a breaking change, so config should 58 | not be altered in a running cluster. 59 | 60 | ## Future work 61 | 62 | 1. The eventual plan is for component config to be managed by versioned 63 | APIs and not flags ([#12245](https://issues.k8s.io/12245)). When that is added, toggling of features 64 | could be handled by versioned component config and the component flags 65 | deprecated. 66 | 67 | 68 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/runtimeconfig.md?pixel)]() 69 | 70 | -------------------------------------------------------------------------------- /contributors/devel/generating-clientset.md: -------------------------------------------------------------------------------- 1 | # Generation and release cycle of clientset 2 | 3 | Client-gen is an automatic tool that generates [clientset](../../docs/proposals/client-package-structure.md#high-level-client-sets) based on API types. This doc introduces the use of client-gen, and the release cycle of the generated clientsets. 4 | 5 | ## Using client-gen 6 | 7 | The workflow includes three steps: 8 | 9 | **1.** Marking API types with tags: in `pkg/apis/${GROUP}/${VERSION}/types.go`, mark the types (e.g., Pods) that you want to generate clients for with the `// +genclient=true` tag. If the resource associated with the type is not namespace scoped (e.g., PersistentVolume), you need to append the `nonNamespaced=true` tag as well. 10 | 11 | **2a.** If you are developing in the k8s.io/kubernetes repository, you just need to run hack/update-codegen.sh. 12 | 13 | **2b.** If you are running client-gen outside of k8s.io/kubernetes, you need to use the command line argument `--input` to specify the groups and versions of the APIs you want to generate clients for; client-gen will then look into `pkg/apis/${GROUP}/${VERSION}/types.go` and generate clients for the types you have marked with the `genclient` tags. For example, to generate a clientset named "my_release" including clients for api/v1 objects and extensions/v1beta1 objects, you need to run: 14 | 15 | ``` 16 | $ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release" 17 | ``` 18 | 19 | **3.** ***Adding expansion methods***: client-gen only generates the common methods, such as CRUD. You can manually add additional methods through the expansion interface. For example, this [file](../../pkg/client/clientset_generated/clientset/typed/core/v1/pod_expansion.go) adds additional methods to Pod's client. As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go. In most cases, you don't want to remove existing expansion files. So to make life easier, instead of creating a new clientset from scratch, ***you can copy and rename an existing clientset (so that all the expansion files are copied)***, and then run client-gen. 20 | 21 | ## Output of client-gen 22 | 23 | - clientset: the clientset will be generated at `pkg/client/clientset_generated/` by default, and you can change the path via the `--clientset-path` command line argument. 24 | 25 | - Individual typed clients and client for group: They will be generated at `pkg/client/clientset_generated/${clientset_name}/typed/generated/${GROUP}/${VERSION}/` 26 | 27 | ## Released clientsets 28 | 29 | If you are contributing code to k8s.io/kubernetes, try to use the generated clientset [here](../../pkg/client/clientset_generated/clientset/).
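For a rough, hedged sketch of what consuming a generated clientset looks like, see below; the import paths and exact signatures are assumptions that vary between client-go releases (newer versions, for example, also thread a `context.Context` through each call), so treat this as an illustration rather than a definitive API:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build client configuration from a kubeconfig file; the path is an example.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	// The clientset aggregates the typed clients that client-gen produced
	// for every built-in API group/version.
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// CoreV1().Pods(...) is one of the generated typed clients, exposing the
	// common CRUD methods (Get, List, Create, Update, Delete, ...).
	pods, err := clientset.CoreV1().Pods("default").List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("There are %d pods in the default namespace\n", len(pods.Items))
}
```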
30 | 31 | If you need a stable Go client to build your own project, please refer to the [client-go repository](https://github.com/kubernetes/client-go). 32 | 33 | We are migrating k8s.io/kubernetes to use client-go as well; see issue [#35159](https://github.com/kubernetes/kubernetes/issues/35159). 34 | 35 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/generating-clientset.md?pixel)]() 36 | 37 | 38 | 39 | 40 | 41 | -------------------------------------------------------------------------------- /contributors/devel/issues.md: -------------------------------------------------------------------------------- 1 | ## GitHub Issues for the Kubernetes Project 2 | 3 | A quick overview of how we will review and prioritize incoming issues at 4 | https://github.com/kubernetes/kubernetes/issues 5 | 6 | ### Priorities 7 | 8 | We use GitHub issue labels for prioritization. The absence of a priority label 9 | means the bug has not been reviewed and prioritized yet. 10 | 11 | We try to apply these priority labels consistently across the entire project, 12 | but if you notice an issue that you believe to be incorrectly prioritized, 13 | please do let us know and we will evaluate your counter-proposal. 14 | 15 | - **priority/critical-urgent**: Must be actively worked on as someone's top priority right 16 | now. Stuff is burning. If it's not being actively worked on, someone is expected 17 | to drop what they're doing immediately to work on it. Team leaders are 18 | responsible for making sure that all P0's in their area are being actively 19 | worked on. Examples include user-visible bugs in core features, broken builds or 20 | tests, and critical security issues. 21 | 22 | - **priority/failing-test**: An automatically filed, frequently failing test. Needs to be investigated. 23 | 24 | - **priority/important-soon**: Must be staffed and worked on either currently, or very soon, 25 | ideally in time for the next release. 26 | 27 | - **priority/important-longterm**: Important over the long term, but may not be currently 28 | staffed and/or may require multiple releases to complete. 29 | 30 | - **priority/backlog**: There appears to be general agreement that this would be good 31 | to have, but we may not have anyone available to work on it right now or in the 32 | immediate future. Community contributions would be most welcome in the meantime 33 | (although it might take a while to get them reviewed if reviewers are fully 34 | occupied with higher priority issues, for example immediately before a release). 35 | 36 | - **priority/awaiting-more-evidence**: Possibly useful, but not yet enough support to actually get 37 | it done. These are mostly place-holders for potentially good ideas, so that they 38 | don't get completely forgotten, and can be referenced/deduped every time they 39 | come up. 40 | 41 | ### Milestones 42 | 43 | We additionally use milestones, based on minor version, for determining if a bug 44 | should be fixed for the next release. These milestones will be especially 45 | scrutinized as we get to the weeks just before a release. We can release a new 46 | version of Kubernetes once they are empty. We will have two milestones per minor 47 | release. 48 | 49 | - **vX.Y**: The list of bugs that will be merged for that milestone once ready. 50 | 51 | - **vX.Y-candidate**: The list of bugs that we might merge for that milestone.
A bug shouldn't be in this milestone for more than a day or two towards the end of
a milestone. It should be triaged either into vX.Y or moved out of the release
milestones.

The above priority scheme still applies. P0 and P1 issues are work we feel must
get done before release. P2 and P3 issues are work we would merge into the
release if it gets done, but we wouldn't block the release on it. A few days
before release, we will probably move all P2 and P3 bugs out of that milestone
in bulk.


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/design-proposals/README.md:
--------------------------------------------------------------------------------

# Kubernetes Design Overview

Kubernetes is a system for managing containerized applications across multiple
hosts, providing basic mechanisms for deployment, maintenance, and scaling of
applications.

Kubernetes establishes robust declarative primitives for maintaining the desired
state requested by the user. We see these primitives as the main value added by
Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and
replicating containers, require active controllers, not just imperative
orchestration.

Kubernetes is primarily targeted at applications composed of multiple
containers, such as elastic, distributed micro-services. It is also designed to
facilitate migration of non-containerized application stacks to Kubernetes. It
therefore includes abstractions for grouping containers in both loosely coupled
and tightly coupled formations, and provides ways for containers to find and
communicate with each other in relatively familiar ways.

Kubernetes enables users to ask a cluster to run a set of containers. The system
automatically chooses hosts to run those containers on. While Kubernetes's
scheduler is currently very simple, we expect it to grow in sophistication over
time. Scheduling is a policy-rich, topology-aware, workload-specific function
that significantly impacts availability, performance, and capacity. The
scheduler needs to take into account individual and collective resource
requirements, quality of service requirements, hardware/software/policy
constraints, affinity and anti-affinity specifications, data locality,
inter-workload interference, deadlines, and so on. Workload-specific
requirements will be exposed through the API as necessary.

Kubernetes is intended to run on a number of cloud providers, as well as on
physical hosts.

A single Kubernetes cluster is not intended to span multiple availability zones.
Instead, we recommend building a higher-level layer to replicate complete
deployments of highly available applications across multiple zones (see
[the multi-cluster doc](../admin/multi-cluster.md) and [cluster federation proposal](../proposals/federation.md)
for more details).

Finally, Kubernetes aspires to be an extensible, pluggable, building-block OSS
platform and toolkit.
Therefore, architecturally, we want Kubernetes to be built
as a collection of pluggable components and layers, with the ability to use
alternative schedulers, controllers, storage systems, and distribution
mechanisms, and we're evolving its current code in that direction. Furthermore,
we want others to be able to extend Kubernetes functionality, such as with
higher-level PaaS functionality or multi-cluster layers, without modification of
core Kubernetes source. Therefore, its API isn't just (or even necessarily
mainly) targeted at end users, but at tool and extension developers. Its APIs
are intended to serve as the foundation for an open ecosystem of tools,
automation systems, and higher-level API layers. Consequently, there are no
"internal" inter-component APIs. All APIs are visible and available, including
the APIs used by the scheduler, the node controller, the replication-controller
manager, Kubelet's API, etc. There's no glass to break -- in order to handle
more complex use cases, one can just access the lower-level APIs in a fully
transparent, composable manner.

For more about the Kubernetes architecture, see [architecture](architecture.md).



[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/README.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/design-proposals/scalability-testing.md:
--------------------------------------------------------------------------------


## Background

We have a goal to be able to scale to 1000-node clusters by end of 2015.
As a result, we need to be able to run some kind of regression tests and deliver
a mechanism so that developers can test their changes with respect to performance.

Ideally, we would like to run performance tests also on PRs - although it might
be impossible to run them on every single PR, we may introduce a possibility for
a reviewer to trigger them if the change has a non-obvious impact on performance
(something like "k8s-bot run scalability tests please" should be feasible).

However, running performance tests on 1000-node clusters (or even bigger ones
in the future) is a non-starter. Thus, we need some more sophisticated infrastructure
to simulate big clusters on a relatively small number of machines and/or cores.

This document describes two approaches to tackling this problem.
Once we have a better understanding of their consequences, we may want to
decide to drop one of them, but we are not yet in that position.


## Proposal 1 - Kubemark

In this proposal we are focusing on scalability testing of master components.
We do NOT focus on node-scalability - this issue should be handled separately.

Since we do not focus on node performance, we need neither a real Kubelet nor a
real KubeProxy - in fact we don't even need to start real containers.
All we actually need is to have some Kubelet-like and KubeProxy-like components
that will simulate the load on the apiserver that their real equivalents
generate (e.g. sending NodeStatus updates, watching for pods, watching for
endpoints (KubeProxy), etc.).

What needs to be done:

1. Determine what requests both KubeProxy and Kubelet are sending to apiserver.
2.
Create a KubeletSim that generates the same load on the apiserver as the
real Kubelet, but does not start any containers. In the initial version we
can assume that pods never die, so it is enough to just react to the state
changes read from the apiserver.
TBD: Maybe we can reuse a real Kubelet for it by just injecting some "fake"
interfaces to it?
3. Similarly, create a KubeProxySim that generates the same load on the apiserver
as a real KubeProxy. Again, since we are not planning to talk to those
containers, it basically doesn't need to do anything apart from that.
TBD: Maybe we can reuse a real KubeProxy for it by just injecting some "fake"
interfaces to it?
4. Refactor kube-up/kube-down scripts (or create new ones) to allow starting
a cluster with KubeletSim and KubeProxySim instead of real ones and put
a bunch of them on a single machine.
5. Create a load generator for it (probably initially it would be enough to
reuse tests that we use in the gce-scalability suite).


## Proposal 2 - Oversubscribing

The other method we are proposing is to oversubscribe resources,
or in essence enable a single node to look like many separate nodes even though
they reside on a single host. This is a well-established pattern in many different
cluster managers (for more details see
http://www.uscms.org/SoftwareComputing/Grid/WMS/glideinWMS/doc.prd/index.html ).
There are a couple of different ways to accomplish this, but the most viable method
is to run privileged kubelet pods under a host's kubelet process. These pods then
register back with the master via the introspective service using modified names
so as not to collide.

Complications may currently exist around container tracking and ownership in Docker.



[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/scalability-testing.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/design-proposals/architecture.md:
--------------------------------------------------------------------------------

# Kubernetes architecture

A running Kubernetes cluster contains node agents (`kubelet`) and master
components (APIs, scheduler, etc), on top of a distributed storage solution.
This diagram shows our desired eventual state, though we're still working on a
few things, like making `kubelet` itself (all our components, really) run within
containers, and making the scheduler 100% pluggable.

![Architecture Diagram](architecture.png?raw=true "Architecture overview")

## The Kubernetes Node

When looking at the architecture of the system, we'll break it down into services
that run on the worker node and services that compose the cluster-level control
plane.

The Kubernetes node has the services necessary to run application containers and
be managed from the master systems.

Each node runs a container runtime (like Docker, rkt or Hyper). The container
runtime is responsible for downloading images and running containers.

### `kubelet`

The `kubelet` manages [pods](../user-guide/pods.md) and their containers, their
images, their volumes, etc.

### `kube-proxy`

Each node also runs a simple network proxy and load balancer (see the
[services FAQ](https://github.com/kubernetes/kubernetes/wiki/Services-FAQ) for
more details). This reflects `services` (see
[the services doc](../user-guide/services.md) for more details) as defined in
the Kubernetes API on each node and can do simple TCP and UDP stream forwarding
(round robin) across a set of backends.

Service endpoints are currently found via [DNS](../admin/dns.md) or through
environment variables (both
[Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) and
Kubernetes `{FOO}_SERVICE_HOST` and `{FOO}_SERVICE_PORT` variables are
supported). These variables resolve to ports managed by the service proxy.

## The Kubernetes Control Plane

The Kubernetes control plane is split into a set of components. Currently they
all run on a single _master_ node, but that is expected to change soon in order
to support high-availability clusters. These components work together to provide
a unified view of the cluster.

### `etcd`

All persistent master state is stored in an instance of `etcd`. This provides a
great way to store configuration data reliably. With `watch` support,
coordinating components can be notified very quickly of changes.

### Kubernetes API Server

The apiserver serves up the [Kubernetes API](../api.md). It is intended to be a
CRUD-y server, with most/all business logic implemented in separate components
or in plug-ins. It mainly processes REST operations, validates them, and updates
the corresponding objects in `etcd` (and eventually other stores).

### Scheduler

The scheduler binds unscheduled pods to nodes via the `/binding` API. The
scheduler is pluggable, and we expect to support multiple cluster schedulers and
even user-provided schedulers in the future.

### Kubernetes Controller Manager Server

All other cluster-level functions are currently performed by the Controller
Manager. For instance, `Endpoints` objects are created and updated by the
endpoints controller, and nodes are discovered, managed, and monitored by the
node controller. These could eventually be split into separate components to
make them independently pluggable.

The [`replicationcontroller`](../user-guide/replication-controller.md) is a
mechanism that is layered on top of the simple [`pod`](../user-guide/pods.md)
API. We eventually plan to port it to a generic plug-in mechanism, once one is
implemented.



[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/architecture.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/devel/README.md:
--------------------------------------------------------------------------------

# Kubernetes Developer Guide

The developer guide is for anyone wanting either to write code which directly accesses the
Kubernetes API or to contribute directly to the Kubernetes project.
It assumes some familiarity with concepts in the [User Guide](http://kubernetes.io/docs/user-guide/) and the [Cluster Admin
Guide](http://kubernetes.io/docs/admin/).


## The process of developing and contributing code to the Kubernetes project

* **On Collaborative Development** ([collab.md](collab.md)): Info on pull requests and code reviews.

* **GitHub Issues** ([issues.md](issues.md)): How incoming issues are reviewed and prioritized.

* **Pull Request Process** ([pull-requests.md](pull-requests.md)): When and why pull requests are closed.

* **Kubernetes On-Call Rotations** ([on-call-rotations.md](on-call-rotations.md)): Descriptions of on-call rotations for build and end-user support.

* **Faster PR reviews** ([faster_reviews.md](faster_reviews.md)): How to get faster PR reviews.

* **Getting Recent Builds** ([getting-builds.md](getting-builds.md)): How to get recent builds including the latest builds that pass CI.

* **Automated Tools** ([automation.md](automation.md)): Descriptions of the automation that is running on our GitHub repository.


## Setting up your dev environment, coding, and debugging

* **Development Guide** ([development.md](development.md)): Setting up your development environment.

* **Hunting flaky tests** ([flaky-tests.md](flaky-tests.md)): We have a goal of 99.9% flake-free tests.
Here's how to run your tests many times.

* **Logging Conventions** ([logging.md](logging.md)): Glog levels.

* **Profiling Kubernetes** ([profiling.md](profiling.md)): How to plug the Go pprof profiler into Kubernetes.

* **Instrumenting Kubernetes with a new metric**
([instrumentation.md](instrumentation.md)): How to add a new metric to the
Kubernetes code base.

* **Coding Conventions** ([coding-conventions.md](coding-conventions.md)):
Coding style advice for contributors.

* **Document Conventions** ([how-to-doc.md](how-to-doc.md)):
Document style advice for contributors.

* **Running a cluster locally** ([running-locally.md](running-locally.md)):
A fast and lightweight local cluster deployment for development.

## Developing against the Kubernetes API

* The [REST API documentation](../api-reference/README.md) explains the REST
API exposed by apiserver.

* **Annotations** ([docs/user-guide/annotations.md](../user-guide/annotations.md)): Attaching arbitrary non-identifying metadata to objects.
Programs that automate Kubernetes objects may use annotations to store small amounts of their state.

* **API Conventions** ([api-conventions.md](api-conventions.md)):
Defining the verbs and resources used in the Kubernetes API.

* **API Client Libraries** ([client-libraries.md](client-libraries.md)):
A list of existing client libraries, both supported and user-contributed.


## Writing plugins

* **Authentication Plugins** ([docs/admin/authentication.md](../admin/authentication.md)):
The current and planned states of authentication tokens.

* **Authorization Plugins** ([docs/admin/authorization.md](../admin/authorization.md)):
Authorization applies to all HTTP requests on the main apiserver port.
This doc explains the available authorization implementations.

* **Admission Control Plugins** ([admission_control](../design/admission_control.md))


## Building releases

See the [kubernetes/release](https://github.com/kubernetes/release) repository for details on creating releases and related tools and helper scripts.


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/design-proposals/admission_control.md:
--------------------------------------------------------------------------------

# Kubernetes Proposal - Admission Control

**Related PR:**

| Topic | Link |
| ----- | ---- |
| Separate validation from RESTStorage | http://issue.k8s.io/2977 |

## Background

High-level goals:
* Enable an easy-to-use mechanism to provide admission control to the cluster.
* Enable a provider to support multiple admission control strategies or author
their own.
* Ensure any rejected request can propagate errors back to the caller with the
reason the request failed.

Authorization via policy is focused on answering whether a user is authorized to
perform an action.

Admission Control is focused on whether the system will accept an authorized action.

Kubernetes may choose to dismiss an authorized action based on any number of
admission control strategies.

This proposal documents the basic design, and describes how any number of
admission control plug-ins could be injected.

Implementations of specific admission control strategies are handled in separate
documents.

## kube-apiserver

The kube-apiserver takes the following OPTIONAL arguments to enable admission
control:

| Option | Behavior |
| ------ | -------- |
| admission-control | Comma-delimited, ordered list of admission control choices to invoke prior to modifying or deleting an object. |
| admission-control-config-file | File with admission control configuration parameters to bootstrap the plug-in. |

An **AdmissionControl** plug-in is an implementation of the following interface:

```go
package admission

// Attributes is an interface used by a plug-in to make an admission decision
// on an individual request.
type Attributes interface {
	GetNamespace() string
	GetKind() string
	GetOperation() string
	GetObject() runtime.Object
}

// Interface is an abstract, pluggable interface for Admission Control decisions.
type Interface interface {
	// Admit makes an admission decision based on the request attributes.
	// An error is returned if it denies the request.
	Admit(a Attributes) (err error)
}
```

A **plug-in** must be compiled with the binary, and is registered as an
available option by providing a name and an implementation of admission.Interface.

```go
func init() {
	admission.RegisterPlugin("AlwaysDeny", func(client client.Interface, config io.Reader) (admission.Interface, error) { return NewAlwaysDeny(), nil })
}
```

A **plug-in** must be added to the imports in [plugins.go](../../cmd/kube-apiserver/app/plugins.go).

```go
// Admission policies
_ "k8s.io/kubernetes/plugin/pkg/admission/admit"
_ "k8s.io/kubernetes/plugin/pkg/admission/alwayspullimages"
_ "k8s.io/kubernetes/plugin/pkg/admission/antiaffinity"
...
_ ""
```

Invocation of admission control is handled by the **APIServer** and not
individual **RESTStorage** implementations.
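Putting the pieces above together, a complete (if deliberately useless) plug-in is small. The sketch below implements the `admission.Interface` from this proposal and supplies the `NewAlwaysDeny` constructor referenced in the registration snippet above; the package layout and import paths are illustrative assumptions, not shipped code.

```go
package alwaysdeny

import (
	"errors"
	"io"

	// Import paths are assumptions for this sketch; the real packages may differ.
	"k8s.io/kubernetes/pkg/admission"
	client "k8s.io/kubernetes/pkg/client/unversioned"
)

// alwaysDeny is a trivial admission plug-in that rejects every request.
type alwaysDeny struct{}

// Admit returns a non-nil error unconditionally, which denies the request.
func (alwaysDeny) Admit(a admission.Attributes) error {
	return errors.New("admission denied: AlwaysDeny rejects all requests")
}

// NewAlwaysDeny constructs the plug-in referenced by the registration above.
func NewAlwaysDeny() admission.Interface {
	return alwaysDeny{}
}

func init() {
	// Register under the name users pass to the admission-control flag.
	admission.RegisterPlugin("AlwaysDeny", func(c client.Interface, config io.Reader) (admission.Interface, error) {
		return NewAlwaysDeny(), nil
	})
}
```

Enabling such a plug-in then amounts to adding its package to the plugins.go imports and listing its name in the `admission-control` flag, as described above.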

This design assumes that **Issue 297** is adopted, and as a consequence, the
general framework of the APIServer request/response flow will ensure the
following:

1. Incoming request
2. Authenticate user
3. Authorize user
4. If operation=create|update|delete|connect, then admission.Admit(requestAttributes)
   - invoke each admission.Interface object in sequence
5. Case on the operation:
   - If operation=create|update, then validate(object) and persist
   - If operation=delete, delete the object
   - If operation=connect, exec

If at any step there is an error, the request is canceled.



[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control.md?pixel)]()

--------------------------------------------------------------------------------
/community/fixit201606.md:
--------------------------------------------------------------------------------

# Fixit Event June 2016

Google runs internal fixit weeks. During these weeks the teams set aside all critical deadlines, showstopper bugs, regular development -- everything -- to pull together to achieve a common goal. And we invite the Kubernetes community to join the June 2016 fixit.

Please take a look at anything that is [kind/flake](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aopen%20label%3Akind%2Fflake%20) or, with our 1.4 focus on "mean time to dopamine" for our users, help [team/ux](https://github.com/kubernetes/kubernetes/labels/team%2Fux) or spend time triaging, de-duplicating, or closing issues that were [opened before 20160101](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=created%3A%3C2016-01-01%20state%3Aopen%20) or check out the [issues in our docs](https://github.com/kubernetes/kubernetes.github.io/issues) repository. All of these avenues will help Kubernetes improve the user and developer experiences with the project.

Another important piece of fixit culture is rewards. **Everyone who contributes (as measured by engagements on GitHub) between 6/27 and 7/1 will receive Kubernetes stickers and all merged PRs that are contributed during the fixit will receive a Kubernetes embroidered patch.**

# But, wait, there's more!

Since improvement isn't only about code and issues, we'd like to also help grow our community skills. And, to that end, below is a schedule of some interesting content and community events.

The community meeting calendar is [available as an iCal](https://calendar.google.com/calendar/ical/cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com/public/basic.ics)
to subscribe to (simply copy and paste the URL into any calendar
product that supports the iCal format) or [html to view](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles).
All sessions listed below are listed on the community calendar and
will be broadcast on the [community meeting URL](https://zoom.us/my/kubernetescommunity).

## Tuesday 05:00 PT

**Office Hours - Filip Grzadkowski [Recording](https://www.youtube.com/watch?v=nKHYnvTr7LE)**

K8s developers to hold office hours over VC where we ask members of the community to bring questions about how the system works.

## Thursday 10:00 PT

**Community Meeting - Usual Suspects [Recording](https://www.youtube.com/watch?v=gv3yMQ_F4_k)**

The usual get-together full of demos, discussions, and yummy yummy information.

## Thursday 11:00 PT

**Office Hours - Vishnu Kannan [Recording](https://www.youtube.com/watch?v=v_WI4P1ZEEQ)**

K8s developers to hold office hours over VC where we ask members of the community to bring questions about how the system works.

## Friday 10:00 PT

**1.3 Release Community Postmortem - Jason Singer Dumars [Recording](https://youtu.be/kqKW7QLlwAA)**

Following the release of 1.3, this is an opportunity to share what went well and what went poorly, and to discuss how we want to improve the development and release of 1.4.

## To be rescheduled.

**A live debugging session. -- Janet Kuo**

It's possible to learn so much while watching others debug. One of our team members will debug a problem during a video session. They will show the community tricks and informal techniques required to solve problems live.

# Metrics

For those of you who are tracking our progress...

[Issues and PRs closed >2016-06-27](https://github.com/search?utf8=%E2%9C%93&q=org%3Akubernetes+closed%3A%3E2016-06-27+&type=Issues&ref=searchresults)

[Issues created and still open >2016-06-27](https://github.com/search?utf8=%E2%9C%93&q=org%3Akubernetes+created%3A%3E2016-06-27+is%3Aopen&type=Issues&ref=searchresults)

Alternately, [counting only closed issues](https://github.com/search?utf8=%E2%9C%93&q=org%3Akubernetes+closed%3A%3E2016-06-27+is%3Aissue&type=Issues&ref=searchresults)

--------------------------------------------------------------------------------
/sig-storage/contributing.md:
--------------------------------------------------------------------------------

### Ramping up on Kubernetes Storage
For folks that prefer reading the docs first, we recommend reading our Storage Docs:
- [The Persistent Volume Framework](http://kubernetes.io/docs/user-guide/persistent-volumes/)
- [The new Dynamic Provisioning Proposal](https://github.com/pmorie/kubernetes/blob/7aa61dd0ff3908784acb4fa300713f02e62119af/docs/proposals/volume-provisioning.md) and [implementation](https://github.com/kubernetes/kubernetes/pull/29006)

For folks that prefer a video overview, we recommend watching the following videos:
- [The state of state](https://www.youtube.com/watch?v=jsTQ24CLRhI&index=6&list=PLosInM-8doqcBy3BirmLM4S_pmox6qTw3)
- [Kubernetes Storage 101](https://www.youtube.com/watch?v=ZqTHe6Xj0Ek&list=PLosInM-8doqcBy3BirmLM4S_pmox6qTw3&index=38)
- [Storage overview to SIG Apps](https://www.youtube.com/watch?v=DrLGxkFdDNc&feature=youtu.be&t=11m19s)

Keep in mind that the video overviews reflect the state of the art at the time they were created. In Kubernetes we try very hard to maintain backwards compatibility, but Kubernetes is a fast-moving project and we do add features going forward; attending the Storage SIG meetings and following the Storage SIG Google group are both good ways of continually staying up to speed.

### How to help

We love having folks help in any capacity!
We recommend you start by reading the overall [Kubernetes contributors guide](https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/blob/master/CONTRIBUTING.md)

### Helping with Features
If you have a feature idea, please submit a feature proposal PR first and put it on the [Storage SIG Meeting Agenda](https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.bag869lp4lyz).
Our PR review bandwidth is fairly small; as such, we strongly recommend that you do not start writing the implementation before you've
discussed the feature with the community. This helps the community understand what you're trying to do with the proposal and helps the
community and you work through the approach until there is consensus. The community then will also be able to communicate to you how
soon they will be able to review your proposal PR, to set expectations. However, generally speaking, once your proposal PR is merged,
your implementation PR review and merge should go fairly quickly as the review is focused on the implementation quality and not
what you are proposing. We are really trying to improve our test coverage and documentation, so please include functional tests, e2e tests
and documentation in your implementation PR.

### Helping with Issues
A great way to get involved is to pick an issue and help address it. We would love help here. Storage-related issues are [listed here](https://github.com/kubernetes/kubernetes/labels/area%2Fstorage).

### Adding support for a new storage platform in Kubernetes
For folks looking to add support for a new storage platform in Kubernetes, you have several options:
- Write an in-tree volume plugin or provisioner: You can contribute a new in-tree volume plugin or provisioner that gets built and ships with Kubernetes, for use within the Persistent Volume Framework.
[See the Ceph RBD volume plugin example](https://github.com/kubernetes/kubernetes/tree/master/pkg/volume/rbd) or [the AWS Provisioner example](https://github.com/kubernetes/kubernetes/pull/29006)
- Write a FlexVolume plugin: This is an out-of-tree volume plugin which you develop and build separately outside of Kubernetes.
You then install the plugin on every Kubernetes host within your cluster and then [configure the plugin in Kubernetes as a FlexVolume](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/flexvolume)
- Write a Provisioner Controller: You can write a separate controller that watches for pending claims with a specific selector label on them.
Once an appropriate claim is discovered, the controller then provisions the appropriate storage intended for the claim and creates a corresponding
persistent volume for the claim that includes the same label used in the original claim selector. This will ensure that the PV for the new
storage provisioned gets bound to the original claim.

--------------------------------------------------------------------------------
/contributors/devel/community-expectations.md:
--------------------------------------------------------------------------------

## Community Expectations

Kubernetes is a community project. Consequently, it is wholly dependent on
its community to provide a productive, friendly and collaborative environment.

The first and foremost goal of the Kubernetes community is to develop orchestration
technology that radically simplifies the process of creating reliable
distributed systems. However, a second, equally important goal is the creation
of a community that fosters easy, agile development of such orchestration
systems.

We therefore describe the expectations for
members of the Kubernetes community. This document is intended to be a living one
that evolves as the community evolves via the same PR and code review process
that shapes the rest of the project. It currently covers the expectations
of conduct that govern all members of the community as well as the expectations
around code review that govern all active contributors to Kubernetes.

### Code of Conduct

The most important expectation of the Kubernetes community is that all members
abide by the Kubernetes [community code of conduct](../../code-of-conduct.md).
Only by respecting each other can we develop a productive, collaborative
community.

### Code review

As a community we believe in the [value of code review for all contributions](collab.md).
Code review increases both the quality and readability of our codebase, which
in turn produces high-quality software.

However, the code review process can also introduce latency for contributors
and additional work for reviewers that can frustrate both parties.

Consequently, as a community we expect that all active participants in the
community will also be active reviewers.

We ask that active contributors to the project participate in the code review process
in areas where that contributor has expertise. Active
contributors are considered to be anyone who meets any of the following criteria:
* Sent more than two pull requests (PRs) in the previous month, or more
than 20 PRs in the previous year.
* Filed more than three issues in the previous month, or more than 30 issues in
the previous 12 months.
* Commented on more than five pull requests in the previous month, or
more than 50 pull requests in the previous 12 months.
* Marked any PR as LGTM in the previous month.
* Have *collaborator* permissions in the Kubernetes GitHub project.

In addition to these community expectations, any community member who wants to
be an active reviewer can also add their name to an *active reviewer* file
(location tbd) which will make them an active reviewer for as long as they
are included in the file.

#### Expectations of reviewers: Review comments

Because reviewers are often the first points of contact for new members of
the community, and can significantly impact the first impression of the
Kubernetes community, reviewers are especially important in shaping the
Kubernetes community. Reviewers are highly encouraged to review the
[code of conduct](../../code-of-conduct.md) and are strongly encouraged to go above
and beyond the code of conduct to promote a collaborative, respectful
Kubernetes community.

#### Expectations of reviewers: Review latency

Reviewers are expected to respond in a timely fashion to PRs that are assigned
to them. Reviewers are expected to respond to *active* PRs with reasonable
latency, and if reviewers fail to respond, those PRs may be assigned to other
reviewers.

*Active* PRs are considered those which have a proper CLA (`cla:yes`) label
and do not need a rebase to be merged. PRs that do not have a proper CLA or
require a rebase are not considered active PRs.

## Thanks

Many thanks in advance to everyone who contributes their time and effort to
making Kubernetes both a successful system and a successful community.
The strength of our software shines in the strengths of each individual
community member. Thanks!




[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/community-expectations.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/devel/on-call-user-support.md:
--------------------------------------------------------------------------------

## Kubernetes "User Support" Rotation

### Traffic sources and responsibilities

* [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and
[ServerFault](http://serverfault.com/questions/tagged/google-kubernetes):
Respond to any thread that has no responses and is more than 6 hours old (over
time we will lengthen this timeout to allow community responses). If you are not
equipped to respond, it is your job to redirect to someone who can.

  * [Query for unanswered Kubernetes StackOverflow questions](http://stackoverflow.com/search?q=%5Bkubernetes%5D+answers%3A0)
  * [Query for unanswered Kubernetes ServerFault questions](http://serverfault.com/questions/tagged/google-kubernetes?sort=unanswered&pageSize=15)
  * Direct poorly formulated questions to [stackoverflow's tips about how to ask](http://stackoverflow.com/help/how-to-ask)
  * Direct off-topic questions to [stackoverflow's policy](http://stackoverflow.com/help/on-topic)

* [Slack](https://kubernetes.slack.com) ([registration](http://slack.k8s.io)):
Your job is to be on Slack, watching for questions and answering or redirecting
as needed. Also check out the [Slack Archive](http://kubernetes.slackarchive.io/).

* [Email/Groups](https://groups.google.com/forum/#!forum/google-containers):
Respond to any thread that has no responses and is more than 6 hours old (over
time we will lengthen this timeout to allow community responses). If you are not
equipped to respond, it is your job to redirect to someone who can.

* [Legacy] [IRC](irc://irc.freenode.net/#google-containers)
(irc.freenode.net #google-containers): watch IRC for questions and try to
redirect users to Slack. Also check out the
[IRC logs](https://botbot.me/freenode/google-containers/).

In general, try to direct support questions to:

1. Documentation, such as the [user guide](../user-guide/README.md) and
[troubleshooting guide](http://kubernetes.io/docs/troubleshooting/)

2. Stackoverflow

If you see questions on a forum other than Stackoverflow, try to redirect them
to Stackoverflow. Example response:

```code
Please re-post your question to [stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes).

We are trying to consolidate the channels to which questions for help/support
are posted so that we can improve our efficiency in responding to your requests,
and to make it easier for you to find answers to frequently asked questions and
how to address common use cases.

We regularly see messages posted in multiple forums, with the full response
thread only in one place or, worse, spread across multiple forums. Also, the
large volume of support issues on GitHub is making it difficult for us to use
issues to identify real bugs.

The Kubernetes team scans stackoverflow on a regular basis, and will try to
ensure your questions don't go unanswered.

Before posting a new question, please search stackoverflow for answers to
similar questions, and also familiarize yourself with:

* [user guide](http://kubernetes.io/docs/user-guide/)
* [troubleshooting guide](http://kubernetes.io/docs/troubleshooting/)

Again, thanks for using Kubernetes.

The Kubernetes Team
```

If you answer a question (in any of the above forums) that you think might be
useful for someone else in the future, *please add it to one of the FAQs in the
wiki*:

* [User FAQ](https://github.com/kubernetes/kubernetes/wiki/User-FAQ)
* [Developer FAQ](https://github.com/kubernetes/kubernetes/wiki/Developer-FAQ)
* [Debugging FAQ](https://github.com/kubernetes/kubernetes/wiki/Debugging-FAQ).

Getting it into the FAQ is more important than polish. Please indicate the date
it was added, so people can judge the likelihood that it is out-of-date (and
please correct any FAQ entries that you see contain out-of-date information).

### Contact information

[@k8s-support-oncall](https://github.com/k8s-support-oncall) will reach the
current person on call.




[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-user-support.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/devel/updating-docs-for-feature-changes.md:
--------------------------------------------------------------------------------

# How to update docs for new Kubernetes features

This document describes things to consider when updating Kubernetes docs for new features or changes to existing features (including removing features).

## Who should read this doc?

Anyone making user-facing changes to Kubernetes. This is especially important for API changes or anything impacting the getting-started experience.

## What docs changes are needed when adding or updating a feature in Kubernetes?

### When making API changes

*e.g.
adding Deployments*
* Always make sure docs for downstream effects are updated *(StatefulSet -> PVC, Deployment -> ReplicationController)*
* Add or update the corresponding *[Glossary](http://kubernetes.io/docs/reference/)* item
* Verify the guides / walkthroughs do not require any changes:
  * **If your change will be recommended over the approaches shown in these guides, then they must be updated to reflect your change**
  * [Hello Node](http://kubernetes.io/docs/hellonode/)
  * [K8s101](http://kubernetes.io/docs/user-guide/walkthrough/)
  * [K8S201](http://kubernetes.io/docs/user-guide/walkthrough/k8s201/)
  * [Guest-book](https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook)
  * [Thorough-walkthrough](http://kubernetes.io/docs/user-guide/)
* Verify the [landing page examples](http://kubernetes.io/docs/samples/) do not require any changes (those under "Recently updated samples")
  * **If your change will be recommended over the approaches shown in the "Updated" examples, then they must be updated to reflect your change**
  * If you are aware that your change will be recommended over the approaches shown in non-"Updated" examples, create an Issue
* Verify the collection of docs under the "Guides" section does not require updates (may need to use grep for this until our docs are more organized)

### When making Tools changes

*e.g. updating kube-dash or kubectl*
* If changing kubectl, verify the guides / walkthroughs do not require any changes:
  * **If your change will be recommended over the approaches shown in these guides, then they must be updated to reflect your change**
  * [Hello Node](http://kubernetes.io/docs/hellonode/)
  * [K8s101](http://kubernetes.io/docs/user-guide/walkthrough/)
  * [K8S201](http://kubernetes.io/docs/user-guide/walkthrough/k8s201/)
  * [Guest-book](https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/guestbook)
  * [Thorough-walkthrough](http://kubernetes.io/docs/user-guide/)
* If updating an existing tool
  * Search for any docs about the tool and update them
* If adding a new tool for end users
  * Add a new page under [Guides](http://kubernetes.io/docs/)
* **If removing a tool (kube-ui), make sure documentation that references it is updated appropriately!**

### When making cluster setup changes

*e.g. adding Multi-AZ support*
* Update the relevant [Administering Clusters](http://kubernetes.io/docs/) pages

### When making Kubernetes binary changes

*e.g. adding a flag, changing Pod GC behavior, etc*
* Add or update a page under [Configuring Kubernetes](http://kubernetes.io/docs/)

## Where do the docs live?

1. Most external user-facing docs live in the [kubernetes/docs](https://github.com/kubernetes/kubernetes.github.io) repo
   * Also see the *[general instructions](http://kubernetes.io/editdocs/)* for making changes to the docs website
2. Internal design and development docs live in the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repo

## Who should help review docs changes?

* cc *@kubernetes/docs*
* Changes to the [kubernetes/docs](https://github.com/kubernetes/kubernetes.github.io) repo must have both a Technical Review and a Docs Review

## Tips for writing new docs

* Try to keep new docs small and focused
* Document prerequisites (if they exist)
* Document what concepts will be covered in the document
* Include screenshots or pictures in documents for GUIs
* *TODO once we have a standard widget set we are happy with* - include diagrams to help describe complex ideas (not required yet)



[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/updating-docs-for-feature-changes.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/devel/kubelet-cri-networking.md:
--------------------------------------------------------------------------------

WARNING WARNING WARNING WARNING WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree

If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.

Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).

# Container Runtime Interface (CRI) Networking Specifications

## Introduction
[Container Runtime Interface (CRI)](container-runtime-interface.md) is
an ongoing project to allow container
runtimes to integrate with Kubernetes via a newly-defined API. This document
specifies the network requirements for the container runtime
interface (CRI). CRI networking requirements expand upon Kubernetes pod
networking requirements. This document does not specify requirements
from upper layers of the Kubernetes network stack, such as `Service`. More
background on Kubernetes networking can be found
[here](http://kubernetes.io/docs/admin/networking/)

## Requirements
1. Kubelet expects the runtime shim to manage a pod's network life cycle. Pod
networking should be handled accordingly along with pod sandbox operations.
  * `RunPodSandbox` must set up the pod's network. This includes, but is not limited
to, allocating a pod IP, configuring the pod's network interfaces and default
network route. Kubelet expects the pod sandbox to have an IP which is
routable within the k8s cluster, if `RunPodSandbox` returns successfully.
`RunPodSandbox` must return an error if it fails to set up the pod's network.
If the pod's network has already been set up, `RunPodSandbox` must skip
network setup and proceed.
  * `StopPodSandbox` must tear down the pod's network. The runtime shim
must return an error on network tear-down failure. If the pod's network has
already been torn down, `StopPodSandbox` must skip network tear down and proceed.
  * `RemovePodSandbox` may tear down the pod's network, if the networking has
not been torn down already. `RemovePodSandbox` must return an error on
network tear-down failure.
  * Response from `PodSandboxStatus` must include the pod sandbox network status.
The runtime shim must return an empty network status if it failed
to construct a network status.

2. User-supplied pod networking configurations, which are NOT directly
exposed by the Kubernetes API, should be handled directly by runtime
shims. For instance, `hairpin-mode`, `cni-bin-dir`, `cni-conf-dir`, `network-plugin`,
`network-plugin-mtu` and `non-masquerade-cidr`. Kubelet will no longer handle
these configurations after the transition to CRI is complete.
3. Network configurations that are exposed through the Kubernetes API
are communicated to the runtime shim through the `UpdateRuntimeConfig`
interface, e.g. `podCIDR`. For each runtime and network implementation,
some configs may not be applicable. The runtime shim may handle or ignore
network configuration updates from the `UpdateRuntimeConfig` interface.

## Extensibility
* Kubelet is oblivious to how the runtime shim manages networking, i.e. the
runtime shim is free to use [CNI](https://github.com/containernetworking/cni),
[CNM](https://github.com/docker/libnetwork/blob/master/docs/design.md) or
any other implementation as long as the CRI networking requirements and
k8s networking requirements are satisfied.
* Runtime shims have full visibility into pod networking configurations.
* As more network features arrive, CRI will evolve.

## Related Issues
* Kubelet network plugin for client/server container runtimes [#28667](https://github.com/kubernetes/kubernetes/issues/28667)
* CRI networking umbrella issue [#37316](https://github.com/kubernetes/kubernetes/issues/37316)

--------------------------------------------------------------------------------
/sig-apps/agenda.md:
--------------------------------------------------------------------------------

# Agenda
~~Join us for sig-apps on Wednesdays at 9am PDT on Zoom: zoom.us/my/sig.apps~~

SIG-Apps meetings are now on **Mondays at 9am PDT** on Zoom: https://zoom.us/my/sig.apps

_Note, the [minutes and agenda have moved to Google Docs](https://docs.google.com/document/d/1LZLBGW2wRDwAfdBNHJjFfk9CFoyZPcIYGWU7R1PQ3ng/edit#)._

## August 17, 2016
* Intro / Agenda
* Brian Hardock of the Deis Helm team will demo [Helm](https://github.com/kubernetes/helm).
  * Helm is a tool for creating and managing Kubernetes native applications.
* Adnan Abdulhussein from Bitnami will walk us through example helm charts for Kubernetes native applications

## August 10, 2016
* Intro / Agenda
* PoC demo and discussion of [AppController](https://github.com/kubernetes/kubernetes/issues/29453)
* Working discussion of Pet Set beta steps led by [Clayton Coleman](https://twitter.com/smarterclayton) and [Prashanth B.](https://github.com/bprashanth)
* Demo of [nanokube](https://github.com/metral/nanokube) by [Mike Metral](https://twitter.com/mikemetral)

## August 3, 2016 [[notes](minutes/2016-08-03.md)]
* Intro / Overview of Agenda
* [Ryan Jarvinen](https://twitter.com/ryanj?lang=en) will talk about "Defining "Applications" for Kubernetes (and OpenShift)"
  * In a world of distributed architecture, the term "Application" can be a difficult thing to define. RyanJ will provide a few examples of how to package and distribute applications for Kubernetes.
* Open discussion / questions

## July 27, 2016 [[notes](minutes/2016-07-27.md)]
* Intro
* [Gerred Dillon](https://twitter.com/justicefries) will do a demo of [Deis](https://deis.com/)
* Group discussion on Deis
* Matt Farina will give an update on results of the sig-apps survey
* Open discussion around sig-apps survey

## July 20, 2016 [[notes & video](minutes/2016-07-20.md)]
* Intro
* Janet Kuo, Software Engineer at Google working on Kubernetes, will do a demo of deployment features in Kubernetes and give an overview of Deployments, Replica Sets, and Replication Controllers. (*needs confirmation*)
* Saad Ali, Software Engineer at Google working on Kubernetes, will do a demo and discussion around Volume features.

## July 13, 2016 [[notes & video](minutes/2016-07-13.md)]
* Intro
* Christian from Jetstack will do a demo of Kube-Lego
* ~~Mackenzie Burnett of Redspread will do a demo of [Spread](https://github.com/redspread/spread)~~
* Matt Farina will update us on the analysis of the SIG-Apps survey

## July 6, 2016 [[notes & video](minutes/2016-07-06.md)]
* Intro
* [Chris Love](https://twitter.com/chrislovecnm) will be talking about deploying Cassandra on Kubernetes 1.3 at scale.
  * Chris is a Senior DevOps Open Source Consultant for Datapipe, and head of DevOps RnD for Apollobit.
  * His demo application is located at https://github.com/k8s-for-greeks/gpmr.
* Open discussion / questions

## June 29, 2016 [[notes & video](minutes/2016-06-29.md)]
* Intro
* Bart Spaans, author of [KubeFuse](https://github.com/opencredo/kubefuse/), will do a demo and discussion.
* The DRUD team from [New Media Denver](https://www.newmediadenver.com/) will do a demo and discussion of how they use Kubernetes Jobs
* Follow up on SIG-Apps survey status

## June 22, 2016 [[notes](minutes/2016-06-22.md)]
* Intro
* Antoine Legrand will demo [KPM](https://github.com/kubespray/kpm)
* Open discussion / questions

## June 15, 2016 [[notes & video](minutes/2016-06-15.md)]
* Intro
* Eric Chiang of CoreOS will demo and discuss the freshly merged role based access control features in 1.3
* Ben Hall, founder of Katacoda, will talk about the Kubernetes scenarios on Katacoda.

## June 8, 2016 [[notes & video](minutes/2016-06-08.md)]
* Intro
* Adnan Abdulhussein from Bitnami will demo the Stacksmith workflow
* Follow up on PetSet feedback
* Follow up on SIG-Apps survey status

## May 31, 2016
* Canceled in honor of a short week

## May 25, 2016 [[notes & video](https://github.com/kubernetes/community/blob/master/sig-apps/minutes/2016-05-25.md)]
* Intro
* Mike Metral of Rackspace will demo how to recursively process configuration files with the -R flag

## May 18, 2016 [[notes](https://github.com/kubernetes/community/blob/master/sig-apps/minutes/2016-05-18.md)]
* Intro
* Discussion on the future of SIG-Apps
* Prashanth B. of Google will demo PetSet

--------------------------------------------------------------------------------
/community/developer-summit-2016/application_service_definition_notes.md:
--------------------------------------------------------------------------------

# Service/Application Definition

We think we need to help out developers with how we organize our services, how we define them nicely, and how we deploy them on our orchestrator of choice. Writing the Kube files is a steep learning curve. So can we have something which is a little bit easier?

Helm solves one purpose for this.

Helm contrib: one of the things folks ask us is they start from a dockerfile, and they want to have a workflow where they go from dockerfile-->imagebuild-->registry-->resource def.

There are different ways to package applications. There's the potential for a lot of fragmentation in multi-pod application definitions. Can we create standards here?

We want to build and generate manifests with one tool. We want "fun in five", that is, have it up and running in five minutes or less.

Another issue is testing mode; currently production-quality Helm charts don't really work on minikube. There's some issues around this which we know about. We need dummy PVCs, LoadBalancer, etc. Also DNS and Ingress.

We need the 80% case; Fabric8 is a good example of this. We need a good set of boundary conditions so that the new definition doesn't get bigger than the Kube implementation. Affinity/placement is a good example of the "other 20%".

We also need to look at how to get developer feedback on this so that we're building what they need. Pradeepto did a comparison of Kompose vs. Docker Compose for simplicity/usability.

One of the things we're discussing is the Kompose API.
We want to get rid of this and supply something which people can use directly with Kubernetes. A bunch of shops only have developers. Someone asked, though, what's so complicated with Kube definitions. Have we identified what gives people trouble with this? We push too many concepts on developers too quickly. We want some high-level abstract types which represent the 95% use case. Then we could decompose these to the real types.

What's the gap between compose files and the goal? As an example, say you want to run a webserver pod. You have to deal with ingress, and service, and replication controller, and a bunch of other things. What's the equivalent of "docker run", which is easy to get? The critical thing is how fast you can learn it.

We also need to have reversibility so that if you use compose you don't have to edit the kube config after deployment; you can still use the simple concepts. The context of the chart needs to not be lost.

There was discussion of templating applications. One person argued that it's really a type system. Erin suggested that it's more like a personal template, like the car seat configuration.

There's a need to let developers work on "their machine" using the same spec. Looking through docker-compose, it's about what developers want, not what kubernetes wants. This needs to focus on what developers know, not the kube objects.

Someone argued that if we use deployments it's really not that complex. We probably use too much complexity in our examples. But if we want to do better than docker-compose, what does it look like? Having difficulty imagining what that is.

Maybe the best approach is to create a list of what we need for "what is my app" and compare it with current deployment files.

There was a lot of discussion of what this looks like.

Is this different from what the PaaSes already do? It's not that different; we want something to work with core kubernetes, and also PaaSes are opinionated in different ways.

Being able to view an application as a single unifying concept is a major desire. Want to click "my app" and see all of the objects associated with it. It would be an overlay on top of Kubernetes, not something in core.

One pending feature is that you can't look up different types of controllers in the API; that's going to be fixed. Another one is that we can't trace the dependencies; helm doesn't label all of the components deployed with the app.

Need to identify things which are missing in core kubernetes, if there are any.

## Action Items:

* Reduce the verbosity of injecting configmaps. We want to simplify the main kubernetes API. For example, there should be a way to map all variables to ENV as one statement.
* Document where things are hard to understand with deployments.
* Document where things don't work with minikube and deployments.
* Document the path from minecraft.jar to running it on a kubernetes cluster.

--------------------------------------------------------------------------------
/community/developer-summit-2016/Kubernetes_Dev_Summit.md:
--------------------------------------------------------------------------------

# Kubernetes Dev Summit

## Edit - Event Location
The event is on the 4th Floor of the *Union Street Tower* of the Sheraton Seattle Hotel.

# About the Event

The Kubernetes Developers' Summit provides an avenue for Kubernetes developers to connect face to face and mindshare about future community development and community governance endeavors.

In some sense, the summit is a real-life extension of the community meetings and SIG meetings.

## Event Format

The Dev Summit is a "loosely-structured [unconference][uncf]". Rather than speakers and presentations, we will have moderators/facilitators and discussion topics, alongside all-day completely unstructured hacking.

The discussion sessions will be in an open fishbowl format — rings of chairs, with inner rings driving the discussion — where anyone can contribute. The various sessions will be proposed and voted on by the community in the weeks leading up to the event. This allows the community to motivate the events of the day without dedicating precious day-of time to choosing the sessions and making a schedule.

There will be 3 rooms dedicated to these sessions running in parallel all day. Each session should last between 45 minutes and an hour.

Then, there will be 2 smaller rooms for hacking / unstructured discussion all day.

#### Who Should Go?

The target audience is the Kubernetes developer community. The group will be relatively small (~120-150 attendees), to improve communication and facilitate easier decision-making. The majority of the attendees will be selected from key company team and SIG leaders, power-users, and the most active contributors. An additional pool of tickets will be awarded via lottery. **Any interested party** should enter the lottery via [this form][lotfrm]. Invitees will receive an invitation on or before October 12th. RSVP information will be available in the invitation email. Tickets are not transferable.

Please note that this Summit is not the right environment for people to *start* learning about the Kubernetes project. There are plenty of [meetups][mtp] organized by global user groups where one can get involved initially.

#### Call for Proposals

Proposals for discussion topics can be submitted through [this form][propfrm] by September 30, 23:59 PT. If you propose a session topic, please be prepared to attend and facilitate the session if it gets chosen. Other members will help with moderating, either as volunteer co-facilitators or as members of the larger discussion group.

Suggestions for session topic themes:

* Hashing out technical issues
* Long term component / SIG planning

In early October, proposal topics will be posted to the kubernetes-dev mailing list and voted on via [CIVS][civs], the Condorcet Internet Voting Service. A schedule will be made from the winning topics with some editorial license, and the schedule will be announced to the group at least a week before the event.

## When & Where?

The Dev Summit will follow [Kubecon][kbc] on November 10th, 2016. Fortunately for those who attend Kubecon, the Dev Summit will be at the same venue, [the Sheraton Seattle Hotel][sher]. As of now, the day's activities should run from breakfast being served at 8 AM to closing remarks ending around 3:30 PM, with an external happy hour to follow.

## Desired outcomes

* Generate notes from the sessions to feed the project's documentation and knowledge base, and also to keep non-attendees plugged in
* Make (and document) recommendations and decisions for the near-term and mid-term future of the project
* Come up with upcoming action items, as well as leaders for those action items, for the various topics that we discuss

[//]: # (Reference Links)
[uncf]:
[mtp]:
[lotfrm]:
[propfrm]:
[civs]:
[kbc]:
[sher]:

--------------------------------------------------------------------------------
/contributors/design-proposals/kubelet-auth.md:
--------------------------------------------------------------------------------

# Kubelet Authentication / Authorization

Author: Jordan Liggitt (jliggitt@redhat.com)

## Overview

The kubelet exposes endpoints which give access to data of varying sensitivity, and allow performing operations of varying power on the node and within containers. There is no built-in way to limit or subdivide access to those endpoints, so deployers must secure the kubelet API using external, ad-hoc methods.

This document proposes a method for authenticating and authorizing access to the kubelet API, using interfaces and methods that complement the existing authentication and authorization used by the API server.

## Preliminaries

This proposal assumes the existence of:

* a functioning API server
* the SubjectAccessReview and TokenReview APIs

It also assumes each node is additionally provisioned with the following information:

1. Location of the API server
2. Any CA certificates necessary to trust the API server's TLS certificate
3. Client credentials authorized to make SubjectAccessReview and TokenReview API calls

## API Changes

None

## Kubelet Authentication

Enable starting the kubelet with one or more of the following authentication methods:

* x509 client certificate
* bearer token
* anonymous (current default)

For backwards compatibility, the default is to enable anonymous authentication.

### x509 client certificate

Add a new `--client-ca-file=[file]` option to the kubelet. When started with this option, the kubelet authenticates incoming requests using x509 client certificates, validated against the root certificates in the provided bundle. The kubelet will reuse the x509 authenticator already used by the API server.

The master API server can already be started with `--kubelet-client-certificate` and `--kubelet-client-key` options in order to make authenticated requests to the kubelet.

### Bearer token

Add a new `--authentication-token-webhook=[true|false]` option to the kubelet. When true, the kubelet authenticates incoming requests with bearer tokens by making `TokenReview` API calls to the API server.

The kubelet will reuse the webhook authenticator already used by the API server, configured to call the API server using the connection information already provided to the kubelet.

To improve performance of repeated requests with the same bearer token, the `--authentication-token-webhook-cache-ttl` option supported by the API server would be supported.
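
To illustrate what the cache TTL buys, here is a minimal sketch of TTL caching for token-review results, using only the Go standard library. The `ReviewResult` type and the `review` callback are assumptions standing in for the real TokenReview client; this is not kubelet code.

```go
// A minimal sketch of TTL caching for bearer-token authentication results.
// ReviewResult and the review callback are hypothetical stand-ins for the
// real TokenReview API client; only the caching pattern is the point here.
package kubeletauth

import (
	"sync"
	"time"
)

// ReviewResult is a placeholder for the outcome of a TokenReview call.
type ReviewResult struct {
	Authenticated bool
	Username      string
}

type cacheEntry struct {
	result  ReviewResult
	expires time.Time
}

// TokenCache remembers TokenReview outcomes for a fixed TTL, mirroring the
// behavior enabled by --authentication-token-webhook-cache-ttl.
type TokenCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]cacheEntry
	review  func(token string) ReviewResult // the uncached TokenReview round trip
}

// NewTokenCache wires a review callback to an empty cache.
func NewTokenCache(ttl time.Duration, review func(string) ReviewResult) *TokenCache {
	return &TokenCache{ttl: ttl, entries: map[string]cacheEntry{}, review: review}
}

func (c *TokenCache) Authenticate(token string) ReviewResult {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.entries[token]; ok && time.Now().Before(e.expires) {
		return e.result // cache hit: no round trip to the API server
	}
	r := c.review(token)
	c.entries[token] = cacheEntry{result: r, expires: time.Now().Add(c.ttl)}
	return r
}
```

A short TTL bounds how long a revoked token keeps working, while still absorbing bursts of requests that reuse the same credential.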

### Anonymous

Add a new `--anonymous-auth=[true|false]` option to the kubelet. When true, requests to the secure port that are not rejected by other configured authentication methods are treated as anonymous requests, and given a username of `system:anonymous` and a group of `system:unauthenticated`.

## Kubelet Authorization

Add a new `--authorization-mode` option to the kubelet, specifying one of the following modes:
* `Webhook`
* `AlwaysAllow` (current default)

For backwards compatibility, the authorization mode defaults to `AlwaysAllow`.

### Webhook

Webhook mode converts the request to authorization attributes, and makes a `SubjectAccessReview` API call to check if the authenticated subject is allowed to make a request with those attributes. This enables authorization policy to be centrally managed by the authorizer configured for the API server.

The kubelet will reuse the webhook authorizer already used by the API server, configured to call the API server using the connection information already provided to the kubelet.

To improve performance of repeated requests with the same authenticated subject and request attributes, the same webhook authorizer caching options supported by the API server would be supported:

* `--authorization-webhook-cache-authorized-ttl`
* `--authorization-webhook-cache-unauthorized-ttl`

### AlwaysAllow

This mode allows any authenticated request.

## Future Work

* Add support for CRL revocation for x509 client certificate authentication (http://issue.k8s.io/18982)


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/kubelet-auth.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/design-proposals/volume-ownership-management.md:
--------------------------------------------------------------------------------

## Volume plugins and idempotency

Currently, volume plugins have a `SetUp` method which is called in the context of a higher-level workflow within the kubelet which has externalized the problem of managing the ownership of volumes. This design has a number of drawbacks that can be mitigated by completely internalizing all concerns of volume setup behind the volume plugin `SetUp` method.

### Known issues with current externalized design

1. The ownership management is currently repeatedly applied, which breaks packages that require special permissions in order to work correctly
2. There is a gap between files being mounted/created by volume plugins and when their ownership is set correctly; race conditions exist around this
3. Solving the correct application of ownership management in an externalized model is difficult and makes it clear that a transaction boundary is being broken by the externalized design

### Additional issues with externalization

Fully externalizing any one concern of volumes is difficult for a number of reasons:

1. Many types of idempotence checks exist, and are used in a variety of combinations and orders
2. Workflow in the kubelet becomes much more complex to handle:
   1. composition of plugins
   2. correct timing of application of ownership management
   3. callback to volume plugins when we know the whole `SetUp` flow is complete and correct
   4. callback to touch sentinel files
   5. and so on
3. We want to support fully external volume plugins -- this would require complex orchestration / a chatty remote API

## Proposed implementation

Since all of the ownership information is known in advance of the call to the volume plugin `SetUp` method, we can easily internalize these concerns into the volume plugins and pass the ownership information to `SetUp`.

The volume `Builder` interface's `SetUp` method changes to accept the group that should own the volume. Plugins become responsible for ensuring that the correct group is applied. The volume `Attributes` struct can be modified to remove the `SupportsOwnershipManagement` field.

```go
package volume

type Builder interface {
	// other methods omitted

	// SetUp prepares and mounts/unpacks the volume to a self-determined
	// directory path and returns an error. The group ID that should own the volume
	// is passed as a parameter. Plugins may choose to ignore the group ID directive
	// in the event that they do not support it (example: NFS). A group ID of -1
	// indicates that the group ownership of the volume should not be modified by the plugin.
	//
	// SetUp will be called multiple times and should be idempotent.
	SetUp(gid int64) error
}
```

Each volume plugin will have to change to support the new `SetUp` signature. The existing ownership management code will be refactored into a library that volume plugins can use:

```go
package volume

func ManageOwnership(path string, fsGroup int64) error {
	// 1. recursive chown of path
	// 2. make path +setgid
}
```

The workflow from the Kubelet's perspective for handling volume setup and refresh becomes:

```go
// go-ish pseudocode
func mountExternalVolumes(pod) error {
	podVolumes := make(kubecontainer.VolumeMap)
	for i := range pod.Spec.Volumes {
		volSpec := &pod.Spec.Volumes[i]
		var fsGroup int64 = 0
		if pod.Spec.SecurityContext != nil &&
			pod.Spec.SecurityContext.FSGroup != nil {
			fsGroup = *pod.Spec.SecurityContext.FSGroup
		} else {
			fsGroup = -1
		}

		// Try to use a plugin for this volume.
		plugin := volume.NewSpecFromVolume(volSpec)
		builder, err := kl.newVolumeBuilderFromPlugins(plugin, pod)
		if err != nil {
			return err
		}
		if builder == nil {
			return errUnsupportedVolumeType
		}

		// Propagate setup failures; returning nil here would silently
		// swallow the error.
		if err := builder.SetUp(fsGroup); err != nil {
			return err
		}
	}

	return nil
}
```


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/volume-ownership-management.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/devel/adding-an-APIGroup.md:
--------------------------------------------------------------------------------

Adding an API Group
===============

This document includes the steps to add an API group. You may also want to take a look at PR [#16621](https://github.com/kubernetes/kubernetes/pull/16621) and PR [#13146](https://github.com/kubernetes/kubernetes/pull/13146), which add API groups.

Please also read about [API conventions](api-conventions.md) and [API changes](api_changes.md) before adding an API group.

### Your core group package:

We plan on improving the way the types are factored in the future; see [#16062](https://github.com/kubernetes/kubernetes/pull/16062) for the directions in which this might evolve.

1. Create a folder in pkg/apis to hold your group. Create types.go in pkg/apis/`<group>`/ and pkg/apis/`<group>`/`<version>`/ to define API objects in your group;

2. Create pkg/apis/`<group>`/{register.go, `<version>`/register.go} to register this group's API objects to the encoding/decoding scheme (e.g., [pkg/apis/authentication/register.go](../../pkg/apis/authentication/register.go) and [pkg/apis/authentication/v1beta1/register.go](../../pkg/apis/authentication/v1beta1/register.go));

3. Add a pkg/apis/`<group>`/install/install.go, which is responsible for adding the group to the `latest` package, so that other packages can access the group's meta through `latest.Group`. You probably only need to change the name of the group and version in the [example](../../pkg/apis/authentication/install/install.go). You need to import this `install` package in {pkg/master, pkg/client/unversioned}/import_known_versions.go if you want to make your group accessible to other packages in the kube-apiserver binary or in binaries that use the client package.

Steps 2 and 3 are mechanical; we plan to autogenerate them using the cmd/libs/go2idl/ tool.

### Scripts changes and auto-generated code:

1. Generate conversions and deep-copies:

   1. Add your "group/" or "group/version" into cmd/libs/go2idl/conversion-gen/main.go;
   2. Make sure your pkg/apis/`<group>`/`<version>` directory has a doc.go file with the comment `// +k8s:deepcopy-gen=package,register`, to catch the attention of our generation tools (see the example doc.go after this section);
   3. Make sure your `pkg/apis/<group>/<version>` directory has a doc.go file with the comment `// +k8s:conversion-gen=<internal-package>`, to catch the attention of our generation tools. For most APIs the only target you need is `k8s.io/kubernetes/pkg/apis/<group>` (your internal API);
   4. Make sure your `pkg/apis/<group>` and `pkg/apis/<group>/<version>` directories have a doc.go file with the comment `+groupName=<group>.k8s.io`, to correctly generate the DNS-suffixed group name;
   5. Run hack/update-all.sh.

2. Generate files for the Ugorji codec:

   1. Touch types.generated.go in pkg/apis/`<group>`{/, `<version>`};
   2. Run hack/update-codecgen.sh.

3. Generate protobuf objects:

   1. Add your group to the `Packages` field of `New()` in `cmd/libs/go2idl/go-to-protobuf/protobuf/cmd.go`;
   2. Run hack/update-generated-protobuf.sh.
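
For illustration, the generator comments above can all live in one doc.go. The group and version below (`mygroup`, `v1alpha1`) are placeholders for your own names, not a real API group:

```go
// A hypothetical pkg/apis/mygroup/v1alpha1/doc.go; the group and version
// names are placeholders. These are the generator tags described in the
// steps above.

// +k8s:deepcopy-gen=package,register
// +k8s:conversion-gen=k8s.io/kubernetes/pkg/apis/mygroup
// +groupName=mygroup.k8s.io

package v1alpha1
```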

### Client (optional):

We are overhauling pkg/client, so this section might be outdated; see [#15730](https://github.com/kubernetes/kubernetes/pull/15730) for how the client package might evolve. Currently, to add your group to the client package, you need to:

1. Create pkg/client/unversioned/`<group>`.go, define a group client interface and implement the client. You can take pkg/client/unversioned/extensions.go as a reference.

2. Add the group client interface to the `Interface` in pkg/client/unversioned/client.go and add a method to fetch the interface. Again, you can take how we add the Extensions group there as an example.

3. If you need to support the group in kubectl, you'll also need to modify pkg/kubectl/cmd/util/factory.go.

### Make the group/version selectable in unit tests (optional):

1. Add your group in pkg/api/testapi/testapi.go, then you can access the group in tests through testapi.`<group>`;

2. Add your "group/version" to `KUBE_TEST_API_VERSIONS` in hack/make-rules/test.sh and hack/make-rules/test-integration.sh

TODO: Add a troubleshooting section.




[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/adding-an-APIGroup.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/devel/godep.md:
--------------------------------------------------------------------------------

# Using godep to manage dependencies

This document is intended to show a way for managing `vendor/` tree dependencies in Kubernetes. If you are not planning on managing `vendor` dependencies, see [Godep dependency management](development.md#godep-dependency-management).

## Alternate GOPATH for installing and using godep

There are many ways to build and host Go binaries. Here is one way to get utilities like `godep` installed:

Create a new GOPATH just for your go tools and install godep:

```sh
export GOPATH=$HOME/go-tools
mkdir -p $GOPATH
go get -u github.com/tools/godep
```

Add $GOPATH/bin to your path. Typically you'd add this to your ~/.profile:

```sh
export GOPATH=$HOME/go-tools
export PATH=$PATH:$GOPATH/bin
```

## Using godep

Here's a quick walkthrough of one way to use godep to add or update a Kubernetes dependency in `vendor/`. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep).

1) Devote a directory to this endeavor:

_Devoting a separate directory is not strictly required, but it is helpful to separate dependency updates from other changes._

```sh
export KPATH=$HOME/code/kubernetes
mkdir -p $KPATH/src/k8s.io
cd $KPATH/src/k8s.io
git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git # assumes your fork is 'kubernetes'
# Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work.
```

2) Set up your GOPATH.

```sh
# This will *not* let your local builds see packages that exist elsewhere on your system.
export GOPATH=$KPATH
```

3) Populate your new GOPATH.

```sh
cd $KPATH/src/k8s.io/kubernetes
godep restore
```

4) Next, you can either add a new dependency or update an existing one.

To add a new dependency is simple (if a bit slow):

```sh
cd $KPATH/src/k8s.io/kubernetes
DEP=example.com/path/to/dependency
godep get $DEP/...
# Now change code in Kubernetes to use the dependency.
./hack/godep-save.sh
```

To update an existing dependency is a bit more complicated. Godep has an `update` command, but none of us can figure out how to actually make it work. Instead, this procedure seems to work reliably:

```sh
cd $KPATH/src/k8s.io/kubernetes
DEP=example.com/path/to/dependency
# NB: For the next step, $DEP is assumed to be the repo root. If it is actually a
# subdir of the repo, use the repo root here. This is required to keep godep
# from getting angry because `godep restore` left the tree in a "detached head"
# state.
rm -rf $KPATH/src/$DEP # repo root
godep get $DEP/...
# Change code in Kubernetes, if necessary.
rm -rf Godeps
rm -rf vendor
./hack/godep-save.sh
git checkout -- $(git status -s | grep "^ D" | awk '{print $2}' | grep ^Godeps)
```

_If `go get -u path/to/dependency` fails with compilation errors, instead try `go get -d -u path/to/dependency` to fetch the dependencies without compiling them. This is unusual, but has been observed._

After all of this is done, `git status` should show you what files have been modified and added/removed. Make sure to `git add` and `git rm` them. It is commonly advised to make one `git commit` which includes just the dependency update and Godeps files, and another `git commit` that includes changes to Kubernetes code to use the new/updated dependency. These commits can go into a single pull request.

5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file and the contents of `vendor/` are ok by running `hack/verify-godeps.sh`

_If `hack/verify-godeps.sh` fails after a `godep update`, it is possible that a transitive dependency was added or removed but not updated by godeps. It then may be necessary to perform a `hack/godep-save.sh` to pick up the transitive dependency changes._

It is sometimes expedient to manually fix the /Godeps/Godeps.json file to minimize the changes. However, without great care this can lead to failures with `hack/verify-godeps.sh`. This must pass for every PR.

6) If you updated the Godeps, please also update `Godeps/LICENSES` by running `hack/update-godep-licenses.sh`.





[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/godep.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/design-proposals/scheduler_extender.md:
--------------------------------------------------------------------------------

# Scheduler extender

There are three ways to add new scheduling rules (predicates and priority functions) to Kubernetes: (1) by adding these rules to the scheduler and recompiling (described here: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/scheduler.md), (2) implementing your own scheduler process that runs instead of, or alongside of, the standard Kubernetes scheduler, (3) implementing a "scheduler extender" process that the standard Kubernetes scheduler calls out to as a final pass when making scheduling decisions.

This document describes the third approach. This approach is needed for use cases where scheduling decisions need to be made on resources not directly managed by the standard Kubernetes scheduler. The extender helps make scheduling decisions based on such resources. (Note that the three approaches are not mutually exclusive.)

When scheduling a pod, the extender allows an external process to filter and prioritize nodes. Two separate http/https calls are issued to the extender, one for "filter" and one for "prioritize" actions.
To use the extender, you must 21 | create a scheduler policy configuration file. The configuration specifies how to 22 | reach the extender, whether to use http or https and the timeout. 23 | 24 | ```go 25 | // Holds the parameters used to communicate with the extender. If a verb is unspecified/empty, 26 | // it is assumed that the extender chose not to provide that extension. 27 | type ExtenderConfig struct { 28 | // URLPrefix at which the extender is available 29 | URLPrefix string `json:"urlPrefix"` 30 | // Verb for the filter call, empty if not supported. This verb is appended to the URLPrefix when issuing the filter call to extender. 31 | FilterVerb string `json:"filterVerb,omitempty"` 32 | // Verb for the prioritize call, empty if not supported. This verb is appended to the URLPrefix when issuing the prioritize call to extender. 33 | PrioritizeVerb string `json:"prioritizeVerb,omitempty"` 34 | // The numeric multiplier for the node scores that the prioritize call generates. 35 | // The weight should be a positive integer 36 | Weight int `json:"weight,omitempty"` 37 | // EnableHttps specifies whether https should be used to communicate with the extender 38 | EnableHttps bool `json:"enableHttps,omitempty"` 39 | // TLSConfig specifies the transport layer security config 40 | TLSConfig *client.TLSClientConfig `json:"tlsConfig,omitempty"` 41 | // HTTPTimeout specifies the timeout duration for a call to the extender. Filter timeout fails the scheduling of the pod. Prioritize 42 | // timeout is ignored, k8s/other extenders priorities are used to select the node. 43 | HTTPTimeout time.Duration `json:"httpTimeout,omitempty"` 44 | } 45 | ``` 46 | 47 | A sample scheduler policy file with extender configuration: 48 | 49 | ```json 50 | { 51 | "predicates": [ 52 | { 53 | "name": "HostName" 54 | }, 55 | { 56 | "name": "MatchNodeSelector" 57 | }, 58 | { 59 | "name": "PodFitsResources" 60 | } 61 | ], 62 | "priorities": [ 63 | { 64 | "name": "LeastRequestedPriority", 65 | "weight": 1 66 | } 67 | ], 68 | "extenders": [ 69 | { 70 | "urlPrefix": "http://127.0.0.1:12345/api/scheduler", 71 | "filterVerb": "filter", 72 | "enableHttps": false 73 | } 74 | ] 75 | } 76 | ``` 77 | 78 | Arguments passed to the FilterVerb endpoint on the extender are the set of nodes 79 | filtered through the k8s predicates and the pod. Arguments passed to the 80 | PrioritizeVerb endpoint on the extender are the set of nodes filtered through 81 | the k8s predicates and extender predicates and the pod. 82 | 83 | ```go 84 | // ExtenderArgs represents the arguments needed by the extender to filter/prioritize 85 | // nodes for a pod. 86 | type ExtenderArgs struct { 87 | // Pod being scheduled 88 | Pod api.Pod `json:"pod"` 89 | // List of candidate nodes where the pod can be scheduled 90 | Nodes api.NodeList `json:"nodes"` 91 | } 92 | ``` 93 | 94 | The "filter" call returns a list of nodes (schedulerapi.ExtenderFilterResult). The "prioritize" call 95 | returns priorities for each node (schedulerapi.HostPriorityList). 96 | 97 | The "filter" call may prune the set of nodes based on its predicates. Scores 98 | returned by the "prioritize" call are added to the k8s scores (computed through 99 | its priority functions) and used for final host selection. 100 | 101 | Multiple extenders can be configured in the scheduler policy. 
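
As an illustration, below is a minimal sketch of an extender process serving the "filter" verb at the urlPrefix from the sample policy above. It uses only the standard library; `filterResult` is a simplified stand-in for `schedulerapi.ExtenderFilterResult` (the exact upstream fields may differ), and this trivial extender simply approves every candidate node:

```go
// A sketch of a scheduler extender serving the "filter" verb. filterResult
// is a simplified stand-in for schedulerapi.ExtenderFilterResult; a real
// extender would prune the node list based on its externally managed resources.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// filterResult mirrors the documented shape of the filter response
// (the field names here are assumptions, not the upstream definition).
type filterResult struct {
	Nodes       json.RawMessage   `json:"nodes"`                 // nodes that passed the extender's predicates
	FailedNodes map[string]string `json:"failedNodes,omitempty"` // node name -> reason
	Error       string            `json:"error,omitempty"`
}

func filter(w http.ResponseWriter, r *http.Request) {
	// ExtenderArgs carries the pod and the candidate nodes (see above).
	var args struct {
		Pod   json.RawMessage `json:"pod"`
		Nodes json.RawMessage `json:"nodes"`
	}
	if err := json.NewDecoder(r.Body).Decode(&args); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Trivial policy: accept every node the k8s predicates let through.
	json.NewEncoder(w).Encode(filterResult{Nodes: args.Nodes})
}

func main() {
	http.HandleFunc("/api/scheduler/filter", filter)
	log.Fatal(http.ListenAndServe("127.0.0.1:12345", nil))
}
```

With the sample policy above, the scheduler would POST `ExtenderArgs` to this endpoint for each pod and use the returned node list for the rest of the scheduling pass.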


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/scheduler_extender.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/devel/pull-requests.md:
--------------------------------------------------------------------------------



- [Pull Request Process](#pull-request-process)
- [Life of a Pull Request](#life-of-a-pull-request)
- [Before sending a pull request](#before-sending-a-pull-request)
- [Release Notes](#release-notes)
- [Reviewing pre-release notes](#reviewing-pre-release-notes)
- [Visual overview](#visual-overview)
- [Other notes](#other-notes)
- [Automation](#automation)



# Pull Request Process

An overview of how pull requests are managed for Kubernetes. This document assumes the reader has already followed the [development guide](development.md) to set up their environment.

# Life of a Pull Request

Except in the last few weeks of a milestone, when we need to reduce churn and stabilize, we aim to be always accepting pull requests.

Merging of PRs is managed either manually by the [on call](on-call-rotations.md) or automatically by the [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub) submit-queue plugin.

There are several requirements for the submit-queue to work:
* Author must have signed CLA ("cla: yes" label added to PR)
* No changes can be made since the last lgtm label was applied
* k8s-bot must have reported the GCE E2E build and test steps passed (Jenkins unit/integration, Jenkins e2e)

Additionally, for infrequent or new contributors, we require the on call to apply the "ok-to-merge" label manually. This is gated by the [whitelist](https://github.com/kubernetes/contrib/blob/master/mungegithub/whitelist.txt).

## Before sending a pull request

The following will save time for both you and your reviewer:

* Enable [pre-commit hooks](development.md#committing-changes-to-your-fork) and verify they pass.
* Verify `make verify` passes.
* Verify `make test` passes.
* Verify `make test-integration` passes.

## Release Notes

This section applies only to pull requests on the master branch. For cherry-pick PRs, see the [Cherrypick instructions](cherry-picks.md)

1. All pull requests are initiated with a `release-note-label-needed` label.
1. For a PR to be ready to merge, the `release-note-label-needed` label must be removed and one of the other `release-note-*` labels must be added.
1. `release-note-none` is a valid option if the PR does not need to be mentioned at release time.
1. `release-note` labeled PRs generate a release note using the PR title by default OR the release-note block in the PR template if filled in.
   * See the [PR template](../../.github/PULL_REQUEST_TEMPLATE.md) for more details.
   * PR titles and body comments are mutable and can be modified at any time prior to the release to reflect a release note friendly message.

The only exception to these rules is when a PR is not a cherry-pick and is targeted directly to the non-master branch. In this case, a `release-note-*` label is required for that non-master PR.

### Reviewing pre-release notes

At any time, you can see what the release notes will look like on any branch.
(NOTE: This only works on Linux for now)

```
$ git pull https://github.com/kubernetes/release
$ RELNOTES=$PWD/release/relnotes
$ cd /to/your/kubernetes/repo
$ $RELNOTES -man # for details on how to use the tool
# Show release notes from the last release on a branch to HEAD
$ $RELNOTES --branch=master
```

## Visual overview

![PR workflow](pr_workflow.png)

# Other notes

Pull requests that are purely support questions will be closed and redirected to [stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes). We do this to consolidate help/support questions into a single channel, improve efficiency in responding to requests and make FAQs easier to find.

Pull requests older than 2 weeks will be closed. Exceptions can be made for PRs that have active review comments, or that are awaiting other dependent PRs. Closed pull requests are easy to recreate, and little work is lost by closing a pull request that subsequently needs to be reopened. We want to limit the total number of PRs in flight to:
* Maintain a clean project
* Remove old PRs that would be difficult to rebase as the underlying code has changed over time
* Encourage code velocity


# Automation

We use a variety of automation to manage pull requests. This automation is described in detail [elsewhere.](automation.md)



[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/devel/automation.md:
--------------------------------------------------------------------------------

# Kubernetes Development Automation

## Overview

Kubernetes uses a variety of automated tools in an attempt to relieve developers of repetitive, low brain power work. This document attempts to describe these processes.


## Submit Queue

In an effort to
* reduce load on core developers
* maintain e2e stability
* load test github's label feature

we have added an automated [submit-queue](https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/submit-queue.go) to the [github "munger"](https://github.com/kubernetes/contrib/tree/master/mungegithub) for Kubernetes.

The submit-queue does the following:

```go
for _, pr := range readyToMergePRs() {
    if testsAreStable() {
        if retestPR(pr) == success {
            mergePR(pr)
        }
    }
}
```

The status of the submit-queue is [online.](http://submit-queue.k8s.io/)

### Ready to merge status

The submit-queue lists what it believes are the merge requirements on the [merge requirements tab](http://submit-queue.k8s.io/#/info) of the info page; that page may be more up to date.

A PR is considered "ready for merging" if it matches the following:
* The PR must have the label "cla: yes" or "cla: human-approved"
* The PR must be mergeable, i.e. it cannot need a rebase
* All of the following github statuses must be green
  * Jenkins GCE Node e2e
  * Jenkins GCE e2e
  * Jenkins unit/integration
* The PR cannot have any prohibited future milestones (such as a v1.5 milestone during v1.4 code freeze)
* The PR must have the "lgtm" label.
The "lgtm" label is automatically applied following a review comment consisting of only "LGTM" (case-insensitive)
* The PR must not have been updated since the "lgtm" label was applied
* The PR must not have the "do-not-merge" label

### Merge process

Merges _only_ occur when the [critical builds](http://submit-queue.k8s.io/#/e2e) are passing. We're open to including more builds here, let us know...

Merges are serialized, so only a single PR is merged at a time, to ensure against races.

If the PR has the `retest-not-required` label, it is simply merged. If the PR does not have this label, the e2e, unit/integration, and node tests are re-run. If these tests pass a second time, the PR will be merged as long as the `critical builds` are green when this PR finishes retesting.

## Github Munger

We run [github "mungers"](https://github.com/kubernetes/contrib/tree/master/mungegithub).

This runs repeatedly over github pulls and issues and runs modular "mungers" similar to "mungedocs." The mungers include the 'submit-queue' referenced above along with numerous other functions. See the README in the link above.

Please feel free to unleash your creativity on this tool, and send us new mungers that you think will help support the Kubernetes development process.

### Closing stale pull-requests

Github Munger will close pull-requests that don't have human activity in the last 90 days. It will warn about this process 60 days before closing the pull-request, and warn again 30 days later. One way to prevent this from happening is to add the "keep-open" label on the pull-request.

Feel free to re-open and maybe add the "keep-open" label if this happens to a valid pull-request. It may also be a good opportunity to get more attention by verifying that it is properly assigned and/or mentioning people that might be interested. Commenting on the pull-request will also keep it open for another 90 days.

## PR builder

We also run a robotic PR builder that attempts to run tests for each PR.

Before a PR from an unknown user is run, the PR builder bot (`k8s-bot`) asks for a message from a contributor saying that the PR is "ok to test"; the contributor replies with that message. ("please" is optional, but remember to treat your robots with kindness...)

## FAQ:

#### How can I ask my PR to be tested again for Jenkins failures?

PRs should only need to be manually re-tested if you believe there was a flake during the original test. All flakes should be filed as an [issue](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Akind%2Fflake). Once you find or file a flake, a contributor (this may be you!) should request a retest with "@k8s-bot test this issue: #NNNNN", where NNNNN is replaced with the issue number you found or filed.

Any pushes of new code to the PR will automatically trigger a new test. No human interaction is required.


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/automation.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/devel/owners.md:
--------------------------------------------------------------------------------

# Owners files

_Note_: This is a design for a feature that is not yet implemented. See the [contrib PR](https://github.com/kubernetes/contrib/issues/1389) for the current progress.

## Overview

We want to establish owners for different parts of the code in the Kubernetes codebase. These owners will serve as the approvers for code to be submitted to these parts of the repository. Notably, owners are not necessarily expected to do the first code review for all commits to these areas, but they are required to approve changes before they can be merged.

**Note** The Kubernetes project has a hiatus on adding new approvers to OWNERS files. At this time we are [adding more reviewers](https://github.com/kubernetes/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Apr%20%22Curating%20owners%3A%22%20) to take the load off of the current set of approvers, and once we have had a chance to flush this out for a release we will begin adding new approvers again. Adding new approvers is planned for after the Kubernetes 1.6.0 release.

## High Level flow

### Step One: A PR is submitted

After a PR is submitted, the automated kubernetes PR robot will append a message to the PR indicating the owners that are required for the PR to be submitted.

Subsequently, a user can also request the approval message from the robot by writing:

```
@k8s-bot approvers
```

into a comment.

In either case, the automation replies with an annotation that indicates the owners required to approve. The annotation is a comment that is applied to the PR. This comment will say:

```
Approval is required from <owner-a> OR <owner-b>, AND <owner-c> OR <owner-d>, AND ...
```

The set of required owners is drawn from the OWNERS files in the repository (see below). For each file there should be multiple different OWNERS; these owners are listed in the `OR` clause(s). Because it is possible that a PR may cover different directories, with disjoint sets of OWNERS, a PR may require approval from more than one person; this is where the `AND` clauses come from.

`<owner>` should be the github user id of the owner _without_ a leading `@` symbol, to prevent the owner from being cc'd into the PR by email.

### Step Two: A PR is LGTM'd

Once a PR is reviewed and LGTM'd it is eligible for submission. However, for it to be submitted, an owner for each of the files changed in the PR has to 'approve' the PR. A user is an owner for a file if they are included in the OWNERS hierarchy (see below) for that file.

Owner approval comes in two forms:

* An owner adds a comment to the PR saying "I approve" or "approved"
* An owner is the original author of the PR

In the case of a comment based approval, the same rules as for the 'lgtm' label apply. If the PR is changed by pushing new commits to the PR, the previous approval is invalidated, and the owner(s) must approve again. Because of this, it is recommended that PR authors squash their PRs prior to getting approval from owners.

### Step Three: A PR is merged

Once a PR is LGTM'd and all required owners have approved, it is eligible for merge. The merge bot takes care of the actual merging.

## Design details

We need to build new features into the existing github munger in order to accomplish this. Additionally we need to add owners files to the repository.

### Approval Munger

We need to add a munger that adds comments to PRs indicating whose approval they require. This munger will look for PRs that do not have approvers already present in the comments, or where approvers have been requested, and add an appropriate comment to the PR.

### Status Munger

GitHub has a [status api](https://developer.github.com/v3/repos/statuses/); we will add a status munger that pushes an approval status onto a PR. This status will only be approved if the relevant approvers have approved the PR.

### Requiring approval status

Github has the ability to [require status checks prior to merging](https://help.github.com/articles/enabling-required-status-checks/)

Once we have the status check munger described above implemented, we will add this required status check to our main branch as well as any release branches.

### Adding owners files

In each directory in the repository we may add an OWNERS file. This file will contain the github OWNERS for that directory. Ownership is hierarchical, so if a directory does not contain an OWNERS file, its parent's OWNERS file is used instead. There will be a top-level OWNERS file to back-stop the system.

Obviously, changing the OWNERS file requires OWNERS permission.
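
The hierarchical lookup can be pictured as a walk up the directory tree. The sketch below is illustrative only; the munger's real implementation may differ:

```go
// findOwnersFile walks up from a changed file's directory until it finds an
// OWNERS file, falling back toward the top-level OWNERS file that back-stops
// the system. A sketch only; not the munger's actual code.
package owners

import (
	"os"
	"path/filepath"
)

func findOwnersFile(repoRoot, changedFile string) (string, bool) {
	dir := filepath.Dir(filepath.Join(repoRoot, changedFile))
	for {
		candidate := filepath.Join(dir, "OWNERS")
		if _, err := os.Stat(candidate); err == nil {
			return candidate, true // the nearest OWNERS file wins
		}
		if dir == repoRoot {
			return "", false // nothing found, not even a top-level backstop
		}
		dir = filepath.Dir(dir)
	}
}
```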


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/owners.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/design-proposals/initial-resources.md:
--------------------------------------------------------------------------------

## Abstract

Initial Resources is a data-driven feature that, based on historical data, tries to estimate the resource usage of a container without Resources specified, and to set them before the container is run. This document describes the design of the component.

## Motivation

Since we want to make Kubernetes as simple as possible for its users, we don't want to require setting [Resources](../design/resource-qos.md) for a container by its owner. On the other hand, having Resources filled is critical for scheduling decisions. The current solution, setting Resources to a hardcoded value, has obvious drawbacks. We need to implement a component which will set initial Resources to a reasonable value.

## Design

The InitialResources component will be implemented as an [admission plugin](../../plugin/pkg/admission/) and invoked right before [LimitRanger](https://github.com/kubernetes/kubernetes/blob/7c9bbef96ed7f2a192a1318aa312919b861aee00/cluster/gce/config-default.sh#L91). For every container without Resources specified it will try to predict the amount of resources that should be sufficient for it, so that a pod without specified resources will be treated as [Burstable](../design/resource-qos.md).

InitialResources will set only the [request](../design/resource-qos.md#requests-and-limits) field (independently for each resource type: cpu, memory) in the first version, to avoid killing containers due to OOM (however, the container still may be killed if it exceeds requested resources). To make the component work with LimitRanger, the estimated value will be capped by the min and max possible values if defined. This prevents a pod from being rejected due to a too low or too high estimation.

The container won't be marked as managed by this component in any way; however, an appropriate event will be exported. The predicting algorithm should have very low latency so as not to significantly increase e2e pod startup latency [#3954](https://github.com/kubernetes/kubernetes/pull/3954).

### Predicting algorithm details

In the first version, estimation will be made based on historical data for the Docker image being run in the container (both the name and the tag matter). CPU/memory usage of each container is exported periodically (by default with 1 minute resolution) to the backend (see more in [Monitoring pipeline](#monitoring-pipeline)).

InitialResources will set the Request for both cpu/mem as the 90th percentile of the first (in the following order) set of samples defined in the following way:

* 7 days same image:tag, assuming there are at least 60 samples (1 hour)
* 30 days same image:tag, assuming there are at least 60 samples (1 hour)
* 30 days same image, assuming there is at least 1 sample

If there is still no data, the default value will be set by LimitRanger. The same parameters will be configurable with appropriate flags.

#### Example

If we have at least 60 samples from image:tag over the past 7 days, we will use the 90th percentile of all of the samples of image:tag over the past 7 days. Otherwise, if we have at least 60 samples from image:tag over the past 30 days, we will use the 90th percentile of all of the samples of image:tag over the past 30 days. Otherwise, if we have at least 1 sample from image over the past 30 days, we will use the 90th percentile of all of the samples of image over the past 30 days. Otherwise we will use the default value. (A code sketch of these selection rules appears below.)

### Monitoring pipeline

In the first version there will be 2 backend options available for the predicting algorithm:

* [InfluxDB](../../docs/user-guide/monitoring.md#influxdb-and-grafana) - aggregation will be made in an SQL query
* [GCM](../../docs/user-guide/monitoring.md#google-cloud-monitoring) - since GCM is not as powerful as InfluxDB, some aggregation will be made on the client side

Both will be hidden under an abstraction layer, so it would be easy to add another option. The code will be a part of the Initial Resources component so as not to block development; however, in the future it should be a part of Heapster.
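
Expressed as code, the backend-agnostic part of the estimation might look like the sketch below. The `Source` interface and all names are assumptions made for illustration; the InfluxDB and GCM backends described above would sit behind such an abstraction:

```go
// A sketch of the sample-selection and estimation rules described above.
// Source and every other name here are illustrative assumptions, not the
// component's actual API.
package initialresources

import (
	"sort"
	"time"
)

// Source returns usage samples (cpu or memory) for containers running the
// given image over a trailing window; exactTag narrows to image:tag.
type Source interface {
	Samples(image, tag string, window time.Duration, exactTag bool) []float64
}

const minSamples = 60 // one hour of samples at 1-minute resolution

// estimate picks the first sample set in the documented order and returns
// its 90th percentile; ok=false means "no data, let LimitRanger default".
func estimate(src Source, image, tag string) (value float64, ok bool) {
	week, month := 7*24*time.Hour, 30*24*time.Hour
	if s := src.Samples(image, tag, week, true); len(s) >= minSamples {
		return percentile(s, 0.90), true
	}
	if s := src.Samples(image, tag, month, true); len(s) >= minSamples {
		return percentile(s, 0.90), true
	}
	if s := src.Samples(image, "", month, false); len(s) >= 1 {
		return percentile(s, 0.90), true
	}
	return 0, false
}

// percentile is a simple nearest-rank percentile; it sorts in place.
func percentile(samples []float64, p float64) float64 {
	sort.Float64s(samples)
	return samples[int(p*float64(len(samples)-1))]
}
```

The capping by LimitRanger min/max described earlier would be applied to the value returned here before it is written into the container's request.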

## Next steps

The first version will be quite simple, so there are a lot of possible improvements. Some of them seem to have high priority and should be introduced shortly after the first version is done:

* observe OOMs and then react to them by increasing the estimation
* add the possibility to specify whether estimation should be made, possibly as an ```InitialResourcesPolicy``` with options: *always*, *if-not-set*, *never*
* add other features to the model like *namespace*
* remember predefined values for the most popular images like *mysql*, *nginx*, *redis*, etc.
* dry mode, which allows asking the system for a resource recommendation for a container without running it
* add the estimation as annotations for those containers that already have resources set
* support for other data sources like [Hawkular](http://www.hawkular.org/)


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/initial-resources.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/design-proposals/service-discovery.md:
--------------------------------------------------------------------------------

# Service Discovery Proposal

## Goal of this document

To consume a service, a developer needs to know the full URL and a description of the API. Kubernetes contains the host and port information of a service, but it lacks the scheme and the path information needed if the service is not bound at the root. In this document we propose some standard Kubernetes service annotations to fix these gaps. It is important that these annotations be a standard, to allow for standard service discovery across Kubernetes implementations. Note that the example largely speaks to consuming WebServices, but the same concepts apply to other types of services.

## Endpoint URL, Service Type

A URL can accurately describe the location of a Service. A generic URL is of the following form

    scheme:[//[user:password@]host[:port]][/]path[?query][#fragment]

however for the purpose of service discovery we can simplify this to the following form

    scheme:[//host[:port]][/]path

If a user and/or password is required, then this information can be passed using Kubernetes Secrets. Kubernetes contains the host and port of each service, but it lacks the scheme and path.

`Service Path` - Every Service has one (or more) endpoints. As a rule the endpoint should be located at the root "/" of the location URL, i.e. `http://172.100.1.52/`. There are cases where this is not possible and the actual service endpoint could be located at `http://172.100.1.52/cxfcdi`. The Kubernetes metadata for a service does not capture the path part, making it hard to consume this service.

`Service Scheme` - Services can be deployed using different schemes. Some popular schemes include `http`, `https`, `file`, `ftp` and `jdbc`.

`Service Protocol` - Services use different protocols that clients need to speak in order to communicate with the service. Some examples of service level protocols are SOAP and REST (yes, technically REST isn't a protocol but an architectural style). For service consumers it can be hard to tell what protocol is expected.

## Service Description

The API of a service is the point of interaction with a service consumer. The description of the API is an essential piece of information when building a service consumer.
It has become common to publish a service definition document on a know location on the service itself. This 'well known' place it not very standard, so it is proposed the service developer provides the service description path and the type of Definition Language (DL) used. 28 | 29 | `Service Description Path` - To facilitate the consumption of the service by client, the location this document would be greatly helpful to the service consumer. In some cases the client side code can be generated from such a document. It is assumed that the service description document is published somewhere on the service endpoint itself. 30 | 31 | `Service Description Language` - A number of Definition Languages (DL) have been developed to describe the service. Some of examples are `WSDL`, `WADL` and `Swagger`. In order to consume a description document it is good to know the type of DL used. 32 | 33 | ## Standard Service Annotations 34 | 35 | Kubernetes allows the creation of Service Annotations. Here we propose the use of the following standard annotations 36 | 37 | * `api.service.kubernetes.io/path` - the path part of the service endpoint url. An example value could be `cxfcdi`, 38 | * `api.service.kubernetes.io/scheme` - the scheme part of the service endpoint url. Some values could be `http` or `https`. 39 | * `api.service.kubernetes.io/protocol` - the protocol of the service. Known values are `SOAP`, `XML-RPC` and `REST`, 40 | * `api.service.kubernetes.io/description-path` - the path part of the service description document's endpoint. It is a pretty safe assumption that the service self-documents. An example value for a swagger 2.0 document can be `cxfcdi/swagger.json`, 41 | * `api.kubernetes.io/description-language` - the type of Description Language used. Known values are `WSDL`, `WADL`, `SwaggerJSON`, `SwaggerYAML`. 42 | 43 | The fragment below is taken from the service section of the kubernetes.json were these annotations are used 44 | 45 | ... 46 | "objects" : [ { 47 | "apiVersion" : "v1", 48 | "kind" : "Service", 49 | "metadata" : { 50 | "annotations" : { 51 | "api.service.kubernetes.io/protocol" : "REST", 52 | "api.service.kubernetes.io/scheme" "http", 53 | "api.service.kubernetes.io/path" : "cxfcdi", 54 | "api.service.kubernetes.io/description-path" : "cxfcdi/swagger.json", 55 | "api.service.kubernetes.io/description-language" : "SwaggerJSON" 56 | }, 57 | ... 58 | 59 | ## Conclusion 60 | 61 | Five service annotations are proposed as a standard way to describe a service endpoint. These five annotation are promoted as a Kubernetes standard, so that services can be discovered and a service catalog can be build to facilitate service consumers. 62 | 63 | 64 | 65 | 66 | 67 | 68 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/service-discovery.md?pixel)]() 69 | 70 | -------------------------------------------------------------------------------- /contributors/design-proposals/principles.md: -------------------------------------------------------------------------------- 1 | # Design Principles 2 | 3 | Principles to follow when extending Kubernetes. 4 | 5 | ## API 6 | 7 | See also the [API conventions](../devel/api-conventions.md). 8 | 9 | * All APIs should be declarative. 10 | * API objects should be complementary and composable, not opaque wrappers. 11 | * The control plane should be transparent -- there are no hidden internal APIs. 12 | * The cost of API operations should be proportional to the number of objects 13 | intentionally operated upon. 

## Conclusion

Five service annotations are proposed as a standard way to describe a service endpoint. These five annotations are promoted as a Kubernetes standard, so that services can be discovered and a service catalog can be built to facilitate service consumers.






[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/service-discovery.md?pixel)]()

--------------------------------------------------------------------------------
/contributors/design-proposals/principles.md:
--------------------------------------------------------------------------------

# Design Principles

Principles to follow when extending Kubernetes.

## API

See also the [API conventions](../devel/api-conventions.md).

* All APIs should be declarative.
* API objects should be complementary and composable, not opaque wrappers.
* The control plane should be transparent -- there are no hidden internal APIs.
* The cost of API operations should be proportional to the number of objects intentionally operated upon. Therefore, common filtered lookups must be indexed. Beware of patterns of multiple API calls that would incur quadratic behavior.
* Object status must be 100% reconstructable by observation. Any history kept must be just an optimization and not required for correct operation.
* Cluster-wide invariants are difficult to enforce correctly. Try not to add them. If you must have them, don't enforce them atomically in master components; that is contention-prone and doesn't provide a recovery path in the case of a bug allowing the invariant to be violated. Instead, provide a series of checks to reduce the probability of a violation, and make every component involved able to recover from an invariant violation.
* Low-level APIs should be designed for control by higher-level systems. Higher-level APIs should be intent-oriented (think SLOs) rather than implementation-oriented (think control knobs).

## Control logic

* Functionality must be *level-based*, meaning the system must operate correctly given the desired state and the current/observed state, regardless of how many intermediate state updates may have been missed. Edge-triggered behavior must be just an optimization (see the sketch after this list).
* Assume an open world: continually verify assumptions and gracefully adapt to external events and/or actors. Example: we allow users to kill pods under control of a replication controller; it just replaces them.
* Do not define comprehensive state machines for objects with behaviors associated with state transitions and/or "assumed" states that cannot be ascertained by observation.
* Don't assume a component's decisions will not be overridden or rejected, nor expect the component to always understand why. For example, etcd may reject writes. Kubelet may reject pods. The scheduler may not be able to schedule pods. Retry, but back off and/or make alternative decisions.
* Components should be self-healing. For example, if you must keep some state (e.g., a cache), the content needs to be periodically refreshed, so that if an item does get erroneously stored or a deletion event is missed, etc., it will soon be fixed, ideally on timescales that are shorter than what will attract attention from humans.
* Component behavior should degrade gracefully. Prioritize actions so that the most important activities can continue to function even when overloaded and/or in states of partial failure.
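
To make the level-based point concrete, here is a toy sketch of a level-based reconciler. All names are illustrative; the point is that only the comparison of desired and observed state matters, so any number of missed intermediate updates is harmless:

```go
// A toy sketch of level-based control. reconcile compares levels (desired
// vs. observed) rather than reacting to individual events, so missed
// updates are repaired on the next pass. Illustrative names only.
package controlloop

import "time"

type State struct{ Replicas int }

func reconcile(desired, observed State, scale func(delta int)) {
	// Act on the difference between levels, not on the event stream.
	if d := desired.Replicas - observed.Replicas; d != 0 {
		scale(d)
	}
}

// run periodically resyncs; an edge-triggered wakeup (e.g., from a watch)
// would only make this loop react sooner, never be required for correctness.
func run(getDesired, getObserved func() State, scale func(delta int)) {
	for {
		reconcile(getDesired(), getObserved(), scale)
		time.Sleep(10 * time.Second)
	}
}
```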

## Architecture

* Only the apiserver should communicate with etcd/store, and not other components (scheduler, kubelet, etc.).
* Compromising a single node shouldn't compromise the cluster.
* Components should continue to do what they were last told in the absence of new instructions (e.g., due to network partition or component outage).
* All components should keep all relevant state in memory all the time. The apiserver should write through to etcd/store, other components should write through to the apiserver, and they should watch for updates made by other clients.
* Watch is preferred over polling.

## Extensibility

TODO: pluggability

## Bootstrapping

* [Self-hosting](http://issue.k8s.io/246) of all components is a goal.
* Minimize the number of dependencies, particularly those required for steady-state operation.
* Stratify the dependencies that remain via principled layering.
* Break any circular dependencies by converting hard dependencies to soft dependencies.
* Also accept data from other components via another source, such as local files, which can be manually populated at bootstrap time and then continuously updated once those other components are available.
* State should be rediscoverable and/or reconstructable.
* Make it easy to run temporary, bootstrap instances of all components in order to create the runtime state needed to run the components in the steady state; use a lock (master election for distributed components, file lock for local components like Kubelet) to coordinate handoff. We call this technique "pivoting".
* Have a solution to restart dead components. For distributed components, replication works well. For local components such as Kubelet, a process manager or even a simple shell loop works.

## Availability

TODO

## General principles

* [Eric Raymond's 17 UNIX rules](https://en.wikipedia.org/wiki/Unix_philosophy#Eric_Raymond.E2.80.99s_17_Unix_Rules)



[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/principles.md?pixel)]()

--------------------------------------------------------------------------------