├── .gitignore
├── CONTRIBUTING.md
├── Dockerfile
├── Makefile
├── README.md
├── build_helper.sh
├── discovery.png
├── glide.lock
├── glide.yaml
└── src
    ├── config
    │   └── config.go
    ├── controller
    │   └── controller.go
    ├── handlers
    │   ├── clusterdiscoveryhandler.go
    │   └── handler.go
    ├── log
    │   └── log.go
    ├── main
    │   └── main.go
    └── utils
        ├── map.go
        └── utils.go

/.gitignore:
--------------------------------------------------------------------------------
1 | .idea
2 | dist
3 | vendor
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | 
2 | 
3 | # Contributing to k8s-endpoints-sync-controller
4 | 
5 | The k8s-endpoints-sync-controller project team welcomes contributions from the community. Before you start working with k8s-endpoints-sync-controller, please read our [Developer Certificate of Origin](https://cla.vmware.com/dco). All contributions to this repository must be signed as described on that page. Your signature certifies that you wrote the patch or have the right to pass it on as an open-source patch.
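The sign-off described above is a `Signed-off-by` trailer on each commit, which `git commit -s` adds from your configured identity. A minimal demonstration in a scratch repository (the file name and identity below are illustrative, not part of the project):

``` shell
# Demonstrate a DCO sign-off in a throwaway repository.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.name "Jane Developer"
git config user.email "jane@example.com"

echo "hello" > notes.txt
git add notes.txt
git commit -q -s -m "Add notes"   # -s appends the Signed-off-by trailer

git log -1 --format=%B            # shows "Signed-off-by: Jane Developer <jane@example.com>"
```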
6 | 
7 | ## Community
8 | 
9 | ## Getting Started
10 | 
11 | ## Contribution Flow
12 | 
13 | This is a rough outline of what a contributor's workflow looks like:
14 | 
15 | - Create a topic branch from where you want to base your work
16 | - Make commits of logical units
17 | - Make sure your commit messages are in the proper format (see below)
18 | - Push your changes to a topic branch in your fork of the repository
19 | - Submit a pull request
20 | 
21 | Example:
22 | 
23 | ``` shell
24 | git remote add upstream https://github.com/vmware/k8s-endpoints-sync-controller.git
25 | git checkout -b my-new-feature master
26 | git commit -a
27 | git push origin my-new-feature
28 | ```
29 | 
30 | ### Staying In Sync With Upstream
31 | 
32 | When your branch gets out of sync with the vmware/master branch, use the following to update:
33 | 
34 | ``` shell
35 | git checkout my-new-feature
36 | git fetch --all
37 | git pull --rebase upstream master
38 | git push --force-with-lease origin my-new-feature
39 | ```
40 | 
41 | ### Updating pull requests
42 | 
43 | If your PR fails to pass CI or needs changes based on code review, you'll most likely want to squash these changes into
44 | existing commits.
45 | 
46 | If your pull request contains a single commit or your changes are related to the most recent commit, you can simply
47 | amend the commit.
48 | 
49 | ``` shell
50 | git add .
51 | git commit --amend
52 | git push --force-with-lease origin my-new-feature
53 | ```
54 | 
55 | If you need to squash changes into an earlier commit, you can use:
56 | 
57 | ``` shell
58 | git add .
59 | git commit --fixup <commit>
60 | git rebase -i --autosquash master
61 | git push --force-with-lease origin my-new-feature
62 | ```
63 | 
64 | Be sure to add a comment to the PR indicating your new changes are ready to review, as GitHub does not generate a
65 | notification when you `git push`.
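The fixup/autosquash flow can be exercised end to end in a scratch repository. In this sketch the file names, messages, and identity are illustrative, the rebase is driven non-interactively via `GIT_SEQUENCE_EDITOR`, and `--root` stands in for the `master` base used in the real workflow:

``` shell
# Demonstrate --fixup + --autosquash in a throwaway repository.
set -e
cd "$(mktemp -d)"
git init -q .
git config user.name "Jane Developer"
git config user.email "jane@example.com"

echo "v1" > feature.txt
git add feature.txt
git commit -q -m "Add feature"
target=$(git rev-parse HEAD)       # the commit the fix should fold into

echo "v2" > feature.txt
git add feature.txt
git commit -q --fixup "$target"    # creates a "fixup! Add feature" commit

# GIT_SEQUENCE_EDITOR=: accepts the generated todo list unedited.
GIT_SEQUENCE_EDITOR=: git rebase -i -q --autosquash --root
git log --oneline                  # a single "Add feature" commit remains
```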
66 | 
67 | ### Code Style
68 | 
69 | ### Formatting Commit Messages
70 | 
71 | We follow the conventions on [How to Write a Git Commit Message](http://chris.beams.io/posts/git-commit/).
72 | 
73 | Be sure to include any related GitHub issue references in the commit message. See
74 | [GFM syntax](https://guides.github.com/features/mastering-markdown/#GitHub-flavored-markdown) for referencing issues
75 | and commits.
76 | 
77 | ## Reporting Bugs and Creating Issues
78 | 
79 | When opening a new issue, try to roughly follow the commit message format conventions above.
80 | 
81 | ## Repository Structure
82 | 
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | # Copyright © 2018 VMware, Inc. All Rights Reserved.
2 | # SPDX-License-Identifier: MIT
3 | 
4 | FROM vmware/photon
5 | ADD dist/k8s-endpoints-sync-controller /k8s-endpoints-sync-controller
6 | RUN chmod +x /k8s-endpoints-sync-controller
7 | CMD ["/k8s-endpoints-sync-controller"]
--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
1 | ## Copyright © 2018 VMware, Inc. All Rights Reserved.
2 | ## SPDX-License-Identifier: BSD-2-Clause
3 | 
4 | GO ?= go
5 | GOVERSION ?= go1.9.2
6 | SHELL := /bin/bash
7 | 
8 | .DEFAULT_GOAL := build
9 | 
10 | .PHONY: goversion
11 | goversion: ## Checks that the installed go version matches $(GOVERSION)
12 | 	@echo Checking go version...
13 | 	@( $(GO) version | grep -q $(GOVERSION) ) || ( echo "Please install $(GOVERSION) (found: $$($(GO) version))" && exit 1 )
14 | 
15 | .PHONY: build
16 | build: ## Generates the binary
17 | 	./build_helper.sh build
18 | 
19 | .PHONY: buildimage
20 | buildimage: ## builds the docker image
21 | 	./build_helper.sh buildimage $$TAG
22 | 
23 | .PHONY: pushimage
24 | pushimage: ## push docker image to registry
25 | 	./build_helper.sh pushimage $$TAG $$EXT_TAG
26 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | 
2 | 
3 | # k8s-endpoints-sync-controller
4 | 
5 | ## Overview
6 | This controller, deployed on each connected Kubernetes cluster, replicates Kubernetes Service and Endpoints objects across clusters so that services can be discovered and reached across clusters using their Kubernetes service names.
7 | 
8 | The communication across clusters relies on kube-proxy to update the iptables rules on each node as the controller creates/updates the API objects.
9 | 
10 | ### Talks
11 | Lightning talk at SREcon Asia 2019: https://www.usenix.org/sites/default/files/conference/protected-files/srecon19apac_slides_lightning.pdf#page=23
12 | 
13 | ### Prerequisites
14 | 
15 | * All the clusters should have different clusterCIDRs.
16 | * All the clusters should be connected so that there is pod-to-pod connectivity across clusters. This can be achieved using:
17 |   1) A VPN across clusters with L3 routing, if using the kubenet network plugin.
18 |   2) On AWS, VPC peering between two EKS clusters.
19 |   3) On GKE, https://istio.io/docs/examples/multicluster/gke/
20 |   4) On IBM Cloud, https://istio.io/docs/examples/multicluster/icp/
21 | * The Kubernetes API server of every cluster should be reachable from the other clusters.
22 | 
23 | ### Build & Run
24 | 
25 | 1. Install Go 1.9 or a higher version
26 | 2. Install Glide
27 | 3. Check out the project onto the GOPATH
28 | 4. 
run *glide up* to import all the dependencies
29 | 5. run *make build* to build the binary
30 | 6. run *make buildimage TAG=* to build the Docker image (set TAG to the desired image tag)
31 | 
32 | To run in a cluster, the executable expects the kubeconfig files of the clusters to connect to, mounted at /etc/kubeconfigs. \
33 | The following environment variables can be set:
34 | 1. NSTOWATCH - Array of namespaces in which Service and Endpoints objects will be watched and replicated. (Default: all)
35 | 2. EXCLUDE - Array of namespaces in which objects will not be replicated. (Default: none)
36 | 
37 | 
38 | ## Documentation
39 | 
40 | Assuming the pod IP addresses are routable across clusters, the goal is to enable communication through K8s Service objects, i.e. app A in region A should talk to app B in region B using app B's K8s service name, and vice versa.
41 | This is achieved by creating, in cluster A:
42 | 1. an app B Service object (headed or headless) without pod selectors
43 | 2. an Endpoints object whose endpoints are the IP addresses of app B's pods in cluster B.
44 | 
45 | This enables kube-proxy in cluster A to load-balance requests for app B's service name across app B's pods.
46 | 
47 | ![cross-cluster service discovery example](discovery.png)
48 | 
49 | ### Annotations for Service Migration
50 | The controller provides annotations that service teams can use to migrate services across clusters with no downtime.
51 | The following describes how to use these annotations when migrating a service from a source cluster to a target cluster.
52 | 
53 | **Annotation Key: vmware.com/syndicate-mode** \
54 | **Annotation Values: {source, receiver, singular}**
55 | 
56 | Before migration, the service is replicated from the source cluster to the target cluster, i.e. the Service object in the source cluster has the selector, while the replicated Service object in the target cluster has no selector and its Endpoints object is maintained by the controller.
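This pre-migration state can be pictured as a pair of manifests. The sketch below is illustrative only: the names, namespace, port, pod IP, and the `replicated` label value are made up (the label key comes from the controller's config).

``` yaml
# Source cluster: the real Service, with a pod selector.
apiVersion: v1
kind: Service
metadata:
  name: app-b
  namespace: demo
spec:
  selector:
    app: app-b
  ports:
  - port: 80
---
# Target cluster: the replicated Service has no selector...
apiVersion: v1
kind: Service
metadata:
  name: app-b
  namespace: demo
  labels:
    replicated: "true"
spec:
  ports:
  - port: 80
---
# ...and its Endpoints object, maintained by the controller,
# points at app B's pod IPs in the source cluster.
apiVersion: v1
kind: Endpoints
metadata:
  name: app-b
  namespace: demo
  labels:
    replicated: "true"
subsets:
- addresses:
  - ip: 10.244.1.15
  ports:
  - port: 80
```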
After migration, the service is replicated from the target cluster to the source cluster.
57 | 
58 | ##### Migrating a K8s Service object with selector, for stateful services
59 | After deploying the new pods in the target cluster and completing the data migration:
60 | 1. Add the annotation 'receiver' in the source cluster. This should update the Service object in the target cluster with the annotation 'source'. Also, the controller will remove the selector from the Service in the source cluster, and replication will now happen from the target to the source cluster.
61 | 2. Update the Service object in the target cluster with the right selector.
62 | 
63 | ##### Migrating a K8s Service object with selector, for stateless services
64 | After deploying the new pods in the target cluster:
65 | 1. Add the annotation 'union' in the source cluster. This removes the selector from the Service object in the source cluster and updates the Endpoints object in both clusters with the union of pod IP addresses (old IPs + new IPs). This ensures that requests for the service can be served by any of the pods in either cluster.
66 | 2. Update the Service object in the target cluster with the 'source' annotation. This should update the Service object in the source cluster with the annotation 'receiver', and replication will now happen from the target to the source cluster.
67 | 3. Update the Service object in the target cluster with the right selector, if needed.
68 | 
69 | ##### Stop replicating a K8s Service & Endpoints object
70 | 1. Update the Service object in any cluster with the annotation 'singular'. This stops replicating that service and removes the replicated Service and Endpoints objects.
71 | Creating a Service object in any cluster with the annotation 'singular' will likewise not create replicated objects.
72 | 
73 | ## Releases & Major Branches
74 | 
75 | ## Contributing
76 | 
77 | The k8s-endpoints-sync-controller project team welcomes contributions from the community.
Before you start working with k8s-endpoints-sync-controller, please read our [Developer Certificate of Origin](https://cla.vmware.com/dco). All contributions to this repository must be signed as described on that page. Your signature certifies that you wrote the patch or have the right to pass it on as an open-source patch. For more detailed information, refer to [CONTRIBUTING.md](CONTRIBUTING.md).
78 | 
79 | ## License
80 | 
81 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
82 | 
83 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
84 | 
85 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
86 | 
--------------------------------------------------------------------------------
/build_helper.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | # Copyright © 2018 VMware, Inc. All Rights Reserved.
3 | # SPDX-License-Identifier: BSD-2-Clause
4 | 
5 | scriptpath=$(dirname "$0")
6 | srcpath=$(dirname "$scriptpath")
7 | echo "$scriptpath"
8 | cd "$srcpath"
9 | 
10 | echo "Building from:"
11 | pwd
12 | echo "With GOPATH:"
13 | echo "$GOPATH"
14 | go version
15 | go env
16 | 
17 | if [ "$1" = "build" ]; then
18 |     echo "==> building k8s-endpoints-sync-controller binary"
19 |     [ -e ./dist/k8s-endpoints-sync-controller ] && rm ./dist/k8s-endpoints-sync-controller
20 |     CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o ./dist/k8s-endpoints-sync-controller src/main/main.go
21 |     echo "==> Results:"
22 |     echo "==> ./dist"
23 |     ls ./dist/k8s-endpoints-sync-controller
24 |     exit
25 | fi
26 | 
27 | if [ "$1" = "buildimage" ]; then
28 |     echo "==> building k8s-endpoints-sync-controller docker image"
29 |     echo "tag: $2"
30 |     docker build -t "$2" -f Dockerfile .
31 | fi
32 | 
33 | if [ "$1" = "pushimage" ]; then
34 |     echo "==> pushing k8s-endpoints-sync-controller docker image $2 to registry"
35 |     tag=$2
36 |     ext_tag=$3
37 |     docker tag "$tag" "$ext_tag"
38 |     docker push "$ext_tag"
39 | fi
--------------------------------------------------------------------------------
/discovery.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vmware-archive/k8s-endpoints-sync-controller/660d881f0d356de2a625e4d78f2f6b2bb9c08cd4/discovery.png
--------------------------------------------------------------------------------
/glide.lock:
--------------------------------------------------------------------------------
1 | hash: d7af598598c02c99b8fc0fc2697c3b34501e38097dbe2af3db8cae46314607be
2 | updated: 2018-08-09T12:57:58.628435777-07:00
3 | imports:
4 | - name: github.com/davecgh/go-spew
5 |   version: 782f4967f2dc4564575ca782fe2d04090b5faca8
6 |   subpackages:
7 |   - spew
8 | - name: github.com/emicklei/go-restful
9 |   version: ff4f55a206334ef123e4f79bbf348980da81ca46
10 |   subpackages:
11 |   - log
12 | - name: 
github.com/ghodss/yaml 13 | version: 73d445a93680fa1a78ae23a5839bad48f32ba1ee 14 | - name: github.com/go-openapi/jsonpointer 15 | version: 46af16f9f7b149af66e5d1bd010e3574dc06de98 16 | - name: github.com/go-openapi/jsonreference 17 | version: 13c6e3589ad90f49bd3e3bbe2c2cb3d7a4142272 18 | - name: github.com/go-openapi/spec 19 | version: 7abd5745472fff5eb3685386d5fb8bf38683154d 20 | - name: github.com/go-openapi/swag 21 | version: f3f9494671f93fcff853e3c6e9e948b3eb71e590 22 | - name: github.com/gogo/protobuf 23 | version: c0656edd0d9eab7c66d1eb0c568f9039345796f7 24 | subpackages: 25 | - proto 26 | - sortkeys 27 | - name: github.com/golang/glog 28 | version: 44145f04b68cf362d9c4df2182967c2275eaefed 29 | - name: github.com/golang/protobuf 30 | version: 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 31 | subpackages: 32 | - proto 33 | - ptypes 34 | - ptypes/any 35 | - ptypes/duration 36 | - ptypes/timestamp 37 | - name: github.com/google/btree 38 | version: 7d79101e329e5a3adf994758c578dab82b90c017 39 | - name: github.com/google/gofuzz 40 | version: 44d81051d367757e1c7c6a5a86423ece9afcf63c 41 | - name: github.com/googleapis/gnostic 42 | version: 0c5108395e2debce0d731cf0287ddf7242066aba 43 | subpackages: 44 | - OpenAPIv2 45 | - compiler 46 | - extensions 47 | - name: github.com/gregjones/httpcache 48 | version: 787624de3eb7bd915c329cba748687a3b22666a6 49 | subpackages: 50 | - diskcache 51 | - name: github.com/hashicorp/golang-lru 52 | version: a0d98a5f288019575c6d1f4bb1573fef2d1fcdc4 53 | subpackages: 54 | - simplelru 55 | - name: github.com/howeyc/gopass 56 | version: bf9dde6d0d2c004a008c27aaee91170c786f6db8 57 | - name: github.com/imdario/mergo 58 | version: 6633656539c1639d9d78127b7d47c622b5d7b6dc 59 | - name: github.com/json-iterator/go 60 | version: 36b14963da70d11297d313183d7e6388c8510e1e 61 | - name: github.com/juju/ratelimit 62 | version: 5b9ff866471762aa2ab2dced63c9fb6f53921342 63 | - name: github.com/mailru/easyjson 64 | version: 
2f5df55504ebc322e4d52d34df6a1f5b503bf26d 65 | subpackages: 66 | - buffer 67 | - jlexer 68 | - jwriter 69 | - name: github.com/peterbourgon/diskv 70 | version: 5f041e8faa004a95c88a202771f4cc3e991971e6 71 | - name: github.com/PuerkitoBio/purell 72 | version: 8a290539e2e8629dbc4e6bad948158f790ec31f4 73 | - name: github.com/PuerkitoBio/urlesc 74 | version: 5bd2802263f21d8788851d5305584c82a5c75d7e 75 | - name: github.com/spf13/pflag 76 | version: 9ff6c6923cfffbcd502984b8e0c80539a94968b7 77 | - name: go.uber.org/atomic 78 | version: 1ea20fb1cbb1cc08cbd0d913a96dead89aa18289 79 | - name: go.uber.org/multierr 80 | version: 3c4937480c32f4c13a875a1829af76c98ca3d40a 81 | - name: go.uber.org/zap 82 | version: 35aad584952c3e7020db7b839f6b102de6271f89 83 | subpackages: 84 | - buffer 85 | - internal/bufferpool 86 | - internal/color 87 | - internal/exit 88 | - zapcore 89 | - name: golang.org/x/crypto 90 | version: 81e90905daefcd6fd217b62423c0908922eadb30 91 | subpackages: 92 | - ssh/terminal 93 | - name: golang.org/x/net 94 | version: 1c05540f6879653db88113bc4a2b70aec4bd491f 95 | subpackages: 96 | - context 97 | - http2 98 | - http2/hpack 99 | - idna 100 | - lex/httplex 101 | - name: golang.org/x/sys 102 | version: 95c6576299259db960f6c5b9b69ea52422860fce 103 | subpackages: 104 | - unix 105 | - windows 106 | - name: golang.org/x/text 107 | version: b19bf474d317b857955b12035d2c5acb57ce8b01 108 | subpackages: 109 | - cases 110 | - internal 111 | - internal/tag 112 | - language 113 | - runes 114 | - secure/bidirule 115 | - secure/precis 116 | - transform 117 | - unicode/bidi 118 | - unicode/norm 119 | - width 120 | - name: gopkg.in/inf.v0 121 | version: 3887ee99ecf07df5b447e9b00d9c0b2adaa9f3e4 122 | - name: gopkg.in/yaml.v2 123 | version: 53feefa2559fb8dfa8d81baad31be332c97d6c77 124 | - name: k8s.io/api 125 | version: 11147472b7c934c474a2c484af3c0c5210b7a3af 126 | subpackages: 127 | - admissionregistration/v1alpha1 128 | - admissionregistration/v1beta1 129 | - apps/v1 130 | - 
apps/v1beta1 131 | - apps/v1beta2 132 | - authentication/v1 133 | - authentication/v1beta1 134 | - authorization/v1 135 | - authorization/v1beta1 136 | - autoscaling/v1 137 | - autoscaling/v2beta1 138 | - batch/v1 139 | - batch/v1beta1 140 | - batch/v2alpha1 141 | - certificates/v1beta1 142 | - core/v1 143 | - events/v1beta1 144 | - extensions/v1beta1 145 | - imagepolicy/v1alpha1 146 | - networking/v1 147 | - policy/v1beta1 148 | - rbac/v1 149 | - rbac/v1alpha1 150 | - rbac/v1beta1 151 | - scheduling/v1alpha1 152 | - settings/v1alpha1 153 | - storage/v1 154 | - storage/v1alpha1 155 | - storage/v1beta1 156 | - name: k8s.io/apimachinery 157 | version: 180eddb345a5be3a157cea1c624700ad5bd27b8f 158 | subpackages: 159 | - pkg/api/errors 160 | - pkg/api/meta 161 | - pkg/api/resource 162 | - pkg/apis/meta/internalversion 163 | - pkg/apis/meta/v1 164 | - pkg/apis/meta/v1/unstructured 165 | - pkg/apis/meta/v1alpha1 166 | - pkg/conversion 167 | - pkg/conversion/queryparams 168 | - pkg/fields 169 | - pkg/labels 170 | - pkg/runtime 171 | - pkg/runtime/schema 172 | - pkg/runtime/serializer 173 | - pkg/runtime/serializer/json 174 | - pkg/runtime/serializer/protobuf 175 | - pkg/runtime/serializer/recognizer 176 | - pkg/runtime/serializer/streaming 177 | - pkg/runtime/serializer/versioning 178 | - pkg/selection 179 | - pkg/types 180 | - pkg/util/cache 181 | - pkg/util/clock 182 | - pkg/util/diff 183 | - pkg/util/errors 184 | - pkg/util/framer 185 | - pkg/util/intstr 186 | - pkg/util/json 187 | - pkg/util/net 188 | - pkg/util/runtime 189 | - pkg/util/sets 190 | - pkg/util/validation 191 | - pkg/util/validation/field 192 | - pkg/util/wait 193 | - pkg/util/yaml 194 | - pkg/version 195 | - pkg/watch 196 | - third_party/forked/golang/reflect 197 | - name: k8s.io/client-go 198 | version: 78700dec6369ba22221b72770783300f143df150 199 | subpackages: 200 | - discovery 201 | - informers/core/v1 202 | - informers/internalinterfaces 203 | - kubernetes 204 | - kubernetes/scheme 205 | - 
kubernetes/typed/admissionregistration/v1alpha1 206 | - kubernetes/typed/admissionregistration/v1beta1 207 | - kubernetes/typed/apps/v1 208 | - kubernetes/typed/apps/v1beta1 209 | - kubernetes/typed/apps/v1beta2 210 | - kubernetes/typed/authentication/v1 211 | - kubernetes/typed/authentication/v1beta1 212 | - kubernetes/typed/authorization/v1 213 | - kubernetes/typed/authorization/v1beta1 214 | - kubernetes/typed/autoscaling/v1 215 | - kubernetes/typed/autoscaling/v2beta1 216 | - kubernetes/typed/batch/v1 217 | - kubernetes/typed/batch/v1beta1 218 | - kubernetes/typed/batch/v2alpha1 219 | - kubernetes/typed/certificates/v1beta1 220 | - kubernetes/typed/core/v1 221 | - kubernetes/typed/events/v1beta1 222 | - kubernetes/typed/extensions/v1beta1 223 | - kubernetes/typed/networking/v1 224 | - kubernetes/typed/policy/v1beta1 225 | - kubernetes/typed/rbac/v1 226 | - kubernetes/typed/rbac/v1alpha1 227 | - kubernetes/typed/rbac/v1beta1 228 | - kubernetes/typed/scheduling/v1alpha1 229 | - kubernetes/typed/settings/v1alpha1 230 | - kubernetes/typed/storage/v1 231 | - kubernetes/typed/storage/v1alpha1 232 | - kubernetes/typed/storage/v1beta1 233 | - listers/core/v1 234 | - pkg/version 235 | - rest 236 | - rest/watch 237 | - tools/auth 238 | - tools/cache 239 | - tools/clientcmd 240 | - tools/clientcmd/api 241 | - tools/clientcmd/api/latest 242 | - tools/clientcmd/api/v1 243 | - tools/metrics 244 | - tools/pager 245 | - tools/reference 246 | - transport 247 | - util/buffer 248 | - util/cert 249 | - util/flowcontrol 250 | - util/homedir 251 | - util/integer 252 | - name: k8s.io/kube-openapi 253 | version: 39a7bf85c140f972372c2a0d1ee40adbf0c8bfe1 254 | subpackages: 255 | - pkg/common 256 | testImports: [] 257 | -------------------------------------------------------------------------------- /glide.yaml: -------------------------------------------------------------------------------- 1 | # Copyright © 2018 VMware, Inc. All Rights Reserved. 
2 | # SPDX-License-Identifier: MIT 3 | 4 | package: github.com/vmware/k8s-endpoints-sync-controller 5 | import: 6 | - package: k8s.io/client-go 7 | version: v6.0.0 8 | - package: go.uber.org/zap 9 | version: v1.7.1 10 | -------------------------------------------------------------------------------- /src/config/config.go: -------------------------------------------------------------------------------- 1 | // Copyright © 2018 VMware, Inc. All Rights Reserved. 2 | // SPDX-License-Identifier: MIT 3 | 4 | package config 5 | 6 | import ( 7 | "time" 8 | ) 9 | 10 | type Config struct { 11 | ClustersToWatch []string 12 | ClusterToApply string 13 | NamespaceToWatch string 14 | NamespacesToExclude []string 15 | ReplicatedLabelVal string 16 | WatchNamespaces bool 17 | WatchEndpoints bool 18 | WatchServices bool 19 | ResyncPeriod time.Duration 20 | } 21 | 22 | const REPLICATED_LABEL_KEY = "replicated" 23 | const KUBERNETES = "kubernetes" 24 | const SVC_ANNOTATION_SYNDICATE_KEY = "vmware.com/syndicate-mode" 25 | const SVC_ANNOTATION_UNION = "union" 26 | const SVC_ANNOTATION_SOURCE = "source" 27 | const SVC_ANNOTATION_RECEIVER = "receiver" 28 | const SVC_ANNOTATION_SINGULAR = "singular" 29 | -------------------------------------------------------------------------------- /src/controller/controller.go: -------------------------------------------------------------------------------- 1 | // Copyright © 2018 VMware, Inc. All Rights Reserved. 
2 | // SPDX-License-Identifier: MIT 3 | 4 | package controller 5 | 6 | import ( 7 | c "github.com/vmware/k8s-endpoints-sync-controller/src/config" 8 | "github.com/vmware/k8s-endpoints-sync-controller/src/handlers" 9 | log "github.com/vmware/k8s-endpoints-sync-controller/src/log" 10 | "k8s.io/api/core/v1" 11 | "k8s.io/apimachinery/pkg/util/wait" 12 | informercorev1 "k8s.io/client-go/informers/core/v1" 13 | "k8s.io/client-go/kubernetes" 14 | "k8s.io/client-go/tools/cache" 15 | "k8s.io/client-go/tools/clientcmd" 16 | ) 17 | 18 | func StartController(kubeconfigPath string, eventHandler handlers.Handler, config *c.Config) error { 19 | kubeClient, err := getkubeclient(kubeconfigPath) 20 | if err != nil { 21 | return err 22 | } 23 | if config.WatchNamespaces { 24 | watchNamespaces(kubeClient, eventHandler, config) 25 | } 26 | if config.WatchEndpoints { 27 | watchEndpoints(kubeClient, eventHandler, config) 28 | } 29 | if config.WatchServices { 30 | watchServices(kubeClient, eventHandler, config) 31 | } 32 | return nil 33 | } 34 | 35 | func getkubeclient(kubeconfigPath string) (*kubernetes.Clientset, error) { 36 | config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath) 37 | log.Infof("building kubeclient") 38 | if err != nil { 39 | log.Errorf("Error with kubeconfig %s", err) 40 | return nil, err 41 | } 42 | clientset, err := kubernetes.NewForConfig(config) 43 | if err != nil { 44 | return nil, err 45 | } 46 | return clientset, nil 47 | } 48 | 49 | func watchNamespaces(client *kubernetes.Clientset, eventHandler handlers.Handler, config *c.Config) cache.Store { 50 | 51 | indexers := cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc} 52 | informer := informercorev1.NewNamespaceInformer(client, 0, indexers) 53 | 54 | informer.AddEventHandler( 55 | cache.ResourceEventHandlerFuncs{ 56 | AddFunc: eventHandler.ObjectCreated, 57 | UpdateFunc: eventHandler.ObjectUpdated, 58 | DeleteFunc: eventHandler.ObjectDeleted, 59 | }, 60 | ) 61 | go 
informer.Run(wait.NeverStop) 62 | log.Infof("Waiting for namespaces to be synced") 63 | cache.WaitForCacheSync(wait.NeverStop, informer.HasSynced) 64 | log.Infof("synced namespaces") 65 | 66 | return nil 67 | } 68 | 69 | func watchEndpoints(client *kubernetes.Clientset, eventHandler handlers.Handler, config *c.Config) cache.Store { 70 | 71 | indexers := cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc} 72 | informer := informercorev1.NewEndpointsInformer(client, v1.NamespaceAll, config.ResyncPeriod, indexers) 73 | 74 | informer.AddEventHandler( 75 | cache.ResourceEventHandlerFuncs{ 76 | AddFunc: eventHandler.ObjectCreated, 77 | UpdateFunc: eventHandler.ObjectUpdated, 78 | DeleteFunc: eventHandler.ObjectDeleted, 79 | }, 80 | ) 81 | go informer.Run(wait.NeverStop) 82 | return nil 83 | } 84 | 85 | func watchServices(client *kubernetes.Clientset, eventHandler handlers.Handler, config *c.Config) cache.Store { 86 | indexers := cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc} 87 | informer := informercorev1.NewServiceInformer(client, v1.NamespaceAll, config.ResyncPeriod, indexers) 88 | 89 | informer.AddEventHandler( 90 | cache.ResourceEventHandlerFuncs{ 91 | AddFunc: eventHandler.ObjectCreated, 92 | UpdateFunc: eventHandler.ObjectUpdated, 93 | DeleteFunc: eventHandler.ObjectDeleted, 94 | }, 95 | ) 96 | go informer.Run(wait.NeverStop) 97 | return nil 98 | } 99 | -------------------------------------------------------------------------------- /src/handlers/clusterdiscoveryhandler.go: -------------------------------------------------------------------------------- 1 | // Copyright © 2018 VMware, Inc. All Rights Reserved. 
2 | // SPDX-License-Identifier: MIT 3 | 4 | package handlers 5 | 6 | import ( 7 | c "github.com/vmware/k8s-endpoints-sync-controller/src/config" 8 | "github.com/vmware/k8s-endpoints-sync-controller/src/log" 9 | "github.com/vmware/k8s-endpoints-sync-controller/src/utils" 10 | v1 "k8s.io/api/core/v1" 11 | meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" 12 | "k8s.io/client-go/kubernetes" 13 | "k8s.io/client-go/rest" 14 | "strings" 15 | ) 16 | 17 | type ClusterDiscoveryHandler struct { 18 | kubeclient *kubernetes.Clientset 19 | label string 20 | config *c.Config 21 | replicatedNamespaces *utils.ConcurrentMap 22 | createHandler HandlerFunc 23 | updateHandler HandlerFunc 24 | deleteHandler HandlerFunc 25 | } 26 | 27 | type HandlerFunc struct { 28 | handle func(obj interface{}) 29 | } 30 | 31 | func (s *ClusterDiscoveryHandler) Init(conf *c.Config) error { 32 | config, configErr := rest.InClusterConfig() 33 | if configErr != nil { 34 | log.Errorf("Error fetching incluster config %s", configErr) 35 | return configErr 36 | } 37 | kubeclient, err := kubernetes.NewForConfig(config) 38 | if err != nil { 39 | log.Errorf("Error creating client with inclusterConfig, %s", err) 40 | return err 41 | } 42 | s.kubeclient = kubeclient 43 | s.config = conf 44 | s.replicatedNamespaces = utils.NewConcurrentMap() 45 | s.prepareCreateHandler() 46 | s.prepareUpdateHandler() 47 | s.prepareDeleteHandler() 48 | return nil 49 | } 50 | 51 | func (s *ClusterDiscoveryHandler) prepareCreateHandler() { 52 | s.createHandler = HandlerFunc{ 53 | handle: func(obj interface{}) { 54 | switch v := obj.(type) { 55 | case *v1.Namespace: 56 | s.handleNamespaceCreate(v) 57 | case *v1.Endpoints: 58 | s.handleEnpointCreateOrUpdate(v) 59 | case *v1.Service: 60 | s.handleServiceCreate(v, false) 61 | } 62 | }, 63 | } 64 | } 65 | 66 | func (s *ClusterDiscoveryHandler) prepareUpdateHandler() { 67 | s.updateHandler = HandlerFunc{ 68 | handle: func(obj interface{}) { 69 | switch v := obj.(type) { 70 | case *v1.Namespace: 
71 | s.handleNamespaceUpdate(v) 72 | case *v1.Endpoints: 73 | s.handleEnpointCreateOrUpdate(v) 74 | case *v1.Service: 75 | s.handleServiceUpdate(v) 76 | } 77 | }, 78 | } 79 | } 80 | 81 | func (s *ClusterDiscoveryHandler) prepareDeleteHandler() { 82 | s.deleteHandler = HandlerFunc{ 83 | handle: func(obj interface{}) { 84 | switch v := obj.(type) { 85 | case *v1.Namespace: 86 | s.handleNamespaceDelete(v) 87 | case *v1.Endpoints: 88 | s.handleEnpointDelete(v) 89 | case *v1.Service: 90 | s.handleServiceDelete(v) 91 | } 92 | }, 93 | } 94 | } 95 | 96 | func (s *ClusterDiscoveryHandler) ObjectCreated(obj interface{}) { 97 | if s.shouldProcessEvent(obj) { 98 | s.handleEvent(obj, s.createHandler) 99 | } 100 | } 101 | 102 | func (s *ClusterDiscoveryHandler) handleEvent(obj interface{}, handler HandlerFunc) { 103 | handler.handle(obj) 104 | } 105 | 106 | func (s *ClusterDiscoveryHandler) ObjectDeleted(obj interface{}) { 107 | if s.shouldProcessEvent(obj) { 108 | s.handleEvent(obj, s.deleteHandler) 109 | } 110 | } 111 | 112 | func (s *ClusterDiscoveryHandler) ObjectUpdated(oldObj, newObj interface{}) { 113 | if s.shouldProcessEvent(newObj) { 114 | s.handleEvent(newObj, s.updateHandler) 115 | } 116 | } 117 | 118 | func (s *ClusterDiscoveryHandler) handleEnpointCreateOrUpdate(endpoints *v1.Endpoints) { 119 | log.Debugf("updating endpoints %s namespace %s", endpoints.Name, endpoints.Namespace) 120 | /*b, _ := json.MarshalIndent(endpoints, "", " ") 121 | fmt.Println("In endpoint before update :", string(b))*/ 122 | var endpointsToApply v1.Endpoints 123 | clusterCIDR := "" 124 | syndicate_ep := false 125 | if strings.HasSuffix(endpoints.Name, "-syndicate") || strings.HasSuffix(endpoints.SelfLink, "-syndicate") { 126 | endpoints.Name = strings.TrimSuffix(endpoints.Name, "-syndicate") 127 | syndicate_ep = true 128 | } 129 | endpointsToApply.Name = endpoints.Name 130 | endpointsToApply.Labels = endpoints.Labels 131 | if endpointsToApply.Labels == nil { 132 | endpointsToApply.Labels = 
map[string]string{} 133 | } 134 | endpointsToApply.Labels[c.REPLICATED_LABEL_KEY] = s.config.ReplicatedLabelVal 135 | 136 | for _, v := range endpoints.Subsets { 137 | var endpointset v1.EndpointSubset 138 | for _, address := range v.Addresses { 139 | if address.IP != "" { 140 | if clusterCIDR == "" { 141 | clusterCIDR = address.IP[0:6] 142 | } 143 | endpointAddress := v1.EndpointAddress{IP: address.IP} 144 | if address.Hostname != "" { 145 | endpointAddress.Hostname = address.Hostname 146 | } 147 | endpointset.Addresses = append(endpointset.Addresses, endpointAddress) 148 | } 149 | } 150 | if len(endpointset.Addresses) > 0 { 151 | for _, port := range v.Ports { 152 | endpointPort := v1.EndpointPort{Name: port.Name, Port: port.Port, Protocol: port.Protocol} 153 | endpointset.Ports = append(endpointset.Ports, endpointPort) 154 | } 155 | endpointsToApply.Subsets = append(endpointsToApply.Subsets, endpointset) 156 | } 157 | } 158 | unionSvcEndpoint, singularSvcEndpoint := s.checkIfUnionorSingularSvcEndpoint(endpoints) 159 | if singularSvcEndpoint { 160 | return 161 | } 162 | existingEndpoints, _ := s.kubeclient.CoreV1().Endpoints(endpoints.Namespace).Get(endpoints.Name, meta_v1.GetOptions{}) 163 | if existingEndpoints != nil && existingEndpoints.Name == "" { 164 | if _, eErr := s.kubeclient.CoreV1().Endpoints(endpoints.Namespace).Create(&endpointsToApply); eErr != nil { 165 | log.Errorf("Error creating endpoint %s", eErr) 166 | return 167 | } 168 | } else { 169 | if !syndicate_ep && unionSvcEndpoint { 170 | if !s.changeInEndpoints(existingEndpoints, &endpointsToApply) { 171 | log.Infof("No change in endpoints %s namespace %s", existingEndpoints.Name, existingEndpoints.Namespace) 172 | return 173 | } 174 | } else if syndicate_ep { 175 | for _, v := range existingEndpoints.Subsets { 176 | var endpointset v1.EndpointSubset 177 | for _, address := range v.Addresses { 178 | if clusterCIDR != "" { 179 | if !strings.HasPrefix(address.IP, clusterCIDR) { 180 | endpointAddress 
:= v1.EndpointAddress{IP: address.IP} 181 | if address.Hostname != "" { 182 | endpointAddress.Hostname = address.Hostname 183 | } 184 | endpointset.Addresses = append(endpointset.Addresses, endpointAddress) 185 | } 186 | } else { 187 | endpointAddress := v1.EndpointAddress{IP: address.IP} 188 | if address.Hostname != "" { 189 | endpointAddress.Hostname = address.Hostname 190 | } 191 | endpointset.Addresses = append(endpointset.Addresses, endpointAddress) 192 | } 193 | } 194 | if len(endpointset.Addresses) > 0 { 195 | for _, port := range v.Ports { 196 | endpointPort := v1.EndpointPort{Name: port.Name, Port: port.Port, Protocol: port.Protocol} 197 | endpointset.Ports = append(endpointset.Ports, endpointPort) 198 | } 199 | endpointsToApply.Subsets = append(endpointsToApply.Subsets, endpointset) 200 | } 201 | } 202 | } 203 | if unionSvcEndpoint { 204 | endpointsToApply.Labels[c.REPLICATED_LABEL_KEY] = "false" 205 | } 206 | if _, eErr := s.kubeclient.CoreV1().Endpoints(endpoints.Namespace).Update(&endpointsToApply); eErr != nil { 207 | log.Errorf("Error updating endpoint %s", eErr) 208 | return 209 | } 210 | } 211 | } 212 | 213 | func (s *ClusterDiscoveryHandler) changeInEndpoints(existingEndpoints *v1.Endpoints, endpointsToApply *v1.Endpoints) bool { 214 | ipmap := make(map[string]bool) 215 | for _, v := range existingEndpoints.Subsets { 216 | for _, address := range v.Addresses { 217 | ipmap[address.IP] = true 218 | } 219 | } 220 | count := 0 221 | for _, v := range endpointsToApply.Subsets { 222 | for _, address := range v.Addresses { 223 | if _, ok := ipmap[address.IP]; ok { 224 | count++ 225 | } else { 226 | return true 227 | } 228 | } 229 | } 230 | return count != len(ipmap) 231 | } 232 | 233 | func (s *ClusterDiscoveryHandler) handleServiceCreate(svc *v1.Service, syndicate_svc bool) { 234 | log.Infof("creating service %s, namespace %s", svc.Name, svc.Namespace) 235 | if syndicate_svc { 236 | svc.Name = svc.Name + "-syndicate" 237 | } 238 | existingService, _ := 
s.kubeclient.CoreV1().Services(svc.Namespace).Get(svc.Name, meta_v1.GetOptions{}) 239 | if existingService != nil && existingService.Name == "" { 240 | if svc.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] == c.SVC_ANNOTATION_SINGULAR { 241 | return 242 | } 243 | service := v1.Service{} 244 | service.Name = svc.Name 245 | service.Namespace = svc.Namespace 246 | service.Spec.Ports = []v1.ServicePort{} 247 | service.Labels = svc.Labels 248 | if service.Labels == nil { 249 | service.Labels = map[string]string{} 250 | } 251 | if syndicate_svc { 252 | service.Spec.Selector = svc.Spec.Selector 253 | service.Labels[c.REPLICATED_LABEL_KEY] = "false" 254 | } else { 255 | service.Labels[c.REPLICATED_LABEL_KEY] = s.config.ReplicatedLabelVal 256 | } 257 | for _, port := range svc.Spec.Ports { 258 | service.Spec.Ports = append(service.Spec.Ports, v1.ServicePort{Protocol: port.Protocol, Name: port.Name, Port: port.Port, TargetPort: port.TargetPort}) 259 | } 260 | if _, err := s.kubeclient.CoreV1().Services(svc.Namespace).Create(&service); err != nil { 261 | log.Errorf("Error creating service %s", err) 262 | return 263 | } 264 | } else { 265 | existingService.Spec.Ports = []v1.ServicePort{} 266 | for _, port := range svc.Spec.Ports { 267 | existingService.Spec.Ports = append(existingService.Spec.Ports, v1.ServicePort{Protocol: port.Protocol, Name: port.Name, Port: port.Port, TargetPort: port.TargetPort}) 268 | } 269 | existingService.Labels = svc.Labels 270 | if existingService.Labels == nil { 271 | existingService.Labels = map[string]string{} 272 | } 273 | if svc.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] == c.SVC_ANNOTATION_SINGULAR { 274 | if existingService.Labels[c.REPLICATED_LABEL_KEY] == "true" && 275 | existingService.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] != c.SVC_ANNOTATION_SINGULAR { 276 | s.handleServiceDelete(existingService) 277 | } 278 | return 279 | } 280 | existingService.Labels[c.REPLICATED_LABEL_KEY] = s.config.ReplicatedLabelVal 281 | if _, err := 
s.kubeclient.CoreV1().Services(svc.Namespace).Update(existingService); err != nil { 282 | log.Errorf("Error updating service %s", err) 283 | return 284 | } 285 | } 286 | } 287 | 288 | func (s *ClusterDiscoveryHandler) handleServiceUpdate(service *v1.Service) { 289 | log.Infof("updating service %s namespace %s", service.Name, service.Namespace) 290 | 291 | existingService, err := s.kubeclient.CoreV1().Services(service.Namespace).Get(service.Name, meta_v1.GetOptions{}) 292 | if err != nil { 293 | log.Errorf("Error retrieving service obj, err %s", err) 294 | return 295 | } 296 | if service.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] == c.SVC_ANNOTATION_SINGULAR { 297 | if existingService.Labels[c.REPLICATED_LABEL_KEY] == "true" && 298 | existingService.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] != c.SVC_ANNOTATION_SINGULAR { 299 | s.handleServiceDelete(existingService) 300 | } 301 | return 302 | } 303 | if service.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] == c.SVC_ANNOTATION_UNION { 304 | 305 | if existingService.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] != c.SVC_ANNOTATION_UNION { 306 | if existingService.Annotations == nil { 307 | existingService.Annotations = map[string]string{} 308 | } 309 | existingService.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] = c.SVC_ANNOTATION_UNION 310 | if existingService.Labels == nil { 311 | existingService.Labels = map[string]string{} 312 | } 313 | existingService.Labels[c.REPLICATED_LABEL_KEY] = "false" 314 | 315 | } else if existingService.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] == c.SVC_ANNOTATION_UNION { 316 | if existingService.Labels == nil { 317 | existingService.Labels = map[string]string{} 318 | } 319 | existingService.Labels[c.REPLICATED_LABEL_KEY] = "true" 320 | } 321 | existingEndpoints, _ := s.kubeclient.CoreV1().Endpoints(service.Namespace).Get(service.Name, meta_v1.GetOptions{}) 322 | if existingEndpoints.Labels == nil { 323 | existingEndpoints.Labels = map[string]string{} 324 | } 325 | 
existingEndpoints.Labels[c.REPLICATED_LABEL_KEY] = "false" 326 | existingEndpoints.ResourceVersion = "" 327 | if _, err := s.kubeclient.CoreV1().Endpoints(service.Namespace).Update(existingEndpoints); err != nil { 328 | log.Errorf("Error updating endpoints %s", err) 329 | return 330 | } 331 | s.handleServiceCreate(service, true) 332 | existingService.Spec.Selector = nil 333 | if _, err := s.kubeclient.CoreV1().Services(service.Namespace).Update(existingService); err != nil { 334 | log.Errorf("Error updating service %s", err) 335 | return 336 | } 337 | return 338 | } 339 | 340 | if service.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] == c.SVC_ANNOTATION_SOURCE { 341 | if existingService.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] != c.SVC_ANNOTATION_RECEIVER { 342 | existingEndpoints, err := s.kubeclient.CoreV1().Endpoints(service.Namespace).Get(service.Name, meta_v1.GetOptions{}) 343 | if err != nil { 344 | log.Errorf("Error retrieving endpoints obj, err %s", err) 345 | return 346 | } 347 | if existingEndpoints.Labels == nil { 348 | existingEndpoints.Labels = map[string]string{} 349 | } 350 | existingEndpoints.Labels[c.REPLICATED_LABEL_KEY] = "false" 351 | existingEndpoints.ResourceVersion = "" 352 | if _, eErr := s.kubeclient.CoreV1().Endpoints(service.Namespace).Update(existingEndpoints); eErr != nil { 353 | log.Errorf("Error updating endpoint %s", eErr) 354 | return 355 | } 356 | if existingService.Labels == nil { 357 | existingService.Labels = map[string]string{} 358 | } 359 | existingService.Labels[c.REPLICATED_LABEL_KEY] = "false" 360 | if existingService.Annotations == nil { 361 | existingService.Annotations = map[string]string{} 362 | } 363 | existingService.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] = c.SVC_ANNOTATION_RECEIVER 364 | if _, err := s.kubeclient.CoreV1().Services(service.Namespace).Update(existingService); err != nil { 365 | log.Errorf("Error updating service %s", err) 366 | return 367 | } 368 | service.Name = service.Name + "-syndicate" 369 | 
s.handleServiceDelete(service) 370 | return 371 | } else if existingService.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] == c.SVC_ANNOTATION_RECEIVER { 372 | existingEndpoints, err := s.kubeclient.CoreV1().Endpoints(service.Namespace).Get(service.Name, meta_v1.GetOptions{}) 373 | if err != nil { 374 | log.Errorf("Error retrieving endpoints obj, err %s", err) 375 | return 376 | } 377 | if existingEndpoints.Labels == nil { 378 | existingEndpoints.Labels = map[string]string{} 379 | } 380 | existingEndpoints.Labels[c.REPLICATED_LABEL_KEY] = "true" 381 | existingEndpoints.ResourceVersion = "" 382 | if _, eErr := s.kubeclient.CoreV1().Endpoints(service.Namespace).Update(existingEndpoints); eErr != nil { 383 | log.Errorf("Error updating endpoint %s", eErr) 384 | return 385 | } 386 | existingService.Labels = service.Labels 387 | if existingService.Labels == nil { 388 | existingService.Labels = map[string]string{} 389 | } 390 | existingService.Labels[c.REPLICATED_LABEL_KEY] = "true" 391 | existingService.Spec.Selector = nil 392 | if _, err := s.kubeclient.CoreV1().Services(service.Namespace).Update(existingService); err != nil { 393 | log.Errorf("Error updating service %s", err) 394 | return 395 | } 396 | return 397 | } 398 | } 399 | 400 | if service.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] == c.SVC_ANNOTATION_RECEIVER { 401 | 402 | if existingService.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] != c.SVC_ANNOTATION_SOURCE { 403 | 404 | existingEndpoints, err := s.kubeclient.CoreV1().Endpoints(service.Namespace).Get(service.Name, meta_v1.GetOptions{}) 405 | if err != nil { 406 | log.Errorf("Error retrieving endpoints obj, err %s", err) 407 | return 408 | } 409 | if existingEndpoints.Labels == nil { 410 | existingEndpoints.Labels = map[string]string{} 411 | } 412 | existingEndpoints.Labels[c.REPLICATED_LABEL_KEY] = "false" 413 | existingEndpoints.ResourceVersion = "" 414 | if _, eErr := s.kubeclient.CoreV1().Endpoints(service.Namespace).Update(existingEndpoints); eErr != nil
{ 415 | log.Errorf("Error updating endpoint %s", eErr) 416 | return 417 | } 418 | if existingService.Labels == nil { 419 | existingService.Labels = map[string]string{} 420 | } 421 | existingService.Labels[c.REPLICATED_LABEL_KEY] = "false" 422 | if existingService.Annotations == nil { 423 | existingService.Annotations = map[string]string{} 424 | } 425 | existingService.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] = c.SVC_ANNOTATION_SOURCE 426 | if _, err := s.kubeclient.CoreV1().Services(service.Namespace).Update(existingService); err != nil { 427 | log.Errorf("Error updating service %s", err) 428 | return 429 | } 430 | service.Name = service.Name + "-syndicate" 431 | s.handleServiceDelete(service) 432 | return 433 | 434 | } 435 | 436 | SelectorForSvc := s.getSelectorfromSyndicateSvc(service) 437 | service.Name = service.Name + "-syndicate" 438 | s.handleServiceDelete(service) 439 | if SelectorForSvc != nil { 440 | existingService.Spec.Selector = SelectorForSvc 441 | } 442 | if existingService.Labels == nil { 443 | existingService.Labels = map[string]string{} 444 | } 445 | existingService.Labels[c.REPLICATED_LABEL_KEY] = "false" 446 | if _, err := s.kubeclient.CoreV1().Services(service.Namespace).Update(existingService); err != nil { 447 | log.Errorf("Error updating service %s", err) 448 | return 449 | } 450 | existingEndpoints, err := s.kubeclient.CoreV1().Endpoints(service.Namespace).Get(service.Name, meta_v1.GetOptions{}) 451 | if err != nil { 452 | log.Errorf("Error retrieving endpoints obj, err %s", err) 453 | return 454 | } 455 | if existingEndpoints.Labels == nil { 456 | existingEndpoints.Labels = map[string]string{} 457 | } 458 | existingEndpoints.Labels[c.REPLICATED_LABEL_KEY] = "false" 459 | existingEndpoints.ResourceVersion = "" 460 | if _, eErr := s.kubeclient.CoreV1().Endpoints(service.Namespace).Update(existingEndpoints); eErr != nil { 461 | log.Errorf("Error updating endpoint %s", eErr) 462 | return 463 | } 464 | return 465 | } 466 | 467 | if 
existingService != nil && existingService.Name == "" { 468 | s.handleServiceCreate(service, false) 469 | return 470 | } 471 | 472 | existingService.Spec.Ports = []v1.ServicePort{} 473 | for _, port := range service.Spec.Ports { 474 | existingService.Spec.Ports = append(existingService.Spec.Ports, v1.ServicePort{Protocol: port.Protocol, Name: port.Name, Port: port.Port, TargetPort: port.TargetPort}) 475 | } 476 | existingService.Labels = service.Labels 477 | if existingService.Labels == nil { 478 | existingService.Labels = map[string]string{} 479 | } 480 | existingService.Labels[c.REPLICATED_LABEL_KEY] = s.config.ReplicatedLabelVal 481 | if _, err := s.kubeclient.CoreV1().Services(service.Namespace).Update(existingService); err != nil { 482 | log.Errorf("Error updating service %s", err) 483 | return 484 | } 485 | } 486 | 487 | func (s *ClusterDiscoveryHandler) handleEnpointDelete(endpoints *v1.Endpoints) { 488 | log.Infof("deleting endpoints %s namespace %s", endpoints.Name, endpoints.Namespace) 489 | existingService, err := s.kubeclient.CoreV1().Services(endpoints.Namespace).Get(endpoints.Name, meta_v1.GetOptions{}) 490 | if err != nil { 491 | log.Errorf("Error retrieving service obj, err %s", err) 492 | return 493 | } 494 | if existingService.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] == c.SVC_ANNOTATION_SINGULAR { 495 | return 496 | } 497 | 498 | if eErr := s.kubeclient.CoreV1().Endpoints(endpoints.Namespace).Delete(endpoints.Name, &meta_v1.DeleteOptions{}); eErr != nil { 499 | log.Errorf("Error deleting endpoint %s", eErr) 500 | return 501 | } 502 | } 503 | 504 | func (s *ClusterDiscoveryHandler) handleServiceDelete(service *v1.Service) { 505 | log.Infof("deleting service %s namespace %s", service.Name, service.Namespace) 506 | if service.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] == c.SVC_ANNOTATION_SINGULAR { 507 | return 508 | } 509 | if eErr := s.kubeclient.CoreV1().Services(service.Namespace).Delete(service.Name, &meta_v1.DeleteOptions{}); eErr != nil { 
510 | log.Errorf("Error deleting service %v", eErr) 511 | return 512 | } 513 | } 514 | 515 | func (s *ClusterDiscoveryHandler) handleNamespaceCreate(n *v1.Namespace) { 516 | log.Infof("creating namespace %s", n.Name) 517 | existingNamespace, _ := s.kubeclient.CoreV1().Namespaces().Get(n.Name, meta_v1.GetOptions{}) 518 | 519 | if existingNamespace != nil && existingNamespace.Name == "" { 520 | ns := v1.Namespace{} 521 | ns.Name = n.Name 522 | ns.Labels = n.Labels 523 | if ns.Labels == nil { 524 | ns.Labels = map[string]string{} 525 | } 526 | ns.Labels[c.REPLICATED_LABEL_KEY] = s.config.ReplicatedLabelVal 527 | if _, err := s.kubeclient.CoreV1().Namespaces().Create(&ns); err != nil { 528 | log.Errorf("Error creating namespace %v", err) 529 | return 530 | } 531 | } else { 532 | existingNamespace.Labels = n.Labels 533 | if existingNamespace.Labels == nil { 534 | existingNamespace.Labels = map[string]string{} 535 | } 536 | if n.Labels[c.REPLICATED_LABEL_KEY] == s.config.ReplicatedLabelVal { 537 | existingNamespace.Labels[c.REPLICATED_LABEL_KEY] = "false" 538 | } else { 539 | existingNamespace.Labels[c.REPLICATED_LABEL_KEY] = s.config.ReplicatedLabelVal 540 | } 541 | if _, err := s.kubeclient.CoreV1().Namespaces().Update(existingNamespace); err != nil { 542 | log.Errorf("Error updating namespace %v", err) 543 | return 544 | } 545 | } 546 | s.replicatedNamespaces.Store(n.Name, true) 547 | } 548 | 549 | func (s *ClusterDiscoveryHandler) handleNamespaceUpdate(n *v1.Namespace) { 550 | log.Infof("updating namespace %s", n.Name) 551 | 552 | if s.replicatedNamespaces.Load(n.Name) { 553 | return 554 | } 555 | 556 | existingNamespace, _ := s.kubeclient.CoreV1().Namespaces().Get(n.Name, meta_v1.GetOptions{}) 557 | if existingNamespace != nil && existingNamespace.Name == "" { 558 | s.handleNamespaceCreate(n) 559 | return 560 | } 561 | 562 | existingNamespace.Labels = n.Labels 563 | if existingNamespace.Labels == nil { 564 | existingNamespace.Labels = map[string]string{} 565 | } 566 
| if n.Labels[c.REPLICATED_LABEL_KEY] == s.config.ReplicatedLabelVal { 567 | existingNamespace.Labels[c.REPLICATED_LABEL_KEY] = "false" 568 | } else { 569 | existingNamespace.Labels[c.REPLICATED_LABEL_KEY] = s.config.ReplicatedLabelVal 570 | } 571 | if _, err := s.kubeclient.CoreV1().Namespaces().Update(existingNamespace); err != nil { 572 | log.Errorf("Error updating namespace %v", err) 573 | return 574 | } 575 | s.replicatedNamespaces.Store(n.Name, true) 576 | } 577 | 578 | func (s *ClusterDiscoveryHandler) handleNamespaceDelete(n *v1.Namespace) { 579 | 580 | log.Infof("deleting namespace %s", n.Name) 581 | if err := s.kubeclient.CoreV1().Namespaces().Delete(n.Name, &meta_v1.DeleteOptions{}); err != nil { 582 | log.Errorf("Error deleting namespace %v", err) 583 | return 584 | } 585 | s.replicatedNamespaces.Delete(n.Name) 586 | } 587 | 588 | func (s *ClusterDiscoveryHandler) checkIfReplicatedNamespace(namespace string, labels map[string]string) bool { 589 | 590 | if utils.ContainsKeyVal(labels, s.config.ReplicatedLabelVal) { 591 | if !s.replicatedNamespaces.Load(namespace) { 592 | s.replicatedNamespaces.Store(namespace, true) 593 | } 594 | return true 595 | } 596 | return false 597 | } 598 | 599 | func (s *ClusterDiscoveryHandler) getSelectorfromSyndicateSvc(service *v1.Service) map[string]string { 600 | existingService, err := s.kubeclient.CoreV1().Services(service.Namespace).Get(service.Name+"-syndicate", meta_v1.GetOptions{}) 601 | if err != nil { 602 | log.Errorf("Error retrieving service obj, err %v", err) 603 | return nil 604 | } 605 | return existingService.Spec.Selector 606 | } 607 | 608 | func (s *ClusterDiscoveryHandler) checkIfUnionorSingularSvcEndpoint(endpoints *v1.Endpoints) (bool, bool) { 609 | existingService, err := s.kubeclient.CoreV1().Services(endpoints.Namespace).Get(endpoints.Name, meta_v1.GetOptions{}) 610 | if err != nil { 611 | log.Errorf("Error retrieving service obj, err %v", err) 612 | return false, false 613 | } 614 | return 
existingService.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] == c.SVC_ANNOTATION_UNION, 615 | existingService.Annotations[c.SVC_ANNOTATION_SYNDICATE_KEY] == c.SVC_ANNOTATION_SINGULAR 616 | } 617 | 618 | func (s *ClusterDiscoveryHandler) shouldProcessEvent(obj interface{}) bool { 619 | switch v := obj.(type) { 620 | case *v1.Namespace: 621 | if s.checkIfReplicatedNamespace(v.Name, v.Labels) || utils.ContainsInArray(s.config.NamespacesToExclude, v.Name) || !utils.CanReplicateNamespace(v.Labels) { 622 | return false 623 | } 624 | return true 625 | case *v1.Endpoints: 626 | if utils.ContainsKeyVal(v.Labels, s.config.ReplicatedLabelVal) || !s.replicatedNamespaces.Load(v.Namespace) || v.Name == c.KUBERNETES { 627 | return false 628 | } 629 | return true 630 | case *v1.Service: 631 | if strings.HasSuffix(v.Name, "-syndicate") { 632 | return false 633 | } 634 | if utils.ContainsKeyVal(v.Labels, s.config.ReplicatedLabelVal) || !s.replicatedNamespaces.Load(v.Namespace) || v.Name == c.KUBERNETES { 635 | return false 636 | } 637 | return true 638 | } 639 | return false 640 | } 641 | -------------------------------------------------------------------------------- /src/handlers/handler.go: -------------------------------------------------------------------------------- 1 | // Copyright © 2018 VMware, Inc. All Rights Reserved. 2 | // SPDX-License-Identifier: MIT 3 | 4 | package handlers 5 | 6 | type Handler interface { 7 | ObjectCreated(obj interface{}) 8 | ObjectDeleted(obj interface{}) 9 | ObjectUpdated(oldObj, newObj interface{}) 10 | } 11 | -------------------------------------------------------------------------------- /src/log/log.go: -------------------------------------------------------------------------------- 1 | // Copyright © 2018 VMware, Inc. All Rights Reserved. 
2 | // SPDX-License-Identifier: MIT 3 | 4 | package log 5 | 6 | import ( 7 | "fmt" 8 | "go.uber.org/zap" 9 | "go.uber.org/zap/zapcore" 10 | "time" 11 | ) 12 | 13 | var core zapcore.Core 14 | 15 | func Initialize() error { 16 | cfg := zap.NewProductionConfig() 17 | cfg.OutputPaths = []string{ 18 | "/var/log/syndicate.log", 19 | "stderr", 20 | } 21 | cfg.ErrorOutputPaths = []string{ 22 | "/var/log/syndicate.log", 23 | "stderr", 24 | } 25 | logger, err := cfg.Build() 26 | if err != nil { 27 | return err 28 | } 29 | zap.ReplaceGlobals(logger) 30 | zap.RedirectStdLog(logger) 31 | core = logger.Core() 32 | return nil 33 | } 34 | 35 | func Infof(msg string, args ...interface{}) { 36 | if len(args) > 0 { 37 | msg = fmt.Sprintf(msg, args...) 38 | } 39 | e := zapcore.Entry{ 40 | Message: msg, 41 | Level: zapcore.InfoLevel, 42 | Time: time.Now().UTC(), 43 | LoggerName: "syndicate", 44 | } 45 | core.Write(e, nil) 46 | } 47 | 48 | func Debugf(msg string, args ...interface{}) { 49 | if len(args) > 0 { 50 | msg = fmt.Sprintf(msg, args...) 51 | } 52 | e := zapcore.Entry{ 53 | Message: msg, 54 | Level: zapcore.DebugLevel, 55 | Time: time.Now().UTC(), 56 | LoggerName: "syndicate", 57 | } 58 | core.Write(e, nil) 59 | } 60 | 61 | func Errorf(msg string, args ...interface{}) { 62 | if len(args) > 0 { 63 | msg = fmt.Sprintf(msg, args...) 64 | } 65 | e := zapcore.Entry{ 66 | Message: msg, 67 | Level: zapcore.ErrorLevel, 68 | Time: time.Now().UTC(), 69 | LoggerName: "syndicate", 70 | } 71 | core.Write(e, nil) 72 | } 73 | -------------------------------------------------------------------------------- /src/main/main.go: -------------------------------------------------------------------------------- 1 | // Copyright © 2018 VMware, Inc. All Rights Reserved.
2 | // SPDX-License-Identifier: MIT 3 | 4 | package main 5 | 6 | import ( 7 | c "github.com/vmware/k8s-endpoints-sync-controller/src/config" 8 | cc "github.com/vmware/k8s-endpoints-sync-controller/src/controller" 9 | "github.com/vmware/k8s-endpoints-sync-controller/src/handlers" 10 | log "github.com/vmware/k8s-endpoints-sync-controller/src/log" 11 | "io/ioutil" 12 | "os" 13 | "os/signal" 14 | "strings" 15 | "syscall" 16 | "time" 17 | ) 18 | 19 | func main() { 20 | 21 | log.Initialize() 22 | log.Infof("Starting clusterdiscovery controller") 23 | config, err := loadConfig() 24 | if err != nil { 25 | return 26 | } 27 | 28 | handler := &handlers.ClusterDiscoveryHandler{} 29 | if handlerErr := handler.Init(config); handlerErr != nil { 30 | log.Errorf("failed to initialize handler %v", handlerErr) 31 | return 32 | } 33 | for _, cluster := range config.ClustersToWatch { 34 | 35 | go cc.StartController(cluster, handler, config) 36 | 37 | } 38 | 39 | sigterm := make(chan os.Signal, 1) 40 | signal.Notify(sigterm, syscall.SIGINT, syscall.SIGTERM) 41 | <-sigterm 42 | } 43 | 44 | func loadConfig() (*c.Config, error) { 45 | 46 | conf := &c.Config{} 47 | 48 | if n, nexists := os.LookupEnv("NSTOWATCH"); nexists { 49 | conf.NamespaceToWatch = n 50 | } else { 51 | conf.NamespaceToWatch = "" 52 | } 53 | 54 | if e, eexists := os.LookupEnv("EXCLUDE"); eexists { 55 | log.Infof("Namespaces to exclude %s", e) 56 | conf.NamespacesToExclude = strings.Split(e, ",") 57 | } 58 | 59 | searchDir := "/etc/kubeconfigs" 60 | 61 | files, err := ioutil.ReadDir(searchDir) 62 | if err != nil { 63 | log.Errorf("Error reading dir %v", err) 64 | return nil, err 65 | } 66 | 67 | for _, file := range files { 68 | if !file.IsDir() && !strings.Contains(file.Name(), "data") { 69 | log.Infof("Kubeconfig of cluster to watch %s", file.Name()) 70 | conf.ClustersToWatch = append(conf.ClustersToWatch, searchDir+"/"+file.Name()) 71 | } 72 | } 73 | 74 | conf.ReplicatedLabelVal = "true" 75 | 76 | conf.WatchNamespaces = 
true 77 | conf.WatchEndpoints = true 78 | conf.WatchServices = true 79 | conf.ResyncPeriod = 5 * time.Minute 80 | 81 | return conf, nil 82 | } 83 | -------------------------------------------------------------------------------- /src/utils/map.go: -------------------------------------------------------------------------------- 1 | // Copyright © 2018 VMware, Inc. All Rights Reserved. 2 | // SPDX-License-Identifier: MIT 3 | 4 | package utils 5 | 6 | import ( 7 | "sync" 8 | ) 9 | 10 | type ConcurrentMap struct { 11 | sync.RWMutex 12 | internal map[string]bool 13 | } 14 | 15 | func NewConcurrentMap() *ConcurrentMap { 16 | return &ConcurrentMap{ 17 | internal: make(map[string]bool), 18 | } 19 | } 20 | 21 | func (rm *ConcurrentMap) Load(key string) bool { 22 | rm.RLock() 23 | defer rm.RUnlock() 24 | 25 | return rm.internal[key] 26 | } 27 | 28 | func (rm *ConcurrentMap) Delete(key string) { 29 | rm.Lock() 30 | delete(rm.internal, key) 31 | rm.Unlock() 32 | } 33 | 34 | func (rm *ConcurrentMap) Store(key string, value bool) { 35 | rm.Lock() 36 | rm.internal[key] = value 37 | rm.Unlock() 38 | } 39 | -------------------------------------------------------------------------------- /src/utils/utils.go: -------------------------------------------------------------------------------- 1 | // Copyright © 2018 VMware, Inc. All Rights Reserved.
2 | // SPDX-License-Identifier: MIT 3 | 4 | package utils 5 | 6 | import ( 7 | c "github.com/vmware/k8s-endpoints-sync-controller/src/config" 8 | ) 9 | 10 | func ContainsInArray(s []string, e string) bool { 11 | for _, a := range s { 12 | if a == e { 13 | return true 14 | } 15 | } 16 | return false 17 | } 18 | 19 | func CanReplicateNamespace(labels map[string]string) bool { 20 | if val, ok := labels[c.REPLICATED_LABEL_KEY]; ok { 21 | if val == "false" { 22 | return false 23 | } 24 | } 25 | return true 26 | } 27 | 28 | func ContainsKeyVal(labels map[string]string, val string) bool { 29 | if v, ok := labels[c.REPLICATED_LABEL_KEY]; ok { 30 | if v == val { 31 | return true 32 | } 33 | } 34 | return false 35 | } 36 | --------------------------------------------------------------------------------