├── .gitignore
├── LICENSE
├── Makefile
├── README.md
├── cmd
│   └── lxdepot
│       └── lxdepot.go
├── configs
│   └── sample.yaml
├── go.mod
├── go.sum
├── internal
│   ├── circularbuffer
│   │   ├── circularbuffer.go
│   │   └── circularbuffer_test.go
│   ├── config
│   │   └── config.go
│   ├── dns
│   │   ├── amazon.go
│   │   ├── dns.go
│   │   ├── dns_test.go
│   │   └── google.go
│   ├── handlers
│   │   ├── handler_404.go
│   │   ├── handler_containers.go
│   │   ├── handler_hosts.go
│   │   ├── handler_images.go
│   │   ├── handler_root.go
│   │   ├── handlers.go
│   │   ├── router.go
│   │   ├── templates.go
│   │   └── ws
│   │       ├── handler_containerplaybook.go
│   │       ├── handler_createcontainer.go
│   │       ├── handler_deletecontainer.go
│   │       ├── handler_movecontainer.go
│   │       ├── handler_startcontainer.go
│   │       ├── handler_stopcontainer.go
│   │       └── ws.go
│   ├── lxd
│   │   └── lxd.go
│   └── utils
│       └── convert.go
├── service-definitions
│   └── systemd
│       └── lxdepot.service
└── web
    ├── static
    │   ├── css
    │   │   └── main.css
    │   └── favicon.ico
    └── templates
        ├── 404.tmpl
        ├── base.tmpl
        ├── container.tmpl
        ├── container_list.tmpl
        ├── container_new.tmpl
        ├── host_list.tmpl
        └── image_list.tmpl
/.gitignore: -------------------------------------------------------------------------------- 1 | *~ 2 | *.swp 3 | lxdepot 4 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | BSD 3-Clause License 2 | 3 | Copyright (c) 2018, Brian Clapper 4 | All rights reserved. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted provided that the following conditions are met: 8 | 9 | * Redistributions of source code must retain the above copyright notice, this 10 | list of conditions and the following disclaimer. 11 | 12 | * Redistributions in binary form must reproduce the above copyright notice, 13 | this list of conditions and the following disclaimer in the documentation 14 | and/or other materials provided with the distribution. 15 | 16 | * Neither the name of the copyright holder nor the names of its 17 | contributors may be used to endorse or promote products derived from 18 | this software without specific prior written permission. 19 | 20 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 21 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 22 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 23 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 24 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 25 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 26 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 27 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 28 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | GO=$(shell which go) 2 | 3 | BINARY_NAME=lxdepot 4 | MAIN_GO_FILE=cmd/lxdepot/lxdepot.go 5 | 6 | build: 7 | $(GO) build -o $(BINARY_NAME) $(MAIN_GO_FILE) 8 | clean: 9 | $(GO) clean 10 | rm -f $(BINARY_NAME) 11 | install: 12 | mkdir -p /opt/lxdepot 13 | mkdir -p /opt/lxdepot/web 14 | mkdir -p /opt/lxdepot/configs 15 | mkdir -p /opt/lxdepot/bootstrap 16 | cp lxdepot /opt/lxdepot/ 17 | cp configs/sample.yaml /opt/lxdepot/configs/ 18 | rsync -aqc web/ /opt/lxdepot/web/ 19 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # LXDepot 2 | 3 | LXDepot is a simple UI to interface with one to many LXD servers, allowing you to start, stop, create, and delete containers. 4 | 5 | Additionally, it can talk to third-party DNS providers to automatically register and remove records, and it can bootstrap containers, so once a user hits create they can sit back for a moment and then be ready to SSH in and begin work. 6 | 7 | ## Usage 8 | 9 | LXDepot has only a few command line flags, to keep things simple: 10 | ``` 11 | -port (default:8080) which port to bind to 12 | -config (default:configs/config.yaml) instance config file 13 | -webroot (default:web/) where our templates + static files live 14 | -cache_templates (default:true) mostly for dev work; setting this to false makes the service read the web templates off disk on each request 15 | ``` 16 | 17 | Ex. 18 | ``` 19 | ./lxdepot -port=8888 -config=/opt/lxdepot/configs/config.yaml -webroot=/opt/lxdepot/web/ 20 | ``` 21 | 22 | ## Config 23 | 24 | The config file controls PKI, the hosts we talk to, DNS configuration, and bootstrapping commands. A fully documented sample config can be found in [configs/sample.yaml](configs/sample.yaml). 25 | 26 | ## PKI 27 | 28 | To use this you need to create a client cert and key using openssl or similar. An example openssl command is: 29 | ``` 30 | openssl req -x509 -nodes -newkey rsa:4096 -keyout client.key -out client.crt -days 365 -subj '/CN=lxdepot' 31 | ``` 32 | 33 | This cert will then need to be added to all the LXD hosts you want to talk to. Put client.crt on the host and then run: 34 | ``` 35 | lxc config trust add client.crt 36 | ``` 37 | 38 | Alter the commands as you see fit; these are only examples. 39 | 40 | The server certificate can then be found (on the LXD host) at: /var/lib/lxd/server.crt 41 | 42 | ## Disabling remote management for certain containers 43 | 44 | Sometimes you don't want people messing with your stuff. To that end, if you do not want LXDepot to manage a container (that is, start, stop, or delete it; it will still be listed and you can view info on it), add the user flag below to the container. It tells LXDepot the container is off limits. 45 | 46 | During creation, add this to the config; the container will start and bootstrap and then be unmanageable by LXDepot: 47 | ``` 48 | user.lxdepot_lock=true 49 | ``` 50 | 51 | Or from the command line: 52 | ``` 53 | lxc config set CONTAINERNAME user.lxdepot_lock true 54 | ``` 55 |
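If you later want LXDepot to manage the container again, removing the flag should be all that is needed (an illustrative example using the stock lxc CLI):
```
lxc config unset CONTAINERNAME user.lxdepot_lock
```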
56 | ### Limitations 57 | 58 | First, this was an experiment in learning Go, so I'm sure there are a few things that make you go ... wat 59 | 60 | Secondly, everything was initially developed for use at [Circonus](https://www.circonus.com), so perhaps some assumptions were made (like limiting to IPv4). 61 | 62 | Last, tests are light / mostly nonexistent, as this depends on a lot of external services to really do anything, and I haven't decided how to handle that in testing yet. 63 | -------------------------------------------------------------------------------- /cmd/lxdepot/lxdepot.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | /* LXDepot is a simple UI that lets one manage containers across multiple LXD hosts 4 | * 5 | * Usage (highlighting default values): 6 | * ./lxdepot -port=8080 -config=configs/config.yaml -webroot=web/ 7 | * 8 | * See README.md for more detailed information, and configs/sample.yaml for more 9 | * details on what a config should look like 10 | */ 11 | 12 | import ( 13 | "flag" 14 | "fmt" 15 | "log" 16 | "net/http" 17 | 18 | "github.com/neophenix/lxdepot/internal/config" 19 | "github.com/neophenix/lxdepot/internal/handlers" 20 | "github.com/neophenix/lxdepot/internal/handlers/ws" 21 | "github.com/neophenix/lxdepot/internal/lxd" 22 | ) 23 | 24 | // All our command line params and config 25 | var port string 26 | var conf string 27 | var webroot string 28 | var cacheTemplates bool 29 | 30 | // Conf is our main config 31 | var Conf *config.Config 32 | 33 | func main() { 34 | // Pull in all the command line params 35 | flag.StringVar(&port, "port", "8080", "port number to listen on") 36 | flag.StringVar(&conf, "config", "configs/config.yaml", "config file") 37 | flag.StringVar(&webroot, "webroot", "web/", "path of webroot (templates, static, etc)") 38 | flag.BoolVar(&cacheTemplates, "cache_templates", true, "cache templates or read from disk each time") 39 | flag.Parse() 40 | 41 | // Decided that printing out our "running config" was useful in the event things went awry. Println rather than Printf, since these are not format strings and could contain % 42 | fmt.Println("webroot: " + webroot) 43 | fmt.Println("config: " + conf) 44 | fmt.Println("Listening on " + port) 45 | 46 | Conf = config.ParseConfig(conf) 47 | 48 | // Hand out our settings / config to everyone 49 | lxd.Conf = Conf 50 | handlers.Conf = Conf 51 | ws.Conf = Conf 52 | 53 | handlers.WebRoot = webroot 54 | handlers.CacheTemplates = cacheTemplates 55 | 56 | // Static file server 57 | fs := http.FileServer(http.Dir(webroot + "/static")) 58 | http.Handle("/static/", http.StripPrefix("/static/", fs)) 59 | http.Handle("/favicon.ico", fs) 60 | 61 | // Setup our routing 62 | handlers.AddRoute("/containers$", handlers.ContainerListHandler) 63 | handlers.AddRoute("/containers/.*$", handlers.ContainerHostListHandler) 64 | handlers.AddRoute("/container/new$", handlers.NewContainerHandler) 65 | handlers.AddRoute("/container/.*$", handlers.ContainerHandler) 66 | handlers.AddRoute("/images$", handlers.ImageListHandler) 67 | handlers.AddRoute("/hosts$", handlers.HostListHandler) 68 | handlers.AddRoute("/ws$", ws.Handler) 69 | 70 | // The root handler does all the route checking and handoffs 71 | http.HandleFunc("/", handlers.RootHandler) 72 | 73 | // our websocket maintenance function to clear out old buffers 74 | ws.ManageBuffers() 75 | 76 | log.Fatal(http.ListenAndServe(":"+port, nil)) 77 | } 78 | -------------------------------------------------------------------------------- /configs/sample.yaml: -------------------------------------------------------------------------------- 1 | # This sample config should have an example of every option
available; 2 | # if not, pull requests are welcome 3 | 4 | # PKI files can take two forms 5 | # first is file:PATH; if we see file: we will read the contents off disk 6 | cert: file:/path/to/client/cert/here/cert.crt 7 | 8 | # Next you can just put the contents here 9 | key: | 10 | -----BEGIN RSA PRIVATE KEY----- 11 | ... 12 | -----END RSA PRIVATE KEY----- 13 | 14 | # lxdhosts is an array of the hosts we will operate against 15 | lxdhosts: 16 | # host is the ip or hostname we will use to communicate 17 | - host: 192.168.1.100 18 | # name is an alias for more human consumption 19 | name: mylxdhost 20 | # the port that lxd listens on 21 | port: 8443 22 | # the server cert can be a file path or contents like our client PKI 23 | cert: file:/path/to/cert/server.crt 24 | 25 | # dns lets us configure how our containers will get their IP addresses 26 | dns: 27 | # what provider to use (google / amazon / dhcp) 28 | provider: google 29 | # list of network blocks to look for a free IP in, inclusive (if we aren't using dhcp) 30 | network_blocks: 31 | - 10.0.0.0/32,10.0.1.255/32 32 | - 10.1.1.200/32,10.1.1.250/32 33 | # DNS ttl 34 | ttl: 300 35 | # The zone that will be appended to our container names 36 | # ex mycontainer would become mycontainer.dev.example.com 37 | zone: dev.example.com 38 | # provider options (dependent on provider) 39 | options: 40 | # GCP Options 41 | # Path to our GCP service account credentials file for adding and removing entries 42 | gcp_creds_file: /path/to/creds/service_account.json 43 | # our GCP project name 44 | gcp_project_name: example.com:dev 45 | # our GCP DNS zone name 46 | gcp_zone_name: example-dev-zone 47 | 48 | # AWS (Route 53) Options 49 | # Path to the shared credentials file, as that seems to be the recommended approach 50 | aws_creds_file: /path/to/shared/creds/file 51 | # Profile within the creds file to use 52 | aws_creds_profile: default 53 | # Hosted zone id 54 | aws_zone_id: THISIS123ATEST 55 | 56 | # networking currently houses "files" that will be parsed through text/template and passed 57 | # an IP to fill out; these are then uploaded to the container after creation and before starting 58 | # 59 | # The OS, etc. is currently hardcoded into the create container handler 60 | networking: 61 | # The OS name + release here has to match the image.os returned by LXD for it to run 62 | Centos7: 63 | # each network config script needs a remote path to tell lxdepot where to upload and a template 64 | - remote_path: /etc/sysconfig/network-scripts/ifcfg-eth0 65 | template: | 66 | DEVICE=eth0 67 | ONBOOT=yes 68 | BOOTPROTO=none 69 | IPADDR={{.IP}} 70 | NETMASK=255.255.255.0 71 | GATEWAY=192.168.1.1 72 | DNS1=8.8.8.8 73 | DNS2=1.1.1.1 74 | DOMAIN="dev.example.com" 75 | 76 | # bootstrap is a list of things we do after container start to get it into something we can use 77 | # this can upload files and run commands. Steps are run sequentially 78 | bootstrap: 79 | # Like in networking, the OS name + release here has to match the image.os returned by LXD for it to run 80 | Centos7: 81 | # file upload example: a lack of local_path plus a remote_path ending in / tells the system 82 | # that we want to create a directory 83 | - type: file 84 | # perms set the permissions on the file in the container 85 | perms: 0700 86 | remote_path: /root/.ssh/ 87 | 88 | # this time we want to take a local file and upload its contents to the remote_path 89 | - type: file 90 | perms: 0600 91 | local_path: /var/tmp/root_auth_keys 92 | remote_path: /root/.ssh/authorized_keys 93 | 94 | # now using a command we can do things like install a ssh server 95 | - type: command 96 | command: [yum, -y, install, openssh-server] 97 | 98 | # and we can run a custom file we uploaded 99 | - type: command 100 | command: [/tmp/bootstrap.sh] 101 |
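# (illustrative addition, not part of the original sample: each step also accepts an
# ok_return_values list, per the FileOrCommand struct in internal/config/config.go,
# naming any non-zero exit codes we still treat as ok; 0 is always acceptable)
# - type: command
#   command: [systemctl, restart, sshd]
#   ok_return_values: [1]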
102 | # playbooks is a section to define anything else users might want to run on a container. 103 | # this would be things like installing the right packages for a dev environment 104 | playbooks: 105 | # Like above, the OS name + release here has to match the image.os returned by LXD for it to run 106 | Centos7: 107 | # next we have a name of the playbook that your users would understand 108 | setupdev: 109 | # each section here follows the same format as bootstrap 110 | - type: command 111 | command: [yum, -y, install, golang] 112 | -------------------------------------------------------------------------------- /go.mod: -------------------------------------------------------------------------------- 1 | module github.com/neophenix/lxdepot 2 | 3 | go 1.18 4 | 5 | require ( 6 | github.com/aws/aws-sdk-go v1.44.179 7 | github.com/gorilla/websocket v1.5.0 8 | github.com/lxc/lxd v0.0.0-20230112212843-9f724666f1c9 9 | golang.org/x/oauth2 v0.5.0 10 | google.golang.org/api v0.110.0 11 | gopkg.in/yaml.v2 v2.4.0 12 | ) 13 | 14 | require ( 15 | cloud.google.com/go v0.107.0 // indirect 16 | cloud.google.com/go/compute v1.18.0 // indirect 17 | cloud.google.com/go/compute/metadata v0.2.3 // indirect 18 | github.com/flosch/pongo2 v0.0.0-20200913210552-0d938eb266f3 // indirect 19 | github.com/go-macaroon-bakery/macaroon-bakery/v3 v3.0.1 // indirect 20 | github.com/go-macaroon-bakery/macaroonpb v1.0.0 // indirect 21 | github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e // indirect 22 | github.com/golang/protobuf v1.5.2 // indirect 23 | github.com/google/uuid v1.3.0 // indirect 24 | github.com/googleapis/enterprise-certificate-proxy v0.2.3 // indirect 25 | github.com/googleapis/gax-go/v2 v2.7.0 // indirect 26 | github.com/jmespath/go-jmespath v0.4.0 // indirect 27 | github.com/juju/webbrowser v1.0.0 // indirect 28 | github.com/julienschmidt/httprouter v1.3.0 // indirect 29 | github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect 30 | github.com/kr/fs v0.1.0 // indirect 31 | github.com/kr/pretty v0.3.1 // indirect 32 | github.com/pborman/uuid v1.2.1 // indirect 33 | github.com/pkg/sftp v1.13.5 // indirect 34 | github.com/pkg/xattr v0.4.9 // indirect 35 | github.com/robfig/cron/v3 v3.0.1 // indirect 36 | github.com/rogpeppe/fastuuid v1.2.0 // indirect 37 | github.com/sirupsen/logrus v1.9.0 // indirect 38 | go.opencensus.io v0.24.0 // indirect 39 | golang.org/x/crypto v0.5.0 // indirect 40 | golang.org/x/net v0.6.0 // indirect 41 | golang.org/x/sys v0.5.0 // indirect 42 | golang.org/x/term v0.5.0 // indirect 43 | golang.org/x/text v0.7.0 // indirect 44 | google.golang.org/appengine v1.6.7 // indirect 45 | google.golang.org/genproto v0.0.0-20230209215440-0dfe4f8abfcc // indirect 46 | google.golang.org/grpc v1.53.0 // indirect 47 | google.golang.org/protobuf v1.28.1 //
indirect 48 | gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect 49 | gopkg.in/errgo.v1 v1.0.1 // indirect 50 | gopkg.in/httprequest.v1 v1.2.1 // indirect 51 | gopkg.in/macaroon.v2 v2.1.0 // indirect 52 | ) 53 | -------------------------------------------------------------------------------- /go.sum: -------------------------------------------------------------------------------- 1 | cloud.google.com/go v0.26.0 h1:e0WKqKTd5BnrG8aKH3J3h+QvEIQtSUcf2n5UZ5ZgLtQ= 2 | cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= 3 | cloud.google.com/go v0.107.0 h1:qkj22L7bgkl6vIeZDlOY2po43Mx/TIa2Wsa7VR+PEww= 4 | cloud.google.com/go v0.107.0/go.mod h1:wpc2eNrD7hXUTy8EKS10jkxpZBjASrORK7goS+3YX2I= 5 | cloud.google.com/go/compute v1.18.0 h1:FEigFqoDbys2cvFkZ9Fjq4gnHBP55anJ0yQyau2f9oY= 6 | cloud.google.com/go/compute v1.18.0/go.mod h1:1X7yHxec2Ga+Ss6jPyjxRxpu2uu7PLgsOVXvgU0yacs= 7 | cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY= 8 | cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA= 9 | github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= 10 | github.com/aws/aws-sdk-go v1.44.179 h1:2mLZYSRc6awtjfD3XV+8NbuQWUVOo03/5VJ0tPenMJ0= 11 | github.com/aws/aws-sdk-go v1.44.179/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI= 12 | github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= 13 | github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= 14 | github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= 15 | github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= 16 | github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 17 | github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= 18 | github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 19 | github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= 20 | github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= 21 | github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= 22 | github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= 23 | github.com/flosch/pongo2 v0.0.0-20200913210552-0d938eb266f3 h1:fmFk0Wt3bBxxwZnu48jqMdaOR/IZ4vdtJFuaFV8MpIE= 24 | github.com/flosch/pongo2 v0.0.0-20200913210552-0d938eb266f3/go.mod h1:bJWSKrZyQvfTnb2OudyUjurSG4/edverV7n82+K3JiM= 25 | github.com/frankban/quicktest v1.0.0/go.mod h1:R98jIehRai+d1/3Hv2//jOVCTJhW1VBavT6B6CuGq2k= 26 | github.com/frankban/quicktest v1.2.2/go.mod h1:Qh/WofXFeiAFII1aEBu529AtJo6Zg2VHscnEsbBnJ20= 27 | github.com/frankban/quicktest v1.7.2/go.mod h1:jaStnuzAqU1AJdCO0l53JDCJrVDKcS03DbaAcR7Ks/o= 28 | github.com/frankban/quicktest v1.10.0/go.mod h1:ui7WezCLWMWxVWr1GETZY3smRy0G4KWq9vcPtJmFl7Y= 29 | github.com/frankban/quicktest v1.11.3 h1:8sXhOn0uLys67V8EsXLc6eszDs8VXWxL3iRvebPhedY= 30 | github.com/go-macaroon-bakery/macaroon-bakery/v3 v3.0.1 h1:uvQJoKTHrFFu8zxoaopNKedRzwdy3+8H72we4T/5cGs= 31 | github.com/go-macaroon-bakery/macaroon-bakery/v3 v3.0.1/go.mod h1:H59IYeChwvD1po3dhGUPvq5na+4NVD7SJlbhGKvslr0= 32 | github.com/go-macaroon-bakery/macaroonpb v1.0.0 
h1:It9exBaRMZ9iix1iJ6gwzfwsDE6ExNuwtAJ9e09v6XE= 33 | github.com/go-macaroon-bakery/macaroonpb v1.0.0/go.mod h1:UzrGOcbiwTXISFP2XDLDPjfhMINZa+fX/7A2lMd31zc= 34 | github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= 35 | github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e h1:1r7pUrabqp18hOBcwBwiTsbnFeTZHV9eER/QT5JVZxY= 36 | github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= 37 | github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= 38 | github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= 39 | github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= 40 | github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= 41 | github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= 42 | github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= 43 | github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= 44 | github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= 45 | github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= 46 | github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= 47 | github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= 48 | github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= 49 | github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw= 50 | github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= 51 | github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= 52 | github.com/google/go-cmp v0.2.1-0.20190312032427-6f77996f0c42/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= 53 | github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= 54 | github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= 55 | github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= 56 | github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= 57 | github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= 58 | github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU= 59 | github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= 60 | github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= 61 | github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= 62 | github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I= 63 | github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= 64 | github.com/googleapis/enterprise-certificate-proxy v0.2.3 h1:yk9/cqRKtT9wXZSsRH9aurXEpJX+U6FLtpYTdC3R06k= 65 | github.com/googleapis/enterprise-certificate-proxy v0.2.3/go.mod h1:AwSRAtLfXpU5Nm3pW+v7rGDHp09LsPtGY9MduiEsR9k= 66 | github.com/googleapis/gax-go/v2 v2.7.0 h1:IcsPKeInNvYi7eqSaDjiZqDDKu5rsmunY0Y1YupQSSQ= 67 | github.com/googleapis/gax-go/v2 v2.7.0/go.mod h1:TEop28CZZQ2y+c0VxMUmu1lV+fQx57QpBWsYpwqHJx8= 68 | 
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc= 69 | github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= 70 | github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg= 71 | github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo= 72 | github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8= 73 | github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U= 74 | github.com/juju/qthttptest v0.1.1/go.mod h1:aTlAv8TYaflIiTDIQYzxnl1QdPjAg8Q8qJMErpKy6A4= 75 | github.com/juju/qthttptest v0.1.3 h1:M0HdpwsK/UTHRGRcIw5zvh5z+QOgdqyK+ecDMN+swwM= 76 | github.com/juju/webbrowser v1.0.0 h1:JLdmbFtCGY6Qf2jmS6bVaenJFGIFkdF1/BjUm76af78= 77 | github.com/juju/webbrowser v1.0.0/go.mod h1:RwVlbBcF91Q4vS+iwlkJ6bZTE3EwlrjbYlM3WMVD6Bc= 78 | github.com/julienschmidt/httprouter v1.3.0 h1:U0609e9tgbseu3rBINet9P48AI/D3oJs4dN7jwJOQ1U= 79 | github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM= 80 | github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs= 81 | github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8= 82 | github.com/kr/fs v0.1.0 h1:Jskdu9ieNAYnjxsi0LbQp1ulIKZV1LAFgK1tWhpZgl8= 83 | github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg= 84 | github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= 85 | github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= 86 | github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= 87 | github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= 88 | github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= 89 | github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= 90 | github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= 91 | github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= 92 | github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= 93 | github.com/lxc/lxd v0.0.0-20230112212843-9f724666f1c9 h1:9jyE4wA3OY6uidhHA+jciKLGZ39kqMFl/KBHRcn++fA= 94 | github.com/lxc/lxd v0.0.0-20230112212843-9f724666f1c9/go.mod h1:Skp5le/Vsb1+NAsEcPZnRP4VDipOkew9ItpbC/7I8e4= 95 | github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno= 96 | github.com/pborman/uuid v1.2.1 h1:+ZZIw58t/ozdjRaXh/3awHfmWRbzYxJoAdNJxe/3pvw= 97 | github.com/pborman/uuid v1.2.1/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k= 98 | github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA= 99 | github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= 100 | github.com/pkg/sftp v1.13.5 h1:a3RLUqkyjYRtBTZJZ1VRrKbN3zhuPLlUc3sphVz81go= 101 | github.com/pkg/sftp v1.13.5/go.mod h1:wHDZ0IZX6JcBYRK1TH9bcVq8G7TLpVHYIGJRFnmPfxg= 102 | github.com/pkg/xattr v0.4.9 h1:5883YPCtkSd8LFbs13nXplj9g9tlrwoJRjgpgMu1/fE= 103 | github.com/pkg/xattr v0.4.9/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU= 104 | github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= 105 | github.com/pmezard/go-difflib 
v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= 106 | github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= 107 | github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs= 108 | github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro= 109 | github.com/rogpeppe/fastuuid v1.2.0 h1:Ppwyp6VYCF1nvBTXL3trRso7mXMlRrw9ooo375wvi2s= 110 | github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= 111 | github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8= 112 | github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs= 113 | github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0= 114 | github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= 115 | github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= 116 | github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= 117 | github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= 118 | github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= 119 | github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= 120 | github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= 121 | github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk= 122 | github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= 123 | github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= 124 | github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= 125 | go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0= 126 | go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo= 127 | golang.org/x/crypto v0.0.0-20180723164146-c126467f60eb/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= 128 | golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= 129 | golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= 130 | golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= 131 | golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= 132 | golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= 133 | golang.org/x/crypto v0.5.0 h1:U/0M97KRkSFvyD/3FSmdP5W5swImpNgle/EHFhOsQPE= 134 | golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU= 135 | golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= 136 | golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= 137 | golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= 138 | golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= 139 | golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= 140 | golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod 
h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= 141 | golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 142 | golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 143 | golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 144 | golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= 145 | golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= 146 | golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= 147 | golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= 148 | golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= 149 | golang.org/x/net v0.0.0-20200505041828-1ed23360d12c/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= 150 | golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= 151 | golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= 152 | golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= 153 | golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= 154 | golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= 155 | golang.org/x/net v0.5.0 h1:GyT4nK/YDHSqa1c4753ouYCDajOYKTja9Xb/OHtgvSw= 156 | golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws= 157 | golang.org/x/net v0.6.0 h1:L4ZwwTvKW9gr0ZMS1yrHD9GZhIuVjOBBnaKH+SPQK0Q= 158 | golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= 159 | golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be h1:vEDujvNQGv4jgYKudGeI/+DAX4Jffq6hpD55MmoEvKs= 160 | golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= 161 | golang.org/x/oauth2 v0.5.0 h1:HuArIo48skDwlrvM3sEdHXElYslAMsf3KwRkkW4MC4s= 162 | golang.org/x/oauth2 v0.5.0/go.mod h1:9/XBHVqLaWO3/BRHs5jbpYCnOZVjj5V0ndyaAM7KB4I= 163 | golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 164 | golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 165 | golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 166 | golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 167 | golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 168 | golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 169 | golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 170 | golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 171 | golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 172 | golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 173 | golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 174 | golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 175 | golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 176 | golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 177 | golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 178 | golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 179 | golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 180 | golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 181 | golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 182 | golang.org/x/sys v0.4.0 h1:Zr2JFtRQNX3BCZ8YtxRE9hNJYC8J6I1MVbMg6owUp18= 183 | golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 184 | golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU= 185 | golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 186 | golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= 187 | golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= 188 | golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= 189 | golang.org/x/term v0.4.0 h1:O7UWfv5+A2qiuulQk30kVinPoMtoIPeVaKLEgLpVkvg= 190 | golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ= 191 | golang.org/x/term v0.5.0 h1:n2a8QNdAb0sZNpU9R1ALUXBbY+w51fCQDN+7EdxNBsY= 192 | golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= 193 | golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= 194 | golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= 195 | golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= 196 | golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= 197 | golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= 198 | golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= 199 | golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo= 200 | golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= 201 | golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= 202 | golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= 203 | golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= 204 | golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= 205 | golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= 206 | golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= 207 | golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= 208 | golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= 209 | golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod 
h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 210 | golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 211 | golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= 212 | golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 213 | google.golang.org/api v0.0.0-20180712151135-781db45e5b94 h1:y8/X5gvCyEbzS4TFnhd2/TgpLas4tVcpDBHn4GG80Tw= 214 | google.golang.org/api v0.0.0-20180712151135-781db45e5b94/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0= 215 | google.golang.org/api v0.110.0 h1:l+rh0KYUooe9JGbGVx71tbFo4SMbMTXK3I3ia2QSEeU= 216 | google.golang.org/api v0.110.0/go.mod h1:7FC4Vvx1Mooxh8C5HWjzZHcavuS2f6pmJpZx60ca7iI= 217 | google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= 218 | google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508= 219 | google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= 220 | google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c= 221 | google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= 222 | google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= 223 | google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= 224 | google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= 225 | google.golang.org/genproto v0.0.0-20230209215440-0dfe4f8abfcc h1:ijGwO+0vL2hJt5gaygqP2j6PfflOBrRot0IczKbmtio= 226 | google.golang.org/genproto v0.0.0-20230209215440-0dfe4f8abfcc/go.mod h1:RGgjbofJ8xD9Sq1VVhDM1Vok1vRONV+rg+CjzG4SZKM= 227 | google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= 228 | google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= 229 | google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= 230 | google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= 231 | google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc= 232 | google.golang.org/grpc v1.53.0 h1:LAv2ds7cmFV/XTS3XG1NneeENYrXGmorPxsBbptIjNc= 233 | google.golang.org/grpc v1.53.0/go.mod h1:OnIrk0ipVdj4N5d9IUoFUx72/VlD7+jUsHwZgwSMQpw= 234 | google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= 235 | google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= 236 | google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= 237 | google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= 238 | google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= 239 | google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= 240 | google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= 241 | google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= 242 | google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= 243 | google.golang.org/protobuf 
v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= 244 | google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= 245 | google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w= 246 | google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= 247 | gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 248 | gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 249 | gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 250 | gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= 251 | gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= 252 | gopkg.in/errgo.v1 v1.0.0/go.mod h1:CxwszS/Xz1C49Ucd2i6Zil5UToP1EmyrFhKaMVbg1mk= 253 | gopkg.in/errgo.v1 v1.0.1 h1:oQFRXzZ7CkBGdm1XZm/EbQYaYNNEElNBOd09M6cqNso= 254 | gopkg.in/errgo.v1 v1.0.1/go.mod h1:3NjfXwocQRYAPTq4/fzX+CwUhPRcR/azYRhj8G+LqMo= 255 | gopkg.in/httprequest.v1 v1.2.1 h1:pEPLMdF/gjWHnKxLpuCYaHFjc8vAB2wrYjXrqDVC16E= 256 | gopkg.in/httprequest.v1 v1.2.1/go.mod h1:x2Otw96yda5+8+6ZeWwHIJTFkEHWP/qP8pJOzqEtWPM= 257 | gopkg.in/macaroon.v2 v2.1.0 h1:HZcsjBCzq9t0eBPMKqTN/uSN6JOm78ZJ2INbqcBQOUI= 258 | gopkg.in/macaroon.v2 v2.1.0/go.mod h1:OUb+TQP/OP0WOerC2Jp/3CwhIKyIa9kQjuc7H24e6/o= 259 | gopkg.in/mgo.v2 v2.0.0-20190816093944-a6b53ec6cb22 h1:VpOs+IwYnYBaFnrNAeB8UUWtL3vEUnzSCL1nVjPhqrw= 260 | gopkg.in/mgo.v2 v2.0.0-20190816093944-a6b53ec6cb22/go.mod h1:yeKp02qBN3iKW1OzL3MGk2IdtZzaj7SFntXj72NppTA= 261 | gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= 262 | gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= 263 | gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= 264 | gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= 265 | gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 266 | gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= 267 | gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 268 | honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= 269 | honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= 270 | -------------------------------------------------------------------------------- /internal/circularbuffer/circularbuffer.go: -------------------------------------------------------------------------------- 1 | package circularbuffer 2 | 3 | // a simple circular buffer to store messages that our web clients can read later, allowing command results to be stored 4 | // and read later so new page loads can read what they may have requested previously. 
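//
// A minimal usage sketch (illustrative, not part of the original source):
//
//	buf := &CircularBuffer[string]{}
//	buf.Enqueue("container created")
//	if msg, ok := buf.Dequeue(); ok {
//		fmt.Println(msg)
//	}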
5 | 6 | import ( 7 | "sync" 8 | "time" 9 | ) 10 | 11 | // BUFLEN is the length of our buffer. Not sure what a good value is here; it doesn't need to hold everything, but does need 12 | // to be enough to maintain context on what users are seeing when they read it 13 | const BUFLEN = 20 14 | 15 | // RECENTACCESS is the number of seconds that we compare against the lastAccess time of a buffer to determine if it was 16 | // "recent" or not 17 | const RECENTACCESS = 86400 18 | // CircularBuffer is a fixed-size ring buffer guarded by a mutex 19 | type CircularBuffer[T any] struct { 20 | head int8 // "write" pointer 21 | tail int8 // "read" pointer 22 | buffer [BUFLEN]T 23 | lastAccess time.Time 24 | lock sync.Mutex 25 | } 26 | 27 | // Enqueue adds a message to our buffer and moves the head forward. If the new head pointer is going to equal our tail 28 | // then we are overwriting unread messages and need to push the tail ahead to maintain our circle 29 | func (c *CircularBuffer[T]) Enqueue(msg T) { 30 | c.lock.Lock() 31 | defer c.lock.Unlock() 32 | // record that we have accessed the buffer 33 | c.lastAccess = time.Now() 34 | // put our message in 35 | c.buffer[c.head] = msg 36 | // get our new head pointer 37 | newHead := (c.head + 1) % BUFLEN 38 | if newHead == c.tail { 39 | // maintain our circle if we have caught up to the tail; technically we could still read this tail value and 40 | // move on, but it greatly simplifies things to just move it 41 | c.tail = (c.tail + 1) % BUFLEN 42 | } 43 | // finally move the head forward 44 | c.head = newHead 45 | } 46 | 47 | // Dequeue takes the first unread message, moves our tail pointer ahead and returns the message. We also return an "ok" 48 | // boolean here so we can distinguish between a genuine zero value and no message 49 | func (c *CircularBuffer[T]) Dequeue() (T, bool) { 50 | c.lock.Lock() 51 | defer c.lock.Unlock() 52 | var msg T 53 | // record that we have accessed the buffer 54 | c.lastAccess = time.Now() 55 | // if our head and tail are equal there is nothing in our buffer to return 56 | if c.head == c.tail { 57 | return msg, false 58 | } 59 | 60 | msg = c.buffer[c.tail] 61 | c.tail = (c.tail + 1) % BUFLEN 62 | return msg, true 63 | } 64 | 65 | // HasRecentAccess returns true / false if a buffer has been "recently" accessed. Check the value of RECENTACCESS for 66 | // what we consider recent 67 | func (c *CircularBuffer[T]) HasRecentAccess() bool { 68 | // buffers shouldn't be created until they are about to be used, and this is internal, so that should be fine. That 69 | // being the case, we would make a buffer and then immediately put something in it, which would set lastAccess. So if 70 | // it's not set then we assume it's very new. This could lead to memory leaks and we will need to move to a constructor, 71 | // but for now let's experiment with this.
72 | if c.lastAccess.IsZero() { 73 | return true 74 | } 75 | diff := time.Now().Sub(c.lastAccess) 76 | // we don't need to go too crazy, just compare to seconds in a day 77 | if diff.Seconds() < RECENTACCESS { 78 | return true 79 | } 80 | return false 81 | } 82 | -------------------------------------------------------------------------------- /internal/circularbuffer/circularbuffer_test.go: -------------------------------------------------------------------------------- 1 | package circularbuffer 2 | 3 | import ( 4 | "strings" 5 | "testing" 6 | "time" 7 | ) 8 | 9 | type bufTest struct { 10 | op string // operation to perform (enqueue,dequeue) 11 | value string // value or csv list of what to enqueue or values that will come from dequeues 12 | ok bool // only for dequeue, the ok result 13 | head int8 // expected head value 14 | tail int8 // expected tail value 15 | } 16 | 17 | // test enqueue and dequeue operations 18 | func TestBuffer(t *testing.T) { 19 | buffer := &CircularBuffer[string]{} 20 | 21 | tests := []bufTest{ 22 | {op: "dequeue", value: "", ok: false, head: 0, tail: 0}, 23 | {op: "enqueue", value: "a", head: 1, tail: 0}, 24 | {op: "enqueue", value: "b,c,d,e,f,g,h", head: 8, tail: 0}, 25 | {op: "dequeue", value: "a", ok: true, head: 8, tail: 1}, 26 | {op: "dequeue", value: "b", ok: true, head: 8, tail: 2}, 27 | {op: "enqueue", value: "i,j,k,l,m,n,o,p,q,r,s,t", head: 0, tail: 2}, 28 | {op: "dequeue", value: "c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s", ok: true, head: 0, tail: 19}, 29 | {op: "dequeue", value: "t", ok: true, head: 0, tail: 0}, 30 | {op: "dequeue", value: "", ok: false, head: 0, tail: 0}, 31 | } 32 | 33 | for tidx, test := range tests { 34 | if test.op == "enqueue" { 35 | for _, v := range strings.Split(test.value, ",") { 36 | buffer.Enqueue(v) 37 | } 38 | } else if test.op == "dequeue" { 39 | for _, v := range strings.Split(test.value, ",") { 40 | bufval, ok := buffer.Dequeue() 41 | 42 | if test.ok != ok { 43 | t.Errorf("%v: expected ok %v got %v", tidx, test.ok, ok) 44 | } 45 | if v != bufval { 46 | t.Errorf("%v: expected %v got %v", tidx, v, bufval) 47 | } 48 | } 49 | } 50 | 51 | if test.head != buffer.head { 52 | t.Errorf("%v: expected head to be %v got %v", tidx, test.head, buffer.head) 53 | } 54 | if test.tail != buffer.tail { 55 | t.Errorf("%v: expected tail to be %v got %v", tidx, test.tail, buffer.tail) 56 | } 57 | } 58 | } 59 | 60 | // some basic tests for recent access where we will force various time objects into a buffer 61 | func TestRecentAccess(t *testing.T) { 62 | buffer := &CircularBuffer[string]{} 63 | if !buffer.HasRecentAccess() { 64 | t.Error("expected new buffer HasRecentAccess to be true, but it is false") 65 | } 66 | 67 | buffer.lastAccess = time.Now().Add(-100 * time.Second) 68 | if !buffer.HasRecentAccess() { 69 | t.Error("expected recent buffer HasRecentAccess to be true, but it is false") 70 | } 71 | 72 | buffer.lastAccess = time.Now().Add(-86401 * time.Second) 73 | if buffer.HasRecentAccess() { 74 | t.Error("expected old buffer HasRecentAccess to be false, but it is true") 75 | } 76 | } 77 | -------------------------------------------------------------------------------- /internal/config/config.go: -------------------------------------------------------------------------------- 1 | // Package config provides all the structure and functions for parsing and dealing 2 | // with the yaml config file 3 | package config 4 | 5 | import ( 6 | "log" 7 | "os" 8 | "strconv" 9 | "strings" 10 | 11 | "gopkg.in/yaml.v2" 12 | ) 13 | 14 | // structs here are all 
in reverse order with our main config last 15 | 16 | // LXDhost is where the details of each host we are going to talk to live 17 | type LXDhost struct { 18 | Host string `yaml:"host"` // The ip or fqdn we use to actually talk to the host 19 | Name string `yaml:"name"` // A human readable name / "alias" for the UI 20 | Port string `yaml:"port"` // The port that LXD is listening on 21 | Cert string `yaml:"cert"` // The server cert typically found in /var/lib/lxd/server.crt 22 | } 23 | 24 | // DNS settings: whether we are using DHCP or a 3rd party provider, and how it is configured 25 | type DNS struct { 26 | Provider string `yaml:"provider"` // Provider name: google, amazon, dhcp 27 | NetworkBlocks []string `yaml:"network_blocks"` // List of blocks that we can use for IPs, if not defined we can use any IP in the network 28 | TTL int `yaml:"ttl"` // Default TTL of DNS entries 29 | Zone string `yaml:"zone"` // DNS zone 30 | Options map[string]string `yaml:"options"` // Provider options, documented at the top of each provider implementation 31 | } 32 | 33 | // FileOrCommand is for bootstrapping or other setup, used as an array of sequential "things to do"; 34 | // file will upload a file to the container, command will run a command on it 35 | type FileOrCommand struct { 36 | Type string `yaml:"type"` // file or command, what we are going to do 37 | Perms int `yaml:"perms"` // for Type=file, the permissions of the file in the container 38 | LocalPath string `yaml:"local_path"` // for Type=file, the local path to the file we want to upload 39 | RemotePath string `yaml:"remote_path"` // for Type=file, where the file will live in the container 40 | Command []string `yaml:"command"` // for Type=command, the command broken apart like ["yum", "-y", "install", "foo"] 41 | OkReturnValues []float64 `yaml:"ok_return_values"` // list of return values (other than 0) we accept as ok, 0 is always acceptable 42 | } 43 | 44 | // NetworkingConfig holds a network file template and the location where it should be placed in the container 45 | type NetworkingConfig struct { 46 | RemotePath string `yaml:"remote_path"` // path of the file in the container 47 | Template string `yaml:"template"` // text/template parsable version of the file 48 | } 49 | 50 | // Config is the main config structure, mostly pulling together the above items; it also holds our client PKI 51 | type Config struct { 52 | Cert string `yaml:"cert"` // client cert, which can either be the cert contents or file:/path/here that we will read in later 53 | Key string `yaml:"key"` // client key, same as cert, contents or file:/path/here 54 | LXDhosts []*LXDhost `yaml:"lxdhosts"` // array of all the hosts we will operate on 55 | DNS DNS `yaml:"dns"` // DNS settings 56 | Networking map[string][]NetworkingConfig `yaml:"networking"` // map of OS -> network template files 57 | Bootstrap map[string][]FileOrCommand `yaml:"bootstrap"` // map to the OS type, and then an array of things to do 58 | Playbooks map[string]map[string][]FileOrCommand `yaml:"playbooks"` // map of OS -> playbook name -> list of things to do 59 | } 60 |
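// For orientation, a minimal config matching the yaml tags above might look like
// this (illustrative, with hypothetical values mirroring configs/sample.yaml):
//
//	cert: file:/opt/lxdepot/configs/client.crt
//	key: file:/opt/lxdepot/configs/client.key
//	lxdhosts:
//	  - host: 192.168.1.100
//	    name: mylxdhost
//	    port: 8443
//	    cert: file:/opt/lxdepot/configs/server.crt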
61 | // ParseConfig is the only function that external users need to know about. 62 | // This will read the file from disk and, as its name implies, parse and unmarshal it using yaml. 63 | // We then call verifyConfig to make sure all the needed settings are there; any error we encounter 64 | // is something a user should know about on startup, so we log and die 65 | func ParseConfig(configFile string) *Config { 66 | bytes, err := os.ReadFile(configFile) 67 | if err != nil { 68 | log.Fatal("Could not read config [" + configFile + "] : " + err.Error() + "\n") 69 | } 70 | 71 | var config Config 72 | err = yaml.Unmarshal(bytes, &config) 73 | if err != nil { 74 | log.Fatal("Could not parse config [" + configFile + "] : " + err.Error() + "\n") 75 | } 76 | 77 | // because there is no conformity with image os names / releases we are going to lowercase 78 | // them all in our internal struct here so we have some sanity 79 | for os := range config.Networking { 80 | config.Networking[strings.ToLower(os)] = config.Networking[os] 81 | } 82 | for os := range config.Bootstrap { 83 | config.Bootstrap[strings.ToLower(os)] = config.Bootstrap[os] 84 | } 85 | for os := range config.Playbooks { 86 | config.Playbooks[strings.ToLower(os)] = config.Playbooks[os] 87 | } 88 | 89 | config.verifyConfig() 90 | 91 | return &config 92 | } 93 | 94 | // verifyConfig checks to make sure that the absolutely needed parts are here. 95 | // If you are using bootstrapping or a third party DNS, more checking should be 96 | // added for those items 97 | func (c *Config) verifyConfig() { 98 | if c.Cert == "" { 99 | log.Fatal("cert (client certificate) missing from config\n") 100 | } 101 | c.Cert = getValueOrFileContents(c.Cert) 102 | 103 | if c.Key == "" { 104 | log.Fatal("key (client key) missing from config\n") 105 | } 106 | c.Key = getValueOrFileContents(c.Key) 107 | 108 | if len(c.LXDhosts) == 0 { 109 | log.Fatal("no lxdhosts defined\n") 110 | } 111 | 112 | for idx, lxdh := range c.LXDhosts { 113 | if lxdh.Host == "" { 114 | log.Fatal("missing host param for lxdhost at index: " + strconv.Itoa(idx) + "\n") 115 | } 116 | if lxdh.Cert == "" { 117 | log.Fatal("missing certificate for lxdhost: " + lxdh.Host + "\n") 118 | } 119 | lxdh.Cert = getValueOrFileContents(lxdh.Cert) 120 | } 121 | } 122 |
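// A typical startup call, for illustration (mirrors what cmd/lxdepot/lxdepot.go
// does; the path here is hypothetical):
//
//	cfg := config.ParseConfig("/opt/lxdepot/configs/config.yaml")
//	// cfg.LXDhosts, cfg.DNS, etc. are now populated and verified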
123 | // getValueOrFileContents is used by verifyConfig to check if the value of a param is file:/path 124 | // or not. If it is, we read the file from disk and return the contents; if it isn't, we just return 125 | // the value we were passed 126 | func getValueOrFileContents(value string) string { 127 | if strings.HasPrefix(value, "file:") { 128 | data, err := os.ReadFile(strings.TrimPrefix(value, "file:")) 129 | if err != nil { 130 | log.Fatal("Could not read file " + strings.TrimPrefix(value, "file:") + " : " + err.Error()) 131 | } 132 | return string(data) 133 | } 134 | 135 | return value 136 | } 137 | -------------------------------------------------------------------------------- /internal/dns/amazon.go: -------------------------------------------------------------------------------- 1 | package dns 2 | 3 | import ( 4 | "errors" 5 | "strconv" 6 | "strings" 7 | "time" 8 | 9 | "github.com/aws/aws-sdk-go/aws" 10 | "github.com/aws/aws-sdk-go/aws/credentials" 11 | "github.com/aws/aws-sdk-go/aws/session" 12 | "github.com/aws/aws-sdk-go/service/route53" 13 | ) 14 | 15 | // AmazonDNS stores all the options we need to talk to Route 53 16 | type AmazonDNS struct { 17 | CredsFile string // the shared credentials filename (full path) 18 | Profile string // profile within the creds file to use, "" for default 19 | ZoneID string // the Route 53 DNS zone we are using 20 | } 21 | 22 | // amazonRrsetCache is the cache of all the recordsets; figure if the system is in 23 | // regular use it's better to store these for a few minutes than make a call each time 24 | type amazonRrsetCache struct { 25 | Rrsets []*route53.ResourceRecordSet 26 | CacheTime time.Time 27 | } 28 | 29 | var acache amazonRrsetCache 30 | 31 | // NewAmazonDNS will return our Amazon Route 53 DNS interface 32 | func NewAmazonDNS(credsfile string, profile string, zoneid string) *AmazonDNS { 33 | return &AmazonDNS{ 34 | CredsFile: credsfile, 35 | Profile: profile, 36 | ZoneID: zoneid, 37 | } 38 | } 39 | 40 | // getDNSService takes the credentials and returns a Route 53 DNS service 41 | func (a *AmazonDNS) getDNSService() (*route53.Route53, error) { 42 | session, err := session.NewSession(&aws.Config{ 43 | Credentials: credentials.NewSharedCredentials(a.CredsFile, a.Profile), 44 | }) 45 | if err != nil { 46 | return nil, errors.New("Could not create new Route 53 session: " + err.Error()) 47 | } 48 | 49 | service := route53.New(session) 50 | 51 | return service, nil 52 | } 53 | 54 | // getZoneRecordSet either returns our cache of records or fetches new ones. 55 | func (a *AmazonDNS) getZoneRecordSet() error { 56 | if acache.CacheTime != (time.Time{}) { 57 | now := time.Now() 58 | if now.Sub(acache.CacheTime).Seconds() <= 30 { 59 | return nil 60 | } 61 | acache = amazonRrsetCache{} 62 | } 63 | 64 | service, err := a.getDNSService() 65 | if err != nil { 66 | return err 67 | } 68 | 69 | params := &route53.ListResourceRecordSetsInput{ 70 | HostedZoneId: aws.String(a.ZoneID), 71 | } 72 | 73 | err = service.ListResourceRecordSetsPages(params, 74 | func(page *route53.ListResourceRecordSetsOutput, lastPage bool) bool { 75 | acache.Rrsets = append(acache.Rrsets, page.ResourceRecordSets...) 76 | return !lastPage // returning false stops pagination, so keep going until the last page 77 | }) 78 | if err != nil { 79 | return err 80 | } 81 | 82 | acache.CacheTime = time.Now() 83 | return nil 84 | } 85 | 86 | // createARecord creates the entry in Route 53 87 | func (a *AmazonDNS) createARecord(name string, ip string) error { 88 | service, err := a.getDNSService() 89 | if err != nil { 90 | return err 91 | } 92 | 93 | // This is all internal so this should be safe, but check anyway; if it doesn't have a .
94 |     // assume we need to append the zone name to our hostname; the name needs to end in . for AWS to accept it
95 |     if !strings.Contains(name, ".") {
96 |         name = name + "." + DNSOptions.Zone + "."
97 |     }
98 | 
99 |     params := &route53.ChangeResourceRecordSetsInput{
100 |         ChangeBatch: &route53.ChangeBatch{
101 |             Changes: []*route53.Change{
102 |                 {
103 |                     Action: aws.String("UPSERT"),
104 |                     ResourceRecordSet: &route53.ResourceRecordSet{
105 |                         Name: aws.String(name),
106 |                         Type: aws.String("A"),
107 |                         ResourceRecords: []*route53.ResourceRecord{
108 |                             {
109 |                                 Value: aws.String(ip),
110 |                             },
111 |                         },
112 |                         TTL:           aws.Int64(int64(DNSOptions.TTL)),
113 |                         Weight:        aws.Int64(1),
114 |                         SetIdentifier: aws.String("lxdepot"),
115 |                     },
116 |                 },
117 |             },
118 |             Comment: aws.String("Adding A record for " + name),
119 |         },
120 |         HostedZoneId: aws.String(a.ZoneID),
121 |     }
122 |     _, err = service.ChangeResourceRecordSets(params)
123 | 
124 |     return err // will either be an error or nil; either way it is what we want to return at this point
125 | }
126 | 
127 | // deleteARecord removes the host from DNS. At the moment it removes all the records for the host, so the
128 | // name is a little bit misleading. It does this by pulling the record sets into cache, matching
129 | // the correct record set by name, and passing that back as a deletion.
130 | func (a *AmazonDNS) deleteARecord(name string) error {
131 |     service, err := a.getDNSService()
132 |     if err != nil {
133 |         return err
134 |     }
135 | 
136 |     // Make sure our cache is up to date
137 |     err = a.getZoneRecordSet()
138 |     if err != nil {
139 |         return err
140 |     }
141 | 
142 |     // Like in createARecord, if we don't have a . in the name assume we need to append everything. I think
143 |     // ideally we should reject hostnames with a . in them and just force us to be the arbiter of a good name
144 |     if !strings.Contains(name, ".") {
145 |         name = name + "." + DNSOptions.Zone + "."
146 |     }
147 | 
148 |     // Loop over our cache and grab the record set by name; we will pass this to our delete request
149 |     var rrset *route53.ResourceRecordSet
150 |     for _, set := range acache.Rrsets {
151 |         if *set.Type == "A" && *set.Name == name {
152 |             rrset = set
153 |             break
154 |         }
155 |     }
156 | 
157 |     // if we found a record set, remove it
158 |     if rrset != nil {
159 |         params := &route53.ChangeResourceRecordSetsInput{
160 |             ChangeBatch: &route53.ChangeBatch{
161 |                 Changes: []*route53.Change{
162 |                     {
163 |                         Action:            aws.String("DELETE"),
164 |                         ResourceRecordSet: rrset,
165 |                     },
166 |                 },
167 |                 Comment: aws.String("Deleting A record for " + name),
168 |             },
169 |             HostedZoneId: aws.String(a.ZoneID),
170 |         }
171 |         _, err := service.ChangeResourceRecordSets(params)
172 |         if err != nil {
173 |             return err
174 |         }
175 | 
176 |         // Pop the cache instead of trying to be clever
177 |         acache.CacheTime = time.Time{}
178 |     }
179 | 
180 |     return nil
181 | }
182 | 
183 | // GetARecord returns an A record for our host. If the host already has one,
184 | // this will return the first record encountered; it does not currently ensure that
185 | // record is in the network we are asking for. If there is no existing record, it will
186 | // loop over a 3 dimensional array looking for a free entry to use.
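Before the implementation, a standalone sketch of the marking scheme that comment describes: the last three octets of every A record index into a [256][256][256] array, and findFreeARecord later scans for a zero slot (this fragment assumes the strings and strconv imports already in this file):
```
// Mark 10.0.3.7 as used the same way GetARecord does.
var used [256][256][256]int
octets := strings.Split("10.0.3.7", ".")
o2, _ := strconv.Atoi(octets[1])
o3, _ := strconv.Atoi(octets[2])
o4, _ := strconv.Atoi(octets[3])
used[o2][o3][o4] = 1 // used[0][3][7] is now taken, so 10.0.3.7 won't be handed out
```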
187 | func (a *AmazonDNS) GetARecord(name string, networkBlocks []string) (string, error) {
188 |     // Make sure our cache is up to date
189 |     err := a.getZoneRecordSet()
190 |     if err != nil {
191 |         return "", err
192 |     }
193 | 
194 |     // Make sure we are looking for the fqdn
195 |     if !strings.Contains(name, ".") {
196 |         name = name + "." + DNSOptions.Zone + "."
197 |     }
198 | 
199 |     // This is going to "mark off" all the records we have, so then we can loop over it and find a free spot
200 |     var list [256][256][256]int
201 |     for _, set := range acache.Rrsets {
202 |         if *set.Type == "A" {
203 |             // We already have our host in DNS
204 |             if *set.Name == name {
205 |                 return *set.ResourceRecords[0].Value, nil
206 |             }
207 | 
208 |             for _, rr := range set.ResourceRecords {
209 |                 octets := strings.Split(*rr.Value, ".")
210 |                 o2, _ := strconv.Atoi(octets[1])
211 |                 o3, _ := strconv.Atoi(octets[2])
212 |                 o4, _ := strconv.Atoi(octets[3])
213 |                 list[o2][o3][o4] = 1
214 |             }
215 |         }
216 |     }
217 | 
218 |     ip, err := findFreeARecord(&list, networkBlocks)
219 |     if err != nil {
220 |         return "", err
221 |     }
222 | 
223 |     err = a.createARecord(name, ip)
224 |     // just return the IP we found along with err, which will be an error or nil; callers should check that first
225 |     return ip, err
226 | }
227 | 
228 | // RemoveARecord passes our name to deleteARecord as it doesn't have to do any additional processing
229 | func (a *AmazonDNS) RemoveARecord(name string) error {
230 |     err := a.deleteARecord(name)
231 |     return err
232 | }
233 | 
234 | // ListARecords repopulates the internal cache and then appends any A records it finds to a
235 | // RecordList array and returns that
236 | func (a *AmazonDNS) ListARecords() ([]RecordList, error) {
237 |     var list []RecordList
238 | 
239 |     // Make sure our cache is up to date
240 |     err := a.getZoneRecordSet()
241 |     if err != nil {
242 |         return list, err
243 |     }
244 | 
245 |     for _, set := range acache.Rrsets {
246 |         if *set.Type == "A" {
247 |             records := make([]string, len(set.ResourceRecords))
248 |             for idx, rr := range set.ResourceRecords {
249 |                 records[idx] = *rr.Value
250 |             }
251 |             list = append(list, RecordList{Name: *set.Name, RecordSet: records})
252 |         }
253 |     }
254 | 
255 |     return list, nil
256 | }
257 | 
-------------------------------------------------------------------------------- /internal/dns/dns.go: --------------------------------------------------------------------------------
1 | // Package dns is for our 3rd party DNS integrations
2 | package dns
3 | 
4 | import (
5 |     "errors"
6 |     "fmt"
7 |     "net"
8 |     "strings"
9 | 
10 |     "github.com/neophenix/lxdepot/internal/config"
11 | )
12 | 
13 | // RecordList is a simple look at DNS records used as a common return for our interface
14 | type RecordList struct {
15 |     Name      string   // the name of the entry
16 |     RecordSet []string // the values in the entry
17 | }
18 | 
19 | // The DNS interface provides the list of functions all our 3rd party integrations should
20 | // support. I don't like that I coded the record type in the name, but until I decide
21 | // I need IPv6, etc. it's good enough
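For orientation, a hypothetical caller of this interface; `conf` is assumed to be a loaded *config.Config, the hostname is made up, and New (below) picks the concrete provider:
```
// Hypothetical caller: allocate (or fetch) an IPv4 address for a new container.
d := dns.New(conf) // *GoogleDNS, *AmazonDNS, or nil for an unknown provider
if d == nil {
    log.Fatal("no usable DNS provider configured")
}
ip, err := d.GetARecord("web01", conf.DNS.NetworkBlocks)
```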
22 | type DNS interface {
23 |     GetARecord(name string, networkBlocks []string) (string, error) // returns a string representation of an IPv4 address
24 |     RemoveARecord(name string) error                                // removes the record from our 3rd party
25 |     ListARecords() ([]RecordList, error)                            // returns a list of all the A records
26 | }
27 | 
28 | // DNSOptions holds the various options from the main config we might want to use; this does
29 | // mean these values are in multiple places, which is odd, but they don't change except on restart (today)
30 | var DNSOptions config.DNS
31 | 
32 | // New should just hand back the appropriate interface for our config settings,
33 | // returning from the correct "New" function for our integration
34 | func New(conf *config.Config) DNS {
35 |     DNSOptions = conf.DNS
36 | 
37 |     if conf.DNS.Provider == "google" {
38 |         return NewGoogleDNS(conf.DNS.Options["gcp_creds_file"], conf.DNS.Options["gcp_project_name"], conf.DNS.Options["gcp_zone_name"])
39 |     } else if conf.DNS.Provider == "amazon" {
40 |         return NewAmazonDNS(conf.DNS.Options["aws_creds_file"], conf.DNS.Options["aws_creds_profile"], conf.DNS.Options["aws_zone_id"])
41 |     }
42 | 
43 |     return nil
44 | }
45 | 
46 | // findFreeARecord takes a populated list of octets 2->4 and a list of network blocks, looks through the list
47 | // to find an entry == 0 indicating that IP is free, and returns it. Blocks are used in order and we skip
48 | // 0 and 255 for octet4
49 | func findFreeARecord(list *[256][256][256]int, networkBlocks []string) (string, error) {
50 |     for _, block := range networkBlocks {
51 |         ips := strings.Split(block, ",")
52 |         _, startnet, err := net.ParseCIDR(strings.TrimSpace(ips[0]))
53 |         if err != nil {
54 |             return "", err
55 |         }
56 |         _, endnet, err := net.ParseCIDR(strings.TrimSpace(ips[1]))
57 |         if err != nil {
58 |             return "", err
59 |         }
60 | 
61 |         octet1 := int(startnet.IP[0])
62 |         octet2 := int(startnet.IP[1])
63 |         octet3 := int(startnet.IP[2])
64 |         octet4 := int(startnet.IP[3])
65 | 
66 |         // don't hand back a .0
67 |         if octet4 == 0 {
68 |             octet4 = 1
69 |         }
70 | 
71 |         for ; octet2 <= 255; octet2++ {
72 |             if octet2 > int(endnet.IP[1]) {
73 |                 break
74 |             }
75 | 
76 |             for ; octet3 <= 255; octet3++ {
77 |                 if octet3 > int(endnet.IP[2]) {
78 |                     break
79 |                 }
80 | 
81 |                 // don't hand out a .255 so only look up to .254
82 |                 for ; octet4 <= 254; octet4++ {
83 |                     if octet4 > int(endnet.IP[3]) {
84 |                         break
85 |                     }
86 | 
87 |                     if list[octet2][octet3][octet4] == 0 {
88 |                         return fmt.Sprintf("%v.%v.%v.%v", octet1, octet2, octet3, octet4), nil
89 |                     }
90 |                 }
91 |                 octet4 = 1
92 |             }
93 |             octet3 = 0
94 |         }
95 |     }
96 | 
97 |     return "", errors.New("Could not find a free A record")
98 | }
99 | 
-------------------------------------------------------------------------------- /internal/dns/dns_test.go: --------------------------------------------------------------------------------
1 | package dns
2 | 
3 | import "testing"
4 | 
5 | func TestFindFreeARecord(t *testing.T) {
6 |     var list [256][256][256]int
7 | 
8 |     // Test 1, make sure we can find a free IP in a simple case where 0 -> 49 are used
9 |     for i := 0; i < 50; i++ {
10 |         list[0][0][i] = 1
11 |     }
12 |     ip, err := findFreeARecord(&list, []string{"10.0.0.2/32,10.0.0.100/32"})
13 |     if err != nil {
14 |         t.FailNow()
15 |     }
16 |     if ip != "10.0.0.50" {
17 |         t.Errorf("T1: Expected 10.0.0.50 got %v", ip)
18 |     }
19 | 
20 |     // Test 2, could not find a record
21 |     ip, err = findFreeARecord(&list, []string{"10.0.0.2/32, 10.0.0.40/32"})
22 |     if ip != "" {
23 |         t.Errorf("T2: Expected no ip got %v", ip)
24 |     }
25 | 
26 |     // Test 3, find an IP in a second block passed when the first is used up
27 |     ip, err = findFreeARecord(&list, []string{"10.0.0.2/32,10.0.0.25/32", "10.0.0.40/32, 10.0.0.100/32"})
28 |     if err != nil {
29 |         t.FailNow()
30 |     }
31 |     if ip != "10.0.0.50" {
32 |         t.Errorf("T3: Expected 10.0.0.50 got %v", ip)
33 |     }
34 | 
35 |     // Test 4, find an IP after exhausting the 3rd octet
36 |     for i := 0; i < 256; i++ {
37 |         list[0][0][i] = 1
38 |     }
39 |     ip, err = findFreeARecord(&list, []string{"10.0.0.2/32,10.0.1.255/32"})
40 |     if err != nil {
41 |         t.FailNow()
42 |     }
43 |     if ip != "10.0.1.1" {
44 |         t.Errorf("T4: Expected 10.0.1.1 got %v", ip)
45 |     }
46 | 
47 |     // Test 5, find a record where the end block's 4th octet is smaller than the start block's;
48 |     // basically the same as above, but I figured I might have got this wrong and they can fail in
49 |     // different ways
50 |     ip, err = findFreeARecord(&list, []string{"10.0.0.100/32,10.0.1.40/32"})
51 |     if err != nil {
52 |         t.FailNow()
53 |     }
54 |     if ip != "10.0.1.1" {
55 |         t.Errorf("T5: Expected 10.0.1.1 got %v", ip)
56 |     }
57 | }
58 | 
-------------------------------------------------------------------------------- /internal/dns/google.go: --------------------------------------------------------------------------------
1 | package dns
2 | 
3 | import (
4 |     "errors"
5 |     "os"
6 |     "strconv"
7 |     "strings"
8 |     "time"
9 | 
10 |     "golang.org/x/oauth2"
11 |     "golang.org/x/oauth2/google"
12 |     gdns "google.golang.org/api/dns/v2beta1"
13 | )
14 | 
15 | // GoogleDNS stores all the options we need to talk to GCP
16 | type GoogleDNS struct {
17 |     Creds   []byte // the contents of our service account json credentials file
18 |     Project string // the GCP project name we are operating on
19 |     Zone    string // the GCP DNS zone we are using
20 | }
21 | 
22 | // googleRrsetCache is the cache of all the recordsets; figure if the system is in
23 | // regular use it's better to store these for a few minutes than make a call each time
24 | type googleRrsetCache struct {
25 |     Rrsets    []*gdns.ResourceRecordSet
26 |     CacheTime time.Time
27 | }
28 | 
29 | var gcache googleRrsetCache
30 | 
31 | // NewGoogleDNS will return our GCP DNS interface
32 | // The creds, project, and zone here are actually in the options as well, but they are important
33 | // enough to warrant being "top level" items
34 | func NewGoogleDNS(creds string, project string, zone string) *GoogleDNS {
35 |     data, _ := os.ReadFile(creds)
36 |     return &GoogleDNS{
37 |         Creds:   data,
38 |         Project: project,
39 |         Zone:    zone,
40 |     }
41 | }
42 | 
43 | // getDNSService takes the credentials and should return a GCP DNS Service, provided the creds are still good
44 | func (g *GoogleDNS) getDNSService() (*gdns.Service, error) {
45 |     conf, err := google.JWTConfigFromJSON(g.Creds, "https://www.googleapis.com/auth/ndev.clouddns.readwrite")
46 |     if err != nil {
47 |         return nil, errors.New("Could not create Google JWT config: " + err.Error())
48 |     }
49 | 
50 |     client := conf.Client(oauth2.NoContext)
51 | 
52 |     s, err := gdns.New(client)
53 |     if err != nil {
54 |         return nil, errors.New("Could not make Google DNS service: " + err.Error())
55 |     }
56 | 
57 |     return s, nil
58 | }
59 | 
60 | // getZoneRecordSet either returns our cache of records or fetches new ones.
61 | // This is recursive if we run into pagination
62 | func (g *GoogleDNS) getZoneRecordSet(token string) error {
63 |     if token == "" && gcache.CacheTime != (time.Time{}) {
64 |         now := time.Now()
65 |         if now.Sub(gcache.CacheTime).Seconds() <= 30 {
66 |             return nil
67 |         }
68 |         gcache = googleRrsetCache{}
69 |     }
70 | 
71 |     service, err := g.getDNSService()
72 |     if err != nil {
73 |         return err
74 |     }
75 | 
76 |     rrs := gdns.NewResourceRecordSetsService(service)
77 |     rrsl := rrs.List(g.Project, g.Zone)
78 |     if token != "" {
79 |         rrsl = rrsl.PageToken(token)
80 |     }
81 |     resp, err := rrsl.Do()
82 |     if err != nil {
83 |         return errors.New("Error fetching record set: " + err.Error())
84 |     }
85 | 
86 |     gcache.Rrsets = append(gcache.Rrsets, resp.Rrsets...)
87 |     gcache.CacheTime = time.Now()
88 |     if resp.NextPageToken != "" {
89 |         return g.getZoneRecordSet(resp.NextPageToken)
90 |     }
91 | 
92 |     return nil
93 | }
94 | 
95 | // createARecord creates the entry in GCP
96 | func (g *GoogleDNS) createARecord(name string, ip string) error {
97 |     service, err := g.getDNSService()
98 |     if err != nil {
99 |         return err
100 |     }
101 | 
102 |     // This is all internal so this should be safe, but check anyway: if it doesn't have a . assume we need to
103 |     // append the zone name to our hostname; the name needs to end in . for GCP to accept it
104 |     if !strings.Contains(name, ".") {
105 |         name = name + "." + DNSOptions.Zone + "."
106 |     }
107 | 
108 |     recordset := gdns.ResourceRecordSet{
109 |         Kind:    "dns#resourceRecordSet",
110 |         Name:    name,
111 |         Rrdatas: []string{ip},
112 |         Ttl:     int64(DNSOptions.TTL),
113 |         Type:    "A",
114 |     }
115 | 
116 |     // Standard GCP API usage is: make the change object, ask for a change service based on our overall service,
117 |     // then pass the change to the change service to perform the operation
118 |     change := gdns.Change{
119 |         Kind:      "dns#change",
120 |         Additions: []*gdns.ResourceRecordSet{&recordset},
121 |     }
122 | 
123 |     cs := gdns.NewChangesService(service)
124 |     ccc := cs.Create(g.Project, g.Zone, &change)
125 |     _, err = ccc.Do()
126 |     return err // will either be an error or nil; either way it is what we want to return at this point
127 | }
128 | 
129 | // deleteARecord removes the host from DNS. At the moment it removes all the records for the host, so the
130 | // name is a little bit misleading. It does this by pulling the record sets into cache, matching
131 | // the correct record set by name, and passing that back as a deletion.
132 | func (g *GoogleDNS) deleteARecord(name string) error {
133 |     service, err := g.getDNSService()
134 |     if err != nil {
135 |         return err
136 |     }
137 | 
138 |     // Make sure our cache is up to date
139 |     err = g.getZoneRecordSet("")
140 |     if err != nil {
141 |         return err
142 |     }
143 | 
144 |     // Like in createARecord, if we don't have a . in the name assume we need to append everything. I think
145 |     // ideally we should reject hostnames with a . in them and just force us to be the arbiter of a good name
146 |     if !strings.Contains(name, ".") {
147 |         name = name + "." + DNSOptions.Zone + "."
148 |     }
149 | 
150 |     // Loop over our cache and grab the record set by name; we will pass this to our delete request
151 |     var rrset *gdns.ResourceRecordSet
152 |     for _, set := range gcache.Rrsets {
153 |         if set.Name == name {
154 |             rrset = set
155 |             break
156 |         }
157 |     }
158 | 
159 |     // if we found a record set, remove it
160 |     if rrset != nil {
161 |         change := gdns.Change{
162 |             Kind:      "dns#change",
163 |             Deletions: []*gdns.ResourceRecordSet{rrset},
164 |         }
165 | 
166 |         cs := gdns.NewChangesService(service)
167 |         ccc := cs.Create(g.Project, g.Zone, &change)
168 |         _, err = ccc.Do()
169 |         if err != nil {
170 |             return err
171 |         }
172 | 
173 |         // Pop the cache instead of trying to be clever
174 |         gcache.CacheTime = time.Time{}
175 |     }
176 | 
177 |     return nil
178 | }
179 | 
180 | // GetARecord returns an A record for our host. If the host already has one,
181 | // this will return the first record encountered; it does not currently ensure that
182 | // record is in the network we are asking for. If there is no existing record, it will
183 | // loop over a 3 dimensional array looking for a free entry to use.
184 | func (g *GoogleDNS) GetARecord(name string, networkBlocks []string) (string, error) {
185 |     // Make sure our cache is up to date
186 |     err := g.getZoneRecordSet("")
187 |     if err != nil {
188 |         return "", err
189 |     }
190 | 
191 |     // Make sure we are looking for the fqdn
192 |     if !strings.Contains(name, ".") {
193 |         name = name + "." + DNSOptions.Zone + "."
194 |     }
195 | 
196 |     // This is going to "mark off" all the records we have, so then we can loop over it and find a free spot
197 |     var list [256][256][256]int
198 |     for _, set := range gcache.Rrsets {
199 |         if set.Type == "A" {
200 |             for _, ip := range set.Rrdatas {
201 |                 octets := strings.Split(ip, ".")
202 |                 o2, _ := strconv.Atoi(octets[1])
203 |                 o3, _ := strconv.Atoi(octets[2])
204 |                 o4, _ := strconv.Atoi(octets[3])
205 |                 list[o2][o3][o4] = 1
206 |             }
207 | 
208 |             // We already have our host in DNS
209 |             if set.Name == name {
210 |                 return set.Rrdatas[0], nil
211 |             }
212 |         }
213 |     }
214 | 
215 |     ip, err := findFreeARecord(&list, networkBlocks)
216 |     if err != nil {
217 |         return "", err
218 |     }
219 | 
220 |     err = g.createARecord(name, ip)
221 |     // just return the IP we found along with err, which will be an error or nil; callers should check that first
222 |     return ip, err
223 | }
224 | 
225 | // RemoveARecord passes our name to deleteARecord as it doesn't have to do any additional processing
226 | func (g *GoogleDNS) RemoveARecord(name string) error {
227 |     err := g.deleteARecord(name)
228 |     return err
229 | }
230 | 
231 | // ListARecords repopulates the internal cache and then appends any A records it finds to a
232 | // RecordList array and returns that
233 | func (g *GoogleDNS) ListARecords() ([]RecordList, error) {
234 |     var list []RecordList
235 | 
236 |     // Make sure our cache is up to date
237 |     err := g.getZoneRecordSet("")
238 |     if err != nil {
239 |         return list, err
240 |     }
241 | 
242 |     for _, set := range gcache.Rrsets {
243 |         if set.Type == "A" {
244 |             list = append(list, RecordList{Name: set.Name, RecordSet: set.Rrdatas})
245 |         }
246 |     }
247 | 
248 |     return list, nil
249 | }
250 | 
-------------------------------------------------------------------------------- /internal/handlers/handler_404.go: --------------------------------------------------------------------------------
1 | package handlers
2 | 
3 | import (
4 |     "bytes"
5 |     "fmt"
6 |     "net/http"
7 | )
8 | 
9 | // FourOhFourHandler is our 404 response
10 | func FourOhFourHandler(w http.ResponseWriter, r *http.Request) {
11 |     w.Header().Set("Content-Type", "text/html")
12 | 
13 |     tmpl := readTemplate("404.tmpl")
14 | 
15 |     var out bytes.Buffer
16 |     tmpl.ExecuteTemplate(&out, "base", map[string]interface{}{
17 |         "Page": "none",
18 |     })
19 | 
20 |     fmt.Fprint(w, out.String())
21 | }
22 | 
-------------------------------------------------------------------------------- /internal/handlers/handler_containers.go: --------------------------------------------------------------------------------
1 | package handlers
2 | 
3 | import (
4 |     "bytes"
5 |     "encoding/json"
6 |     "fmt"
7 |     "html/template"
8 |     "log"
9 |     "net/http"
10 |     "regexp"
11 |     "strings"
12 | 
13 |     "github.com/neophenix/lxdepot/internal/lxd"
14 | )
15 | 
16 | // ContainerListHandler handles requests for /containers
17 | func ContainerListHandler(w http.ResponseWriter, r *http.Request) {
18 |     w.Header().Set("Content-Type", "text/html")
19 | 
20 |     containerInfo, err := lxd.GetContainers("", "", true)
21 |     if err != nil {
22 |         log.Printf("Could not get container list %s\n", err.Error())
23 |     }
24 | 
25 |     tmpl := readTemplate("container_list.tmpl")
26 | 
27 |     var out bytes.Buffer
28 |     tmpl.ExecuteTemplate(&out, "base", map[string]interface{}{
29 |         "Page":       "containers",
30 |         "Containers": containerInfo,
31 |     })
32 | 
33 |     fmt.Fprint(w, out.String())
34 | }
35 | 
36 | // ContainerHostListHandler handles requests for /containers/HOST
37 | func ContainerHostListHandler(w http.ResponseWriter, r *http.Request) {
38 |     w.Header().Set("Content-Type", "text/html")
39 | 
40 |     reg := regexp.MustCompile("/containers/(?P<host>[^:]+)")
41 |     match := reg.FindStringSubmatch(r.URL.Path)
42 | 
43 |     if len(match) != 2 {
44 |         FourOhFourHandler(w, r)
45 |         return
46 |     }
47 | 
48 |     // Check that the host is actually one we have configured for use
49 |     found := false
50 |     for _, lxdh := range Conf.LXDhosts {
51 |         if lxdh.Host == match[1] {
52 |             found = true
53 |         }
54 |     }
55 |     if !found {
56 |         FourOhFourHandler(w, r)
57 |         return
58 |     }
59 | 
60 |     containerInfo, err := lxd.GetContainers(match[1], "", true)
61 |     if err != nil {
62 |         log.Printf("Could not get container list %s\n", err.Error())
63 |     }
64 | 
65 |     tmpl := readTemplate("container_list.tmpl")
66 | 
67 |     var out bytes.Buffer
68 |     tmpl.ExecuteTemplate(&out, "base", map[string]interface{}{
69 |         "Page":       "containers",
70 |         "Containers": containerInfo,
71 |     })
72 | 
73 |     fmt.Fprint(w, out.String())
74 | }
75 | 
76 | // ContainerHandler handles requests for /container/HOST:NAME
77 | func ContainerHandler(w http.ResponseWriter, r *http.Request) {
78 |     w.Header().Set("Content-Type", "text/html")
79 | 
80 |     reg := regexp.MustCompile("/container/(?P<host>[^:]+):(?P<name>.+)")
81 |     match := reg.FindStringSubmatch(r.URL.Path)
82 | 
83 |     if len(match) != 3 {
84 |         FourOhFourHandler(w, r)
85 |         return
86 |     }
87 | 
88 |     containerInfo, err := lxd.GetContainers(match[1], match[2], true)
89 |     if err != nil {
90 |         log.Printf("Could not get container list %s\n", err.Error())
91 |     }
92 |     if len(containerInfo) == 0 {
93 |         FourOhFourHandler(w, r)
94 |         return
95 |     }
96 | 
97 |     // Check to see if we have a bootstrap section and playbooks section for
98 |     // this OS; if we do, build a list of those items for the UI to list off
99 |     // to the user as options to run
100 |     var playbooks []string
101 |     os := strings.ToLower(containerInfo[0].Container.ExpandedConfig["image.os"] + containerInfo[0].Container.ExpandedConfig["image.release"])
102 |     if pbs, ok := Conf.Playbooks[os]; ok {
103 |         for name := range pbs {
104 |             playbooks = append(playbooks, name)
105 |         }
106 |     }
107 |     if _, ok := Conf.Bootstrap[os]; ok {
108 |         playbooks = append(playbooks, "bootstrap")
109 |     }
110 | 
111 |     tmpl := readTemplate("container.tmpl")
112 | 
113 |     var out bytes.Buffer
114 |     err = tmpl.ExecuteTemplate(&out, "base", map[string]interface{}{
115 |         "Page":      "containers",
116 |         "Conf":      Conf,
117 |         "Container": containerInfo[0],
118 |         "Playbooks": playbooks,
119 |     })
120 |     if err != nil {
121 |         log.Printf("%v\n", err.Error())
122 |     }
123 | 
124 |     fmt.Fprint(w, out.String())
125 | }
126 | 
127 | // NewContainerHandler handles requests for /container/new
128 | func NewContainerHandler(w http.ResponseWriter, r *http.Request) {
129 |     w.Header().Set("Content-Type", "text/html")
130 | 
131 |     // images: we want a map of host -> image aliases so we can use JS
132 |     // in the template to make sure we only select an image on the selected host
133 |     images, err := lxd.GetImages("")
134 |     if err != nil {
135 |         log.Printf("Could not get image list %s\n", err.Error())
136 |     }
137 |     imageMap := make(map[string][]string)
138 |     for _, image := range images {
139 |         imageMap[image.Host.Host] = append(imageMap[image.Host.Host], image.Aliases[0].Name)
140 |     }
141 |     imageJSON, err := json.Marshal(imageMap)
142 |     if err != nil {
143 |         log.Printf("Could not JSONify images: %s\n", err.Error())
144 |     }
145 | 
146 |     // Like the images, we are going to get a mapping of host resources and then
147 |     // convert that to JSON to give the template something to work with
148 |     hostResourceMap, err := lxd.GetHostResources("")
149 |     if err != nil {
150 |         log.Printf("Could not get host resource list %s\n", err.Error())
151 |     }
152 | 
153 |     hostResourceJSON, err := json.Marshal(hostResourceMap)
154 |     if err != nil {
155 |         log.Printf("Could not JSONify host resource info %s\n", err.Error())
156 |     }
157 | 
158 |     // Now grab the list of available storage pools so we can select that on creation
159 |     hostStoragePools, err := lxd.GetStoragePools("")
160 |     if err != nil {
161 |         log.Printf("Could not get storage pools %s\n", err.Error())
162 |     }
163 | 
164 |     hostStorageJSON, err := json.Marshal(hostStoragePools)
165 |     if err != nil {
166 |         log.Printf("Could not JSONify storage pools %s\n", err.Error())
167 |     }
168 | 
169 |     tmpl := readTemplate("container_new.tmpl")
170 | 
171 |     var out bytes.Buffer
172 |     tmpl.ExecuteTemplate(&out, "base", map[string]interface{}{
173 |         "Page":             "containers",
174 |         "Conf":             Conf,
175 |         "ImageJSON":        template.JS(imageJSON),
176 |         "HostResourceJSON": template.JS(hostResourceJSON),
177 |         "HostStorageJSON":  template.JS(hostStorageJSON),
178 |     })
179 | 
180 |     fmt.Fprint(w, out.String())
181 | }
182 | 
-------------------------------------------------------------------------------- /internal/handlers/handler_hosts.go: --------------------------------------------------------------------------------
1 | package handlers
2 | 
3 | import (
4 |     "bytes"
5 |     "fmt"
6 |     "log"
7 |     "net/http"
8 | 
9 |     "github.com/neophenix/lxdepot/internal/lxd"
10 | )
11 | 
12 | // HostListHandler handles requests for /hosts
13 | func HostListHandler(w http.ResponseWriter, r *http.Request) {
14 |     w.Header().Set("Content-Type", "text/html")
15 | 
16 |     hostResourceMap, err := lxd.GetHostResources("")
17 |     if err != nil {
18 |         log.Printf("Could not get host resource list %s\n", err.Error())
19 |     }
20 | 
21 |     // host -> container info mapping
22 |     hostContainerInfo := make(map[string]map[string]int)
23 |     // Grab container info without state to see installed vs running
24 |     containerInfo, err := lxd.GetContainers("", "", false)
25 |     if err != nil {
26 |         log.Printf("Could not get container list %s\n", err.Error())
27 |     }
28 | 
29 |     // Check the status of each container and increment the counter; if we haven't
30 |     // seen this host before make the map we need
31 |     for _, container := range containerInfo {
32 |         if hostContainerInfo[container.Host.Host] == nil {
33 |             hostContainerInfo[container.Host.Host] = make(map[string]int)
34 |         }
35 |         hostContainerInfo[container.Host.Host]["total"]++
36 | 
37 |         if container.Container.Status == "Running" {
38 |             hostContainerInfo[container.Host.Host]["running"]++
39 |         } else {
40 |             hostContainerInfo[container.Host.Host]["stopped"]++
41 |         }
42 |     }
43 | 
44 |     tmpl := readTemplate("host_list.tmpl")
45 | 
46 |     var out bytes.Buffer
47 |     tmpl.ExecuteTemplate(&out, "base", map[string]interface{}{
48 |         "Page":              "hosts",
49 |         "Conf":              Conf,
50 |         "HostResourceMap":   hostResourceMap,
51 |         "HostContainerInfo": hostContainerInfo,
52 |     })
53 | 
54 |     fmt.Fprint(w, out.String())
55 | }
56 | 
-------------------------------------------------------------------------------- /internal/handlers/handler_images.go: --------------------------------------------------------------------------------
1 | package handlers
2 | 
3 | import (
4 |     "bytes"
5 |     "fmt"
6 |     "log"
7 |     "net/http"
8 | 
9 |     "github.com/neophenix/lxdepot/internal/lxd"
10 | )
11 | 
12 | // ImageListHandler handles requests for /images
13 | func ImageListHandler(w http.ResponseWriter, r *http.Request) {
14 |     w.Header().Set("Content-Type", "text/html")
15 | 
16 |     images, err := lxd.GetImages("")
17 |     if err != nil {
18 |         log.Printf("Could not get image list %s\n", err.Error())
19 |     }
20 | 
21 |     tmpl := readTemplate("image_list.tmpl")
22 | 
23 |     var out bytes.Buffer
24 |     tmpl.ExecuteTemplate(&out, "base", map[string]interface{}{
25 |         "Page":   "images",
26 |         "Images": images,
27 |     })
28 | 
29 |     fmt.Fprint(w, out.String())
30 | }
31 | 
-------------------------------------------------------------------------------- /internal/handlers/handler_root.go: --------------------------------------------------------------------------------
1 | package handlers
2 | 
3 | import (
4 |     "log"
5 |     "net/http"
6 | )
7 | 
8 | // RootHandler handles requests for everything, and then compares the requested URL
9 | // to our array of routes; the first match wins and we call that handler.
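Route registration itself happens outside this file (presumably in cmd/lxdepot/lxdepot.go); a hypothetical example of wiring patterns to handlers with AddRoute from router.go below, remembering that the first match wins:
```
// More specific patterns should be registered before broader ones, since
// "^/container/" would also match /container/new.
handlers.AddRoute("^/container/new", handlers.NewContainerHandler)
handlers.AddRoute("^/container/", handlers.ContainerHandler)
handlers.AddRoute("^/containers", handlers.ContainerListHandler)
```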
Requests 10 | // for / are shown a container list, anything not found is 404'd 11 | func RootHandler(w http.ResponseWriter, r *http.Request) { 12 | log.Println(r.Method, r.URL.Path, r.RemoteAddr) 13 | handler := GetRouteHandler(r.URL.Path) 14 | if handler != nil { 15 | handler(w, r) 16 | return 17 | } 18 | 19 | // Special case if we go to just / 20 | if r.URL.Path == "/" { 21 | ContainerListHandler(w, r) 22 | return 23 | } 24 | 25 | FourOhFourHandler(w, r) 26 | } 27 | -------------------------------------------------------------------------------- /internal/handlers/handlers.go: -------------------------------------------------------------------------------- 1 | // Package handlers is where all the "normal" web handlers are defined 2 | package handlers 3 | 4 | import ( 5 | "github.com/neophenix/lxdepot/internal/config" 6 | ) 7 | 8 | // Conf is our main config 9 | var Conf *config.Config 10 | -------------------------------------------------------------------------------- /internal/handlers/router.go: -------------------------------------------------------------------------------- 1 | package handlers 2 | 3 | import ( 4 | "net/http" 5 | "regexp" 6 | ) 7 | 8 | // Route holds all our routing rules 9 | type Route struct { 10 | Regex *regexp.Regexp // a regex to compare the request path to 11 | Handler func(w http.ResponseWriter, r *http.Request) // a func pointer to call if the regex matches 12 | } 13 | 14 | // Routes is the array in the order we will attempt to match the route with the incoming url, first one wins 15 | var Routes []Route 16 | 17 | // AddRoute compiles the regex string and appends it to our route list with its handler func pointer 18 | func AddRoute(regex string, f func(w http.ResponseWriter, r *http.Request)) { 19 | Routes = append(Routes, Route{Regex: regexp.MustCompile(regex), Handler: f}) 20 | } 21 | 22 | // GetRouteHandler compares the path string to the route list and returns the handler pointer if found or nil 23 | func GetRouteHandler(path string) func(w http.ResponseWriter, r *http.Request) { 24 | for _, route := range Routes { 25 | if route.Regex.MatchString(path) { 26 | return route.Handler 27 | } 28 | } 29 | 30 | return nil 31 | } 32 | -------------------------------------------------------------------------------- /internal/handlers/templates.go: -------------------------------------------------------------------------------- 1 | package handlers 2 | 3 | import ( 4 | "html/template" 5 | "log" 6 | 7 | "github.com/neophenix/lxdepot/internal/utils" 8 | ) 9 | 10 | // WebRoot is the path to the web templates + static files 11 | var WebRoot string 12 | 13 | // CacheTemplates is the setting on whether to cache the template files or read from disk each time 14 | var CacheTemplates bool 15 | 16 | // template cache 17 | var templates = make(map[string]*template.Template) 18 | 19 | // readTemplate is used by the various handlers to read the template file off disk, or return 20 | // the template from cache if we already did that. -cache_templates=false can be passed on the 21 | // command line to always read off disk, useful for developing 22 | func readTemplate(filename string) *template.Template { 23 | if CacheTemplates { 24 | if tmpl, ok := templates[filename]; ok { 25 | return tmpl 26 | } 27 | } 28 | 29 | // Until I find this is bad, I'm just going to always pass these functions into the template to simplify code. 
30 | funcs := template.FuncMap{ 31 | "MakeBytesMoreHuman": utils.MakeBytesMoreHuman, 32 | "MakeIntBytesMoreHuman": utils.MakeIntBytesMoreHuman, 33 | } 34 | 35 | // web templates always have the base.tmpl that provides the overall layout, and then the requested template 36 | // provides all the content 37 | t, err := template.New(filename).Funcs(funcs).ParseFiles(WebRoot+"/templates/base.tmpl", WebRoot+"/templates/"+filename) 38 | if err != nil { 39 | log.Fatal("Could not open template: " + WebRoot + "/" + filename + " : " + err.Error()) 40 | } 41 | 42 | // drop the template in cache for later 43 | templates[filename] = t 44 | 45 | return t 46 | } 47 | -------------------------------------------------------------------------------- /internal/handlers/ws/handler_containerplaybook.go: -------------------------------------------------------------------------------- 1 | package ws 2 | 3 | import ( 4 | "strings" 5 | "time" 6 | 7 | "github.com/neophenix/lxdepot/internal/circularbuffer" 8 | "github.com/neophenix/lxdepot/internal/lxd" 9 | ) 10 | 11 | // ContainerPlaybookHandler handles requests to run various playbooks on the container, including 12 | // re-bootstrapping it if asked. Playbooks and bootstrap should be idempotent so no harm should come 13 | // from running these multiple times. 14 | func ContainerPlaybookHandler(buffer *circularbuffer.CircularBuffer[OutgoingMessage], msg IncomingMessage) { 15 | containerInfo, err := lxd.GetContainers(msg.Data["host"], msg.Data["name"], false) 16 | if err != nil { 17 | id := time.Now().UnixNano() 18 | if buffer != nil { 19 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed to get container info: " + err.Error(), Success: false}) 20 | } 21 | return 22 | } 23 | 24 | // Check first to make sure the container exists and we are allowed to manage it 25 | if len(containerInfo) > 0 { 26 | if !lxd.IsManageable(containerInfo[0]) { 27 | id := time.Now().UnixNano() 28 | if buffer != nil { 29 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "lock flag set, remote management denied", Success: false}) 30 | } 31 | return 32 | } 33 | } else { 34 | id := time.Now().UnixNano() 35 | if buffer != nil { 36 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "container does not exist", Success: false}) 37 | } 38 | return 39 | } 40 | 41 | os := strings.ToLower(containerInfo[0].Container.ExpandedConfig["image.os"] + containerInfo[0].Container.ExpandedConfig["image.release"]) 42 | // bootstrap is a special playbook in that it has its own section of the config. 
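For reference, a playbook or bootstrap step is a config.FileOrCommand; the field names below are the ones these handlers read, while the values themselves are purely hypothetical:
```
// A hypothetical "command" step as ContainerPlaybookHandler would consume it.
step := config.FileOrCommand{
    Type:           "command",
    Command:        []string{"systemctl", "restart", "sshd"},
    OkReturnValues: []float64{1}, // hypothetical: also treat exit code 1 as success
}
```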
43 | // If we are asked to do this again, just call the bootstrap "handler" in handler_createcontainer
44 |     if msg.Data["playbook"] == "bootstrap" {
45 |         BootstrapContainer(buffer, msg.Data["host"], msg.Data["name"])
46 |     } else if playbooks, ok := Conf.Playbooks[os]; ok {
47 |         if playbook, ok := playbooks[msg.Data["playbook"]]; ok {
48 |             // Once we are sure the OS for this image exists in our config and we have the requested playbook,
49 |             // run it in basically the same fashion we run a bootstrap
50 |             go func() {
51 |                 for _, step := range playbook {
52 |                     // depending on the type, call the appropriate helper
53 |                     if step.Type == "file" {
54 |                         err = containerCreateFile(buffer, msg.Data["host"], msg.Data["name"], step)
55 |                         if err != nil {
56 |                             return
57 |                         }
58 |                     } else if step.Type == "command" {
59 |                         err = containerExecCommand(buffer, msg.Data["host"], msg.Data["name"], step)
60 |                         if err != nil {
61 |                             return
62 |                         }
63 |                     }
64 |                 }
65 |             }()
66 |         }
67 |     }
68 | }
69 | 
-------------------------------------------------------------------------------- /internal/handlers/ws/handler_createcontainer.go: --------------------------------------------------------------------------------
1 | package ws
2 | 
3 | import (
4 |     "bytes"
5 |     "encoding/json"
6 |     "strings"
7 |     "text/template"
8 |     "time"
9 | 
10 |     "github.com/neophenix/lxdepot/internal/circularbuffer"
11 |     "github.com/neophenix/lxdepot/internal/dns"
12 |     "github.com/neophenix/lxdepot/internal/lxd"
13 | )
14 | 
15 | // CreateContainerHandler creates the container on our host, then if we are using a 3rd
16 | // party DNS gets an A record from there.
17 | // It then uploads the appropriate network config file to the container before starting it by calling setupContainerNetwork.
18 | // Finally, if any bootstrapping configuration is set, it performs that by calling BootstrapContainer.
19 | func CreateContainerHandler(buffer *circularbuffer.CircularBuffer[OutgoingMessage], msg IncomingMessage) {
20 |     // Create the container
21 |     // -------------------------
22 |     id := time.Now().UnixNano()
23 |     if buffer != nil {
24 |         buffer.Enqueue(OutgoingMessage{ID: id, Message: "Creating container", Success: true})
25 |     }
26 | 
27 |     // options will be whatever the user wants set, like container limits, priority, etc. It's
28 |     // called config in LXD land, but since we use config for our config I'm calling it options in here
29 |     var options map[string]string
30 |     err := json.Unmarshal([]byte(msg.Data["options"]), &options)
31 |     if err != nil {
32 |         if buffer != nil {
33 |             buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false})
34 |         }
35 |         return
36 |     }
37 | 
38 |     err = lxd.CreateContainer(msg.Data["host"], msg.Data["name"], msg.Data["image"], msg.Data["storagepool"], options)
39 |     if err != nil {
40 |         if buffer != nil {
41 |             buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false})
42 |         }
43 |         return
44 |     }
45 |     if buffer != nil {
46 |         buffer.Enqueue(OutgoingMessage{ID: id, Message: "done", Success: true})
47 |     }
48 |     // -------------------------
49 | 
50 |     // DNS. Previously we would fail here and continue, but that has been shown to lead to multiple containers
51 |     // being assigned the same IP, which it turns out is a bad idea. So now we fail and let the user clean up.
52 |     // -------------------------
53 |     if strings.ToLower(Conf.DNS.Provider) != "dhcp" {
54 |         id := time.Now().UnixNano()
55 |         if buffer != nil {
56 |             buffer.Enqueue(OutgoingMessage{ID: id, Message: "Creating DNS entry", Success: true})
57 |         }
58 | 
59 |         d := dns.New(Conf)
60 |         if d == nil {
61 |             if buffer != nil {
62 |                 buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed to create DNS object for provider: " + Conf.DNS.Provider, Success: false})
63 |             }
64 |             return
65 |         } else {
66 |             ip, err := d.GetARecord(msg.Data["name"], Conf.DNS.NetworkBlocks)
67 |             if err != nil {
68 |                 if buffer != nil {
69 |                     buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false})
70 |                 }
71 |                 return
72 |             } else {
73 |                 if buffer != nil {
74 |                     buffer.Enqueue(OutgoingMessage{ID: id, Message: ip, Success: true})
75 |                 }
76 | 
77 |                 // upload our network config
78 |                 setupContainerNetwork(buffer, msg.Data["host"], msg.Data["name"], ip)
79 |             }
80 |         }
81 |     }
82 |     // -------------------------
83 | 
84 |     // Start the container
85 |     err = StartContainerHandler(buffer, msg)
86 |     if err != nil {
87 |         // The other handler would have taken care of the message
88 |         return
89 |     }
90 | 
91 |     id = time.Now().UnixNano()
92 |     if buffer != nil {
93 |         buffer.Enqueue(OutgoingMessage{ID: id, Message: "Waiting for networking", Success: true})
94 |     }
95 | 
96 |     // We will try 10 times to see if the networking comes up by asking LXD for the container state
97 |     // and checking to see if we found an ipv4 address
98 |     networkUp := false
99 |     i := 0
100 |     for !networkUp && i < 10 {
101 |         // this isn't exactly as efficient as it could be, but I don't feel like making a new call just for this at the moment
102 |         containerInfo, err := lxd.GetContainers(msg.Data["host"], msg.Data["name"], true)
103 |         if err != nil {
104 |             if buffer != nil {
105 |                 buffer.Enqueue(OutgoingMessage{ID: id, Message: err.Error(), Success: false})
106 |             }
107 |             return
108 |         }
109 |         // look through the container state for an address in the inet family; right now we aren't worried about comparing
110 |         // this address to what we got from DNS if we are using that, maybe in the future if it becomes an issue
111 |         for iface, info := range containerInfo[0].State.Network {
112 |             if iface != "lo" {
113 |                 for _, addr := range info.Addresses {
114 |                     if addr.Family == "inet" && addr.Address != "" {
115 |                         networkUp = true
116 |                     }
117 |                 }
118 |             }
119 |         }
120 |         i++
121 |         time.Sleep(1 * time.Second)
122 |     }
123 |     if !networkUp {
124 |         // we will bail if we didn't get an address, since if we plan on bootstrapping we won't get far
125 |         if buffer != nil {
126 |             buffer.Enqueue(OutgoingMessage{ID: id, Message: "no ip detected", Success: false})
127 |         }
128 |         return
129 |     }
130 | 
131 |     if buffer != nil {
132 |         buffer.Enqueue(OutgoingMessage{ID: id, Message: "network is up", Success: true})
133 |     }
134 | 
135 |     BootstrapContainer(buffer, msg.Data["host"], msg.Data["name"])
136 | }
137 | 
138 | // setupContainerNetwork looks at the OS of a container and then looks up any network template in our config.
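As a concrete example, a hypothetical networking template of the kind setupContainerNetwork renders; {{.IP}} is the only value the code passes in:
```
// Hypothetical RHEL-style ifcfg template from the networking section of the config;
// setupContainerNetwork renders it with map[string]interface{}{"IP": ip} and uploads
// the result to the entry's remote path inside the container.
const exampleNetworkTemplate = `DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR={{.IP}}
`
```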
139 | // It then parses that template through text/template passing the IP and uploads it to the container 140 | func setupContainerNetwork(buffer *circularbuffer.CircularBuffer[OutgoingMessage], host string, name string, ip string) { 141 | id := time.Now().UnixNano() 142 | if buffer != nil { 143 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "Configuring container networking", Success: true}) 144 | } 145 | 146 | // Even though we just created it, lxd doesn't give us a lot of info back about the image, etc. 147 | // hell the GetImages call doesn't give us back a lot either. So we are going to pull the container state 148 | // to be able to figure out what OS we are on, so we can then use the right network setup 149 | containerInfo, err := lxd.GetContainers(host, name, true) 150 | if err != nil { 151 | if buffer != nil { 152 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false}) 153 | } 154 | return 155 | } 156 | 157 | // Given the OS reported by LXD, check to see if we have any networking config defined, and if so loop 158 | // over that array of templates and upload each one 159 | os := strings.ToLower(containerInfo[0].Container.ExpandedConfig["image.os"] + containerInfo[0].Container.ExpandedConfig["image.release"]) 160 | if networking, ok := Conf.Networking[os]; ok { 161 | for _, file := range networking { 162 | var contents bytes.Buffer 163 | tmpl, err := template.New(file.RemotePath).Parse(file.Template) 164 | if err != nil { 165 | if buffer != nil { 166 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false}) 167 | } 168 | return 169 | } 170 | tmpl.Execute(&contents, map[string]interface{}{ 171 | "IP": ip, 172 | }) 173 | 174 | err = lxd.CreateFile(host, name, file.RemotePath, 0644, contents.String()) 175 | if err != nil { 176 | if buffer != nil { 177 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false}) 178 | } 179 | return 180 | } 181 | if buffer != nil { 182 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "done", Success: true}) 183 | } 184 | } 185 | } 186 | } 187 | -------------------------------------------------------------------------------- /internal/handlers/ws/handler_deletecontainer.go: -------------------------------------------------------------------------------- 1 | package ws 2 | 3 | import ( 4 | "strings" 5 | "time" 6 | 7 | "github.com/neophenix/lxdepot/internal/circularbuffer" 8 | "github.com/neophenix/lxdepot/internal/dns" 9 | "github.com/neophenix/lxdepot/internal/lxd" 10 | ) 11 | 12 | // DeleteContainerHandler first stops a running container (there is no force like the lxc command line), 13 | // then deletes any DNS entry for it from our 3rd party, and then deletes the container. 
14 | func DeleteContainerHandler(buffer *circularbuffer.CircularBuffer[OutgoingMessage], msg IncomingMessage) {
15 |     // Stop the container
16 |     err := StopContainerHandler(buffer, msg)
17 |     if err != nil {
18 |         // The other handler would have taken care of the message
19 |         return
20 |     }
21 | 
22 |     // Delete the container. This moved to before DNS since, if we fail to delete the container after
23 |     // we remove DNS and someone makes a new container, we could end up with multiple containers
24 |     // on the network with the same IP, and that's more annoying than the alternatives
25 |     id := time.Now().UnixNano()
26 |     if buffer != nil {
27 |         buffer.Enqueue(OutgoingMessage{ID: id, Message: "Deleting container", Success: true})
28 |     }
29 | 
30 |     err = lxd.DeleteContainer(msg.Data["host"], msg.Data["name"])
31 |     if err != nil {
32 |         if buffer != nil {
33 |             buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false})
34 |         }
35 |         return
36 |     }
37 | 
38 |     // DNS, if we aren't using DHCP
39 |     if strings.ToLower(Conf.DNS.Provider) != "dhcp" {
40 |         id := time.Now().UnixNano()
41 |         if buffer != nil {
42 |             buffer.Enqueue(OutgoingMessage{ID: id, Message: "Deleting DNS entry", Success: true})
43 |         }
44 | 
45 |         d := dns.New(Conf)
46 |         if d == nil {
47 |             if buffer != nil {
48 |                 buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed to create DNS object for provider: " + Conf.DNS.Provider, Success: false})
49 |             }
50 |         } else {
51 |             err := d.RemoveARecord(msg.Data["name"])
52 |             if err != nil {
53 |                 if buffer != nil {
54 |                     buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false})
55 |                 }
56 |             } else {
57 |                 if buffer != nil {
58 |                     buffer.Enqueue(OutgoingMessage{ID: id, Message: "done", Success: true})
59 |                 }
60 |             }
61 |         }
62 |     }
63 | 
64 |     if buffer != nil {
65 |         buffer.Enqueue(OutgoingMessage{ID: id, Message: "done", Success: true, Redirect: "/containers"})
66 |         buffer.Enqueue(OutgoingMessage{Redirect: "/containers"})
67 |     }
68 | }
69 | 
-------------------------------------------------------------------------------- /internal/handlers/ws/handler_movecontainer.go: --------------------------------------------------------------------------------
1 | package ws
2 | 
3 | import (
4 |     "time"
5 | 
6 |     "github.com/neophenix/lxdepot/internal/circularbuffer"
7 |     "github.com/neophenix/lxdepot/internal/lxd"
8 | )
9 | 
10 | // MoveContainerHandler wraps lxd.MoveContainer and reports any errors it returns
11 | func MoveContainerHandler(buffer *circularbuffer.CircularBuffer[OutgoingMessage], msg IncomingMessage) {
12 |     id := time.Now().UnixNano()
13 |     if buffer != nil {
14 |         buffer.Enqueue(OutgoingMessage{ID: id, Message: "Moving container", Success: true})
15 |     }
16 | 
17 |     err := lxd.MoveContainer(msg.Data["host"], msg.Data["dst_host"], msg.Data["name"])
18 |     if err != nil {
19 |         if buffer != nil {
20 |             buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false})
21 |         }
22 |         return
23 |     }
24 | 
25 |     if buffer != nil {
26 |         buffer.Enqueue(OutgoingMessage{ID: id, Message: "done", Success: true})
27 |         buffer.Enqueue(OutgoingMessage{Redirect: "/container/" + msg.Data["host"] + ":" + msg.Data["name"]})
28 |     }
29 | }
30 | 
-------------------------------------------------------------------------------- /internal/handlers/ws/handler_startcontainer.go: --------------------------------------------------------------------------------
1 | package ws
2 | 
3 | import (
4 |     "time"
5 | 
6 |     "github.com/neophenix/lxdepot/internal/circularbuffer"
7 | 
"github.com/neophenix/lxdepot/internal/lxd" 8 | ) 9 | 10 | // StartContainerHandler starts a stopped container 11 | func StartContainerHandler(buffer *circularbuffer.CircularBuffer[OutgoingMessage], msg IncomingMessage) error { 12 | // Start the container 13 | id := time.Now().UnixNano() 14 | if buffer != nil { 15 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "Starting container", Success: true}) 16 | } 17 | 18 | err := lxd.StartContainer(msg.Data["host"], msg.Data["name"]) 19 | if err != nil { 20 | if buffer != nil { 21 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false}) 22 | } 23 | return err 24 | } 25 | 26 | if buffer != nil { 27 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "done", Success: true}) 28 | buffer.Enqueue(OutgoingMessage{Redirect: "/container/" + msg.Data["host"] + ":" + msg.Data["name"]}) 29 | } 30 | 31 | return nil 32 | } 33 | -------------------------------------------------------------------------------- /internal/handlers/ws/handler_stopcontainer.go: -------------------------------------------------------------------------------- 1 | package ws 2 | 3 | import ( 4 | "time" 5 | 6 | "github.com/neophenix/lxdepot/internal/circularbuffer" 7 | "github.com/neophenix/lxdepot/internal/lxd" 8 | ) 9 | 10 | // StopContainerHandler stops a running container 11 | func StopContainerHandler(buffer *circularbuffer.CircularBuffer[OutgoingMessage], msg IncomingMessage) error { 12 | // Stop the container 13 | id := time.Now().UnixNano() 14 | if buffer != nil { 15 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "Stopping container", Success: true}) 16 | } 17 | 18 | err := lxd.StopContainer(msg.Data["host"], msg.Data["name"]) 19 | if err != nil { 20 | if buffer != nil { 21 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false}) 22 | } 23 | return err 24 | } 25 | 26 | if buffer != nil { 27 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "done", Success: true}) 28 | buffer.Enqueue(OutgoingMessage{Redirect: "/container/" + msg.Data["host"] + ":" + msg.Data["name"]}) 29 | } 30 | 31 | return nil 32 | } 33 | -------------------------------------------------------------------------------- /internal/handlers/ws/ws.go: -------------------------------------------------------------------------------- 1 | // Package ws is for our websocket handlers 2 | // All the websocket handlers send 2 messages to the UI. 3 | // The first is what we are attempting, running a command, etc. The next is the status or output of that item 4 | package ws 5 | 6 | import ( 7 | "encoding/json" 8 | "errors" 9 | "fmt" 10 | "log" 11 | "net/http" 12 | "os" 13 | "strings" 14 | "sync" 15 | "time" 16 | 17 | "github.com/gorilla/websocket" 18 | "github.com/neophenix/lxdepot/internal/circularbuffer" 19 | "github.com/neophenix/lxdepot/internal/config" 20 | "github.com/neophenix/lxdepot/internal/lxd" 21 | ) 22 | 23 | // IncomingMessage is for messages from the client to us 24 | type IncomingMessage struct { 25 | Action string `json:"action"` // what type of request: create, start, etc. 
26 | BrowserID string `json:"id"` // ID of our users browser 27 | Data map[string]string `json:"data"` // in the UI this is a single level JSON object so requests can have varying options 28 | } 29 | 30 | // OutgoingMessage is from us to the UI 31 | type OutgoingMessage struct { 32 | ID int64 // ID to keep messages and their status together 33 | Message string // message to show the user 34 | Success bool // success is used to give a visual hint to the user how the command went (true = green, false = red) 35 | Redirect string // If we want to suggest a redirect to another page, like back to /containers after we create a new one 36 | } 37 | 38 | // MessageBuffer will house our outgoing messages so clients can navigate around and get updates 39 | var MessageBuffer = map[string]*circularbuffer.CircularBuffer[OutgoingMessage]{} 40 | 41 | // mutex for our message buffer map 42 | var mutex = &sync.RWMutex{} 43 | 44 | // I need to see if I still need this, I think it was for when I was testing websockets using static assets served 45 | // elsewhere, I think it can be removed 46 | var upgrader = websocket.Upgrader{ 47 | CheckOrigin: func(r *http.Request) bool { return true }, 48 | } 49 | 50 | // Conf is our main config 51 | var Conf *config.Config 52 | 53 | // Handler is our overall websocket router, it unmarshals the request and then sends it to 54 | // the appropriate handler 55 | func Handler(w http.ResponseWriter, r *http.Request) { 56 | // upgrade to a websocket 57 | conn, err := upgrader.Upgrade(w, r, nil) 58 | if err != nil { 59 | log.Print("upgrade:", err) 60 | return 61 | } 62 | defer conn.Close() 63 | for { 64 | // read out message and unmarshal it, log out what it was for debugging. 65 | _, encmsg, err := conn.ReadMessage() 66 | if err != nil { 67 | log.Println("read:", err) 68 | break 69 | } 70 | log.Printf("ws recv: %s\n", encmsg) 71 | var msg IncomingMessage 72 | err = json.Unmarshal(encmsg, &msg) 73 | if err != nil { 74 | log.Println("unmarshal:", err) 75 | break 76 | } 77 | 78 | var buffer *circularbuffer.CircularBuffer[OutgoingMessage] 79 | if msg.BrowserID != "" && msg.BrowserID != "none" { 80 | // make sure we have a buffer setup for this id 81 | var ok bool 82 | mutex.Lock() 83 | if buffer, ok = MessageBuffer[msg.BrowserID]; !ok { 84 | buffer = &circularbuffer.CircularBuffer[OutgoingMessage]{} 85 | MessageBuffer[msg.BrowserID] = buffer 86 | } 87 | mutex.Unlock() 88 | } 89 | 90 | // any action is going to kickstart consuming messages in the background 91 | go func() { 92 | consumeMessages(conn, buffer) 93 | }() 94 | 95 | // Action tells us what we want to do, so this is a pretty simple router for the various requests 96 | // Each handler should be in its own handler_* file in the ws package 97 | switch msg.Action { 98 | case "start": 99 | StartContainerHandler(buffer, msg) 100 | case "stop": 101 | StopContainerHandler(buffer, msg) 102 | case "create": 103 | CreateContainerHandler(buffer, msg) 104 | case "delete": 105 | DeleteContainerHandler(buffer, msg) 106 | case "move": 107 | MoveContainerHandler(buffer, msg) 108 | case "playbook": 109 | ContainerPlaybookHandler(buffer, msg) 110 | case "consume": 111 | // a noop since we always kickstart consuming when we get a message 112 | default: 113 | if buffer != nil { 114 | id := time.Now().UnixNano() 115 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "Request not understood", Success: false}) 116 | } 117 | } 118 | } 119 | } 120 | 121 | func consumeMessages(conn *websocket.Conn, buffer 
*circularbuffer.CircularBuffer[OutgoingMessage]) { 122 | if buffer != nil { 123 | pingWait := 0 124 | for { 125 | msg, ok := buffer.Dequeue() 126 | if ok { 127 | data, err := json.Marshal(msg) 128 | if err == nil { 129 | // outgoing messages should always be of a TextMessage type 130 | err := conn.WriteMessage(websocket.TextMessage, data) 131 | if err != nil { 132 | // we lost the message, probably because the connection went away, so we will stop consuming 133 | break 134 | } 135 | } 136 | } else { 137 | pingWait++ 138 | // 4 since we sleep for 250ms then the number of seconds we want to wait 139 | if pingWait == 4*10 { 140 | err := conn.WriteMessage(websocket.PingMessage, nil) 141 | if err != nil { 142 | // connection likely broken here, stop consuming 143 | break 144 | } 145 | pingWait = 0 146 | } 147 | } 148 | // if we have nothing to consume, wait some amount of time, 1/4 second seems like a good start 149 | time.Sleep(250 * time.Millisecond) 150 | } 151 | } 152 | } 153 | 154 | // BootstrapContainer loops over all the FileOrCommand objects in the bootstrap section of the config 155 | // and performs each item sequentially 156 | func BootstrapContainer(buffer *circularbuffer.CircularBuffer[OutgoingMessage], host string, name string) { 157 | id := time.Now().UnixNano() 158 | if buffer != nil { 159 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "Getting container state", Success: true}) 160 | } 161 | 162 | // Get the container state again, should probably just grab this once but for now lets be expensive 163 | containerInfo, err := lxd.GetContainers(host, name, true) 164 | if err != nil { 165 | if buffer != nil { 166 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false}) 167 | } 168 | return 169 | } 170 | if buffer != nil { 171 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "done", Success: true}) 172 | } 173 | 174 | // if we have a bootstrap section for this OS, run it 175 | os := strings.ToLower(containerInfo[0].Container.ExpandedConfig["image.os"] + containerInfo[0].Container.ExpandedConfig["image.release"]) 176 | if bootstrap, ok := Conf.Bootstrap[os]; ok { 177 | go func() { 178 | for _, step := range bootstrap { 179 | // depending on the type, call the appropriate helper 180 | if step.Type == "file" { 181 | err = containerCreateFile(buffer, host, name, step) 182 | if err != nil { 183 | return 184 | } 185 | } else if step.Type == "command" { 186 | err = containerExecCommand(buffer, host, name, step) 187 | if err != nil { 188 | return 189 | } 190 | } 191 | } 192 | if buffer != nil { 193 | buffer.Enqueue(OutgoingMessage{Redirect: "/container/" + host + ":" + name}) 194 | } 195 | }() 196 | } 197 | } 198 | 199 | // containerCreateFile operates on a Type = file bootstrap / playbook step. 200 | // If there is a local_path, it reads the contents of that file from disk. 
201 | // The contents are then sent to lxd.CreateFile with the path on the container and the permissions to "do the right thing" 202 | func containerCreateFile(buffer *circularbuffer.CircularBuffer[OutgoingMessage], host string, name string, info config.FileOrCommand) error { 203 | id := time.Now().UnixNano() 204 | if buffer != nil { 205 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "Creating " + info.RemotePath, Success: true}) 206 | } 207 | 208 | // log what we are doing so anyone looking at the server will know 209 | log.Printf("creating file on container %v: %v\n", name, info.RemotePath) 210 | 211 | var contents []byte 212 | var err error 213 | if info.LocalPath != "" { 214 | contents, err = os.ReadFile(info.LocalPath) 215 | if err != nil { 216 | if buffer != nil { 217 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false}) 218 | } 219 | return err 220 | } 221 | } 222 | 223 | err = lxd.CreateFile(host, name, info.RemotePath, info.Perms, string(contents)) 224 | if err != nil { 225 | if buffer != nil { 226 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false}) 227 | } 228 | return err 229 | } 230 | 231 | if buffer != nil { 232 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "done", Success: true}) 233 | } 234 | return nil 235 | } 236 | 237 | // containerExecCommand operates on a Type = command bootstrap / playbook step. 238 | // This is really just a wrapper around lxd.ExecCommand 239 | func containerExecCommand(buffer *circularbuffer.CircularBuffer[OutgoingMessage], host string, name string, info config.FileOrCommand) error { 240 | id := time.Now().UnixNano() 241 | if buffer != nil { 242 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "Executing " + strings.Join(info.Command, " "), Success: true}) 243 | } 244 | 245 | // log what we are doing so anyone looking at the server will know 246 | log.Printf("running command on container %v: %v\n", name, info.Command) 247 | 248 | success := false 249 | attempt := 1 250 | var rv float64 251 | var err error 252 | for !success && attempt <= 2 { 253 | rv, err = lxd.ExecCommand(host, name, info.Command) 254 | if err != nil { 255 | if buffer != nil { 256 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "failed: " + err.Error(), Success: false}) 257 | } 258 | return err 259 | } 260 | 261 | // check our return value for real ok (0) or acceptable ok (info.OkReturnValues) 262 | if rv == 0 { 263 | success = true 264 | } else { 265 | for _, okrv := range info.OkReturnValues { 266 | if rv == okrv { 267 | success = true 268 | } 269 | } 270 | } 271 | 272 | attempt++ 273 | } 274 | 275 | if !success { 276 | if buffer != nil { 277 | buffer.Enqueue(OutgoingMessage{ID: id, Message: fmt.Sprintf("failed with return value: %v", rv), Success: false}) 278 | } 279 | return errors.New("command failed") 280 | } 281 | 282 | if buffer != nil { 283 | buffer.Enqueue(OutgoingMessage{ID: id, Message: "done", Success: true}) 284 | } 285 | return nil 286 | } 287 |
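For reference, a sketch of what a pair of bootstrap steps might look like as values. The struct below is a stand-in mirroring only the config.FileOrCommand fields the two helpers above actually use (the real definition lives in internal/config), and the paths and command are hypothetical:

```
package main

import "fmt"

// fileOrCommand stands in for config.FileOrCommand; field names are taken
// from their use in containerCreateFile / containerExecCommand above.
type fileOrCommand struct {
	Type           string
	RemotePath     string
	LocalPath      string
	Perms          int
	Command        []string
	OkReturnValues []float64
}

func main() {
	// Two steps like the ones BootstrapContainer iterates over: push an
	// authorized_keys file, then run a command that may legitimately
	// exit 1 (mirroring the OkReturnValues check above).
	steps := []fileOrCommand{
		{Type: "file", RemotePath: "/root/.ssh/authorized_keys", LocalPath: "bootstrap/authorized_keys", Perms: 0600},
		{Type: "command", Command: []string{"yum", "install", "-y", "vim"}, OkReturnValues: []float64{1}},
	}
	for _, step := range steps {
		fmt.Println(step.Type, step.RemotePath, step.Command)
	}
}
```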
288 | // ManageBuffers starts a background goroutine to periodically check our buffers and remove any that are old. 289 | func ManageBuffers() { 290 | ticker := time.NewTicker(24 * time.Hour) 291 | // normally we would have a channel to indicate we are done, but this will run until the main process exits 292 | go func() { 293 | for { 294 | select { 295 | case <-ticker.C: 296 | mutex.Lock() 297 | for id, buffer := range MessageBuffer { 298 | if !buffer.HasRecentAccess() { 299 | delete(MessageBuffer, id) 300 | } 301 | } 302 | mutex.Unlock() 303 | } 304 | } 305 | }() 306 | } 307 | -------------------------------------------------------------------------------- /internal/lxd/lxd.go: -------------------------------------------------------------------------------- 1 | // Package lxd is our wrapper to the official lxd client 2 | package lxd 3 | 4 | import ( 5 | "errors" 6 | "fmt" 7 | "io" 8 | "log" 9 | "math" 10 | "os" 11 | "strings" 12 | "time" 13 | 14 | lxd "github.com/lxc/lxd/client" 15 | "github.com/lxc/lxd/shared/api" 16 | "github.com/neophenix/lxdepot/internal/config" 17 | ) 18 | 19 | // Conf is our main config 20 | var Conf *config.Config 21 | 22 | // cache of connections to our LXD servers 23 | var lxdConnections = make(map[string]lxd.ContainerServer) 24 | 25 | // ContainerInfo is a conversion / grouping of useful container information as returned from the lxd client 26 | type ContainerInfo struct { 27 | Host *config.LXDhost // Host details 28 | Container api.Container // Container details returned from lxd.GetContainers 29 | State *api.ContainerState // Container state from lxd.GetContainerState 30 | Usage map[string]float64 // place to store usage conversions, like CPU usage 31 | } 32 | 33 | // ImageInfo like above is a grouping of useful image information for the frontend 34 | type ImageInfo struct { 35 | Host *config.LXDhost // Host details 36 | Aliases []api.ImageAlias // list of aliases this image goes by 37 | Architecture string // x86_64, etc 38 | Fingerprint string // fingerprint hash of the image for comparison 39 | } 40 | 41 | // HostResourceInfo is a group of Host and Resources as returned by lxd 42 | type HostResourceInfo struct { 43 | Host *config.LXDhost 44 | Resources *api.Resources 45 | } 46 | 47 | // DiscardCloser is a WriteCloser that just discards data. When we exec commands on a container 48 | // stdout, etc. need some place to go, but at the moment we don't care about the data.
49 | type DiscardCloser struct{} 50 | 51 | // Write just sends its data to the io.Discard object 52 | func (DiscardCloser) Write(b []byte) (int, error) { 53 | return io.Discard.Write(b) 54 | } 55 | 56 | // Close does nothing and is there just to satisfy the WriteCloser interface 57 | func (DiscardCloser) Close() error { 58 | return nil 59 | } 60 | 61 | // GetContainers asks for a list of containers from each LXD host, then optionally calls GetContainerState 62 | // on each container to populate state information (IP, CPU / Memory / Disk usage, etc) 63 | func GetContainers(host string, name string, getState bool) ([]ContainerInfo, error) { 64 | var containerInfo []ContainerInfo 65 | 66 | // Always try to loop over the config array of hosts so we maintain the same ordering 67 | for _, lxdh := range Conf.LXDhosts { 68 | if host == "" || lxdh.Host == host { 69 | conn, err := getConnection(lxdh.Host) 70 | if err != nil { 71 | log.Print("Connection error to " + lxdh.Host + " : " + err.Error()) 72 | continue 73 | } 74 | 75 | // annoyingly this doesn't return all the state information we want, so we just get a list of containers 76 | containers, err := conn.GetContainers() 77 | if err != nil { 78 | return containerInfo, err 79 | } 80 | 81 | // Take the list of containers we got back and put them into our array to return, this at least ensures 82 | // the hosts stay in the order they were specified in the config, which is something we want to be 83 | // consistent with 84 | for _, container := range containers { 85 | if name == "" || container.Name == name { 86 | // Prepopulate a blank state in case we can't fetch it later 87 | state := &api.ContainerState{} 88 | tmp := ContainerInfo{ 89 | Host: lxdh, 90 | Container: container, 91 | State: state, 92 | Usage: make(map[string]float64), 93 | } 94 | containerInfo = append(containerInfo, tmp) 95 | } 96 | } 97 | } 98 | } 99 | 100 | // If we want to fetch state, that's more expensive as it's a new call out for every container. We loop 101 | // over our newly built array and make each call in a goroutine so the fetches at least run 102 | // concurrently instead of serially 103 | if getState { 104 | done := make(chan int, len(containerInfo)) // buffered so late goroutines don't block forever if we stop waiting 105 | start := time.Now() 106 | 107 | for idx, info := range containerInfo { 108 | go func(info ContainerInfo, idx int) { 109 | state, err := GetContainerState(info.Host.Host, info.Container.Name) 110 | if err != nil { 111 | log.Printf("Could not get container state from %v for %v", info.Host.Host, info.Container.Name) 112 | return 113 | } 114 | 115 | // Drop the state in our array and calculate the cpu usage so we don't have to muck with that later, still not sure it's right 116 | containerInfo[idx].State = state 117 | containerInfo[idx].Usage["cpu"] = (float64(state.CPU.Usage) / 1000000000 / math.Abs(time.Now().Sub(info.Container.LastUsedAt).Seconds())) * 100 118 | done <- idx 119 | }(info, idx) 120 | } 121 | 122 | // going to allow the fetches 10s to complete, or stop early once our completed count 123 | // is >= the total we have 124 | total := len(containerInfo) 125 | completed := 0 126 | now := time.Now() 127 | for completed < total && now.Sub(start).Seconds() < 10 { 128 | select { 129 | case <-done: 130 | completed++ 131 | default: 132 | // nothing ready yet; note a break here would only exit the select, the loop condition does the work 133 | } 134 | now = time.Now() 135 | } 136 | } 137 | 138 | return containerInfo, nil 139 | } 140 |
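As a worked example of the Usage["cpu"] calculation above (cumulative CPU nanoseconds divided by wall-clock seconds since the container last booted, scaled to a percentage):

```
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// The same arithmetic GetContainers uses for Usage["cpu"]:
	// CPU.Usage is cumulative nanoseconds of CPU time, so divide by 1e9
	// for seconds, then by the seconds since LastUsedAt, and scale to a
	// percentage.
	cpuNanos := int64(120 * 1e9)                 // 120s of CPU time...
	lastUsedAt := time.Now().Add(-1 * time.Hour) // ...accrued over the last hour
	pct := (float64(cpuNanos) / 1000000000 / math.Abs(time.Since(lastUsedAt).Seconds())) * 100
	fmt.Printf("%.02f%%\n", pct) // prints ~3.33%
}
```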
141 | // GetContainerState calls out to our LXD host to get the state of the container. State has data like network info, 142 | // memory usage, cpu seconds in use, running processes, etc. 143 | func GetContainerState(host string, name string) (*api.ContainerState, error) { 144 | conn, err := getConnection(host) 145 | if err != nil { 146 | return nil, err 147 | } 148 | 149 | state, _, err := conn.GetContainerState(name) 150 | if err != nil { 151 | return nil, err 152 | } 153 | 154 | return state, nil 155 | } 156 | 157 | // GetImages calls each LXD host to get a list of images available on each 158 | func GetImages(host string) ([]ImageInfo, error) { 159 | var images []ImageInfo 160 | 161 | for _, lxdh := range Conf.LXDhosts { 162 | if host == "" || lxdh.Host == host { 163 | conn, err := getConnection(lxdh.Host) 164 | if err != nil { 165 | log.Print("Connection error to " + lxdh.Host + " : " + err.Error()) 166 | continue 167 | } 168 | 169 | imgs, err := conn.GetImages() 170 | if err != nil { 171 | return images, err 172 | } 173 | 174 | for _, i := range imgs { 175 | tmp := ImageInfo{ 176 | Host: lxdh, 177 | Aliases: i.Aliases, 178 | Architecture: i.Architecture, 179 | Fingerprint: i.Fingerprint, 180 | } 181 | 182 | images = append(images, tmp) 183 | } 184 | } 185 | } 186 | 187 | return images, nil 188 | } 189 | 190 | // CreateContainer creates a container from the given image, with the provided name on the LXD host 191 | func CreateContainer(host string, name string, image string, storagepool string, options map[string]string) error { 192 | conn, err := getConnection(host) 193 | if err != nil { 194 | return err 195 | } 196 | 197 | // We are going to grab a list of containers first to make sure someone isn't trying to create a duplicate name. 198 | // Look at every host as we might want to move the container later, and you can't do that if there is already that 199 | // name on a host, so our list of managed hosts is like a fake cluster 200 | containerInfo, err := GetContainers("", "", false) 201 | if err != nil { 202 | return err 203 | } 204 | 205 | if len(containerInfo) > 0 { 206 | for _, c := range containerInfo { 207 | if c.Container.Name == name { 208 | return errors.New("container already exists on " + c.Host.Name) 209 | } 210 | } 211 | } 212 | 213 | // Normally I wouldn't want to just trust the frontend, but this is an internal thing so whatever 214 | put := api.ContainerPut{ 215 | Config: options, 216 | } 217 | 218 | if storagepool != "" && storagepool != "default" { 219 | // Storage pools are set via devices 220 | store := make(map[string]string) 221 | store["path"] = "/" 222 | store["pool"] = storagepool 223 | store["type"] = "disk" 224 | 225 | put.Devices = make(map[string]map[string]string) 226 | put.Devices["root"] = store 227 | } 228 | 229 | // Take the ContainerPut and initialize our Post; it's inlined so just toss all the values in 230 | req := api.ContainersPost{ 231 | ContainerPut: put, 232 | Name: name, 233 | Source: api.ContainerSource{ 234 | Type: "image", 235 | Alias: image, 236 | }, 237 | InstanceType: "", // we just use the default which should be Persistent 238 | } 239 | 240 | // schedule the create with LXD, this happens in the background 241 | op, err := conn.CreateContainer(req) 242 | if err != nil { 243 | return err 244 | } 245 | 246 | // wait for the create to finish 247 | err = op.Wait() 248 | if err != nil { 249 | return err 250 | } 251 | 252 | return nil 253 | } 254 | 255 | // StartContainer starts a stopped container 256 | func StartContainer(host string, name string) error { 257 | conn, err := getConnection(host) 258 | if err != nil { 259 |
return err 260 | } 261 | 262 | // Grab container info to make sure our container isn't already running 263 | containerInfo, err := GetContainers(host, name, false) 264 | if err != nil { 265 | return err 266 | } 267 | 268 | if len(containerInfo) > 0 { 269 | for _, c := range containerInfo { 270 | if c.Container.Name == name && c.Container.Status == "Running" { 271 | // our container is already running so bail 272 | return nil 273 | } 274 | 275 | // don't allow remote management of anything we have locked, check that we have a LastUsedAt > 0 276 | // which would mean that this container has booted at some point in the past. If it is 0 then 277 | // we just created it, so we want it to boot for the first time 278 | if !IsManageable(c) && c.Container.LastUsedAt.Unix() > 0 { 279 | return errors.New("lock flag set, remote management denied") 280 | } 281 | } 282 | } else { 283 | return errors.New("container does not exist") 284 | } 285 | 286 | reqState := api.ContainerStatePut{ 287 | Action: "start", 288 | Timeout: -1, 289 | } 290 | 291 | op, err := conn.UpdateContainerState(name, reqState, "") 292 | if err != nil { 293 | return err 294 | } 295 | 296 | // Like before the update is a background process, wait for it to finish 297 | err = op.Wait() 298 | if err != nil { 299 | return err 300 | } 301 | 302 | return nil 303 | } 304 | 305 | // StopContainer stops a running container 306 | func StopContainer(host string, name string) error { 307 | conn, err := getConnection(host) 308 | if err != nil { 309 | return err 310 | } 311 | 312 | // Grab container info to make sure our container is actually running 313 | containerInfo, err := GetContainers(host, name, false) 314 | if err != nil { 315 | return err 316 | } 317 | 318 | if len(containerInfo) > 0 { 319 | for _, c := range containerInfo { 320 | if c.Container.Name == name && c.Container.Status == "Stopped" { 321 | // our container is already stopped so bail 322 | return nil 323 | } 324 | 325 | // don't allow remote management of anything we have locked 326 | if !IsManageable(c) { 327 | return errors.New("lock flag set, remote management denied") 328 | } 329 | } 330 | } else { 331 | return errors.New("container does not exist") 332 | } 333 | 334 | reqState := api.ContainerStatePut{ 335 | Action: "stop", 336 | Timeout: -1, 337 | } 338 | 339 | op, err := conn.UpdateContainerState(name, reqState, "") 340 | if err != nil { 341 | return err 342 | } 343 | 344 | // Like before the update is a background process, wait for it to finish 345 | err = op.Wait() 346 | if err != nil { 347 | return err 348 | } 349 | 350 | return nil 351 | } 352 | 353 | // DeleteContainer removes a container from a host 354 | func DeleteContainer(host string, name string) error { 355 | conn, err := getConnection(host) 356 | if err != nil { 357 | return err 358 | } 359 | 360 | // Get container list to make sure we actually have a container with this name 361 | containerInfo, err := GetContainers(host, name, false) 362 | if err != nil { 363 | return err 364 | } 365 | 366 | if len(containerInfo) > 0 { 367 | for _, c := range containerInfo { 368 | // don't allow remote management of anything we have locked 369 | if !IsManageable(c) { 370 | return errors.New("lock flag set, remote management denied") 371 | } 372 | } 373 | } else { 374 | return errors.New("container does not exist") 375 | } 376 | 377 | op, err := conn.DeleteContainer(name) 378 | if err != nil { 379 | return err 380 | } 381 | 382 | // Like before the update is a background process, wait for it to finish 383 | err = op.Wait()
384 | if err != nil { 385 | return err 386 | } 387 | 388 | return nil 389 | } 390 | 391 | // GetHostResources grabs (the kind of limited) info about a host, available CPU cores, Memory, ... 392 | func GetHostResources(host string) (map[string]HostResourceInfo, error) { 393 | resourceHostMap := make(map[string]HostResourceInfo) 394 | 395 | for _, lxdh := range Conf.LXDhosts { 396 | if host == "" || lxdh.Host == host { 397 | resources := &api.Resources{} 398 | 399 | conn, err := getConnection(lxdh.Host) 400 | if err != nil { 401 | log.Print("Connection error to " + lxdh.Host + " : " + err.Error()) 402 | } else { 403 | resources, err = conn.GetServerResources() 404 | if err != nil { 405 | return nil, err 406 | } 407 | } 408 | 409 | resourceHostMap[lxdh.Host] = HostResourceInfo{ 410 | Host: lxdh, 411 | Resources: resources, 412 | } 413 | } 414 | } 415 | 416 | return resourceHostMap, nil 417 | } 418 | 419 | // GetStoragePools gets a list of all the storage pools available for each host 420 | func GetStoragePools(host string) (map[string][]string, error) { 421 | storagePoolMap := make(map[string][]string) 422 | 423 | for _, lxdh := range Conf.LXDhosts { 424 | if host == "" || lxdh.Host == host { 425 | conn, err := getConnection(lxdh.Host) 426 | if err != nil { 427 | log.Print("Connection error to " + lxdh.Host + " : " + err.Error()) 428 | continue 429 | } 430 | 431 | pools, err := conn.GetStoragePoolNames() 432 | if err != nil { 433 | log.Print("Error getting pools from " + lxdh.Host + " : " + err.Error()) 434 | continue 435 | } 436 | 437 | storagePoolMap[lxdh.Host] = append(storagePoolMap[lxdh.Host], pools...) 438 | } 439 | } 440 | 441 | return storagePoolMap, nil 442 | } 443 | 444 | // CreateFile creates a file or directory on the container. If the provided path ends in / we assume 445 | // that we are creating a directory 446 | func CreateFile(host string, name string, path string, mode int, contents string) error { 447 | conn, err := getConnection(host) 448 | if err != nil { 449 | return err 450 | } 451 | 452 | filetype := "file" 453 | if strings.HasSuffix(path, "/") { 454 | filetype = "directory" 455 | } 456 | 457 | args := lxd.ContainerFileArgs{ 458 | Content: strings.NewReader(contents), 459 | Mode: mode, 460 | Type: filetype, 461 | } 462 | 463 | err = conn.CreateContainerFile(name, path, args) 464 | if err != nil { 465 | return err 466 | } 467 | 468 | return nil 469 | } 470 |
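Since the trailing-slash convention above is easy to miss, here is a minimal sketch of using CreateFile to make a directory and then a file inside it. The host, container name, and key contents are hypothetical, and it assumes lxd.Conf has already been populated so getConnection can find the host:

```
package main

import (
	"log"

	"github.com/neophenix/lxdepot/internal/lxd"
)

func main() {
	// A path ending in "/" is created as a directory; anything else is
	// written as a file with the given mode and contents.
	if err := lxd.CreateFile("lxd01.example.com", "web01", "/root/.ssh/", 0700, ""); err != nil {
		log.Fatal(err)
	}
	contents := "ssh-ed25519 AAAA... user@example"
	if err := lxd.CreateFile("lxd01.example.com", "web01", "/root/.ssh/authorized_keys", 0600, contents); err != nil {
		log.Fatal(err)
	}
}
```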
471 | // ExecCommand runs a command on the container and discards the output. As further comments state, 472 | // there doesn't seem to be an accurate return of success or not, need to look for a status code return. 473 | // If a way is found, likely will stop discarding output and return that to the UI. -1 is our return if 474 | // something outside the command went wrong 475 | func ExecCommand(host string, name string, command []string) (float64, error) { 476 | conn, err := getConnection(host) 477 | if err != nil { 478 | return -1, err 479 | } 480 | 481 | cmd := api.ContainerExecPost{ 482 | Command: command, 483 | WaitForWS: true, 484 | Interactive: false, 485 | } 486 | 487 | // We can't seem to get an accurate answer if the command executes or not, so 488 | // just going to toss the output until that changes 489 | var ignore DiscardCloser 490 | args := lxd.ContainerExecArgs{ 491 | Stdin: os.Stdin, 492 | Stdout: ignore, 493 | Stderr: ignore, 494 | } 495 | 496 | // schedule the command to execute 497 | op, err := conn.ExecContainer(name, cmd, &args) 498 | if err != nil { 499 | return -1, err 500 | } 501 | 502 | // wait for the command to finish 503 | err = op.Wait() 504 | if err != nil { 505 | return -1, err 506 | } 507 | 508 | // Get the status of the command and convert the return value to a number 509 | status := op.Get() 510 | statuscode, ok := status.Metadata["return"].(float64) 511 | if !ok { 512 | return -1, errors.New("failed to parse return value") 513 | } 514 | 515 | return statuscode, nil 516 | } 517 | 518 | // MoveContainer will move (copy in lxd speak) a container from one server to another. 519 | func MoveContainer(srcHost string, dstHost string, name string) error { 520 | // copy works by first marking the container as ready for migration, then connecting to the 521 | // destination and telling it to make a copy, then finally deleting from the source 522 | srcconn, err := getConnection(srcHost) 523 | if err != nil { 524 | return err 525 | } 526 | 527 | dstconn, err := getConnection(dstHost) 528 | if err != nil { 529 | return err 530 | } 531 | 532 | // Get container list to make sure we actually have a container with this name 533 | containerInfo, err := GetContainers(srcHost, name, false) 534 | if err != nil { 535 | return err 536 | } 537 | 538 | if len(containerInfo) > 0 { 539 | for _, c := range containerInfo { 540 | // don't allow remote management of anything we have locked 541 | if !IsManageable(c) { 542 | return errors.New("lock flag set, remote management denied") 543 | } 544 | } 545 | } else { 546 | return errors.New("container does not exist") 547 | } 548 | 549 | // set our migration status to true 550 | err = toggleMigration(srcconn, name, true) 551 | if err != nil { 552 | return err 553 | } 554 | 555 | // Now, on the destination, make the copy
556 | c := api.Container{ 557 | Name: name, 558 | } 559 | args := &lxd.ContainerCopyArgs{ 560 | Live: true, 561 | } 562 | op, err := dstconn.CopyContainer(srcconn, c, args) 563 | if err != nil { 564 | err2 := toggleMigration(srcconn, name, false) 565 | if err2 != nil { 566 | return fmt.Errorf("error copying container (%v); error while unmigrating container (%v)", err, err2) 567 | } 568 | return err 569 | } 570 | 571 | err = op.Wait() 572 | if err != nil { 573 | err2 := toggleMigration(srcconn, name, false) 574 | if err2 != nil { 575 | return fmt.Errorf("error copying container (%v); error while unmigrating container (%v)", err, err2) 576 | } 577 | return err 578 | } 579 | 580 | // And finally remove the container from the src, if this fails we aren't going to try to rollback anything 581 | err = DeleteContainer(srcHost, name) 582 | return err 583 | } 584 | 585 | // toggleMigration is a helper for MoveContainer to toggle the migration flag on / off if 586 | // we want to move it, or if we later run into an error and need to flip it back 587 | func toggleMigration(conn lxd.ContainerServer, name string, migrate bool) error { 588 | post := api.ContainerPost{ 589 | Migration: migrate, 590 | Live: migrate, 591 | } 592 | 593 | // like other commands, get the operation and then wait on it, just return here; later, 594 | // if we hit an error we probably need to try to un-migrate the thing 595 | op, err := conn.MigrateContainer(name, post) 596 | if err != nil { 597 | return err 598 | } 599 | 600 | // TODO : OK, so the reason this doesn't return is once you kick off the migrate you need to 601 | // then make a copy request to run at the same time. In experimenting with this I keep 602 | // getting an error "Architecture isn't supported:" which I can't find any info about 603 | err = op.Wait() 604 | return err 605 | } 606 | 607 | // IsManageable just checks our lock flag, user.lxdepot_lock, to see if it is "true" or not 608 | func IsManageable(c ContainerInfo) bool { 609 | // don't allow remote management of anything we have locked 610 | if c.Container.ExpandedConfig["user.lxdepot_lock"] == "true" { 611 | return false 612 | } 613 | 614 | return true 615 | } 616 | 617 | // getConnection will either return a cached connection, or reach out and make a new connection 618 | // to the host before caching that 619 | func getConnection(host string) (lxd.ContainerServer, error) { 620 | if conn, ok := lxdConnections[host]; ok { 621 | return conn, nil 622 | } 623 | 624 | var lxdh *config.LXDhost 625 | for _, h := range Conf.LXDhosts { 626 | if h.Host == host { 627 | lxdh = h 628 | } 629 | } 630 | 631 | if lxdh == nil { 632 | log.Fatal("Could not find lxdhost [" + host + "] in config\n") 633 | } 634 | 635 | args := &lxd.ConnectionArgs{ 636 | TLSClientCert: Conf.Cert, 637 | TLSClientKey: Conf.Key, 638 | TLSServerCert: lxdh.Cert, 639 | } 640 | conn, err := lxd.ConnectLXD("https://"+lxdh.Host+":"+lxdh.Port, args) 641 | if err != nil { 642 | return conn, err 643 | } 644 | 645 | lxdConnections[host] = conn 646 | 647 | return conn, nil 648 | } 649 |
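For reference, a minimal sketch of driving this package end to end: create a locked container, then boot it. It assumes lxd.Conf has already been loaded from the YAML config, and the host name and image alias (lxd01.example.com, centos7) are hypothetical:

```
package main

import (
	"log"

	"github.com/neophenix/lxdepot/internal/lxd"
)

func main() {
	// user.lxdepot_lock is the flag IsManageable checks; setting it at
	// create time makes the container unmanageable by LXDepot afterwards.
	options := map[string]string{"user.lxdepot_lock": "true"}
	if err := lxd.CreateContainer("lxd01.example.com", "web01", "centos7", "default", options); err != nil {
		log.Fatal(err)
	}
	// The first boot is still allowed for locked containers because
	// LastUsedAt stays at the zero value until the container has started once.
	if err := lxd.StartContainer("lxd01.example.com", "web01"); err != nil {
		log.Fatal(err)
	}
}
```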
-------------------------------------------------------------------------------- /internal/utils/convert.go: -------------------------------------------------------------------------------- 1 | // Package utils is meant to be a collection of functions that could be useful elsewhere 2 | package utils 3 | 4 | import "fmt" 5 | 6 | // MakeBytesMoreHuman takes in a uint64 value that is meant to be something in bytes, like 7 | // memory usage, disk usage, etc. It returns a string converted to the appropriate unit with the 8 | // matching suffix: GB, MB, KB, or B 9 | func MakeBytesMoreHuman(bytes uint64) string { 10 | switch { 11 | case bytes >= 1073741824: 12 | return fmt.Sprintf("%v GB", bytes/1073741824) 13 | case bytes >= 1048576: 14 | return fmt.Sprintf("%v MB", bytes/1048576) 15 | case bytes >= 1024: 16 | return fmt.Sprintf("%v KB", bytes/1024) 17 | } 18 | 19 | return fmt.Sprintf("%v B", bytes) 20 | } 21 | 22 | // MakeIntBytesMoreHuman same as MakeBytesMoreHuman but takes in an int64, mostly because 23 | // the lxd client returns byte values as differing integer types /shrug 24 | func MakeIntBytesMoreHuman(bytes int64) string { 25 | return MakeBytesMoreHuman(uint64(bytes)) 26 | } 27 | -------------------------------------------------------------------------------- /service-definitions/systemd/lxdepot.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=LXDepot 3 | After=network.target 4 | 5 | [Service] 6 | WorkingDirectory=/opt/lxdepot/ 7 | ExecStart=/opt/lxdepot/lxdepot 8 | Restart=always 9 | 10 | [Install] 11 | WantedBy=default.target 12 | -------------------------------------------------------------------------------- /web/static/css/main.css: -------------------------------------------------------------------------------- 1 | body { 2 | margin: 0px; 3 | font-family: sans-serif; 4 | } 5 | 6 | #page-container { 7 | display: grid; 8 | grid-template-areas: 9 | 'header header' 10 | 'menu content'; 11 | grid-template-columns: 200px 1fr; 12 | grid-template-rows: 40px 1fr; 13 | } 14 | 15 | #header { 16 | grid-area: header; 17 | background-color: #333333; 18 | color: #f5f5f5; 19 | line-height: 40px; 20 | vertical-align: middle; 21 | } 22 | 23 | #header a { 24 | text-decoration: none; 25 | color: #f5f5f5; 26 | } 27 | 28 | #content { 29 | grid-area: content; 30 | } 31 | 32 | #proj-name { 33 | font-size: 20px; 34 | padding: 10px; 35 | } 36 | 37 | #page-btn { 38 | position: absolute; 39 | top: 10px; 40 | right: 10px; 41 | } 42 | 43 | table { 44 | border-collapse: collapse; 45 | width: 100%; 46 | } 47 | 48 | thead { 49 | background-color: #888888; 50 | } 51 | 52 | th, 53 | td { 54 | text-align: left; 55 | padding: 8px; 56 | } 57 | 58 | tr:nth-child(even) { 59 | background-color: #ececec; 60 | } 61 | 62 | #menu { 63 | grid-area: menu; 64 | background-color: #d64b00; 65 | height: 100vh; 66 | } 67 | 68 | #menu ul { 69 | padding: 0; 70 | margin: 0; 71 | } 72 | 73 | #menu li { 74 | list-style: none; 75 | line-height: 40px; 76 | text-align: center; 77 | } 78 | 79 | #menu li:hover { 80 | background-color: #ff5900; 81 | cursor: pointer; 82 | } 83 | 84 | #menu li a { 85 | text-decoration: none; 86 | color: #000000; 87 | width: 100%; 88 | display: inline-block; 89 | border-bottom: #333333 1px solid; 90 | font-weight: bold; 91 | } 92 | 93 | #menu li a.active { 94 | background-color: #ff5900; 95 | } 96 | 97 | #panel-container { 98 | position: fixed; 99 | bottom: 0; 100 | width: 100%; 101 | } 102 | 103 | #panel-menu { 104 | background-color: dimgray; 105 | height: 20px; 106 | clear: both; 107 | } 108 | 109 | #panel-controls { 110 | float: right; 111 | /* keep the control away from the browser's resize area */ 112 | margin-right: 15px; 113 | cursor: pointer; 114 | /* stretch our controls to make them look nicer and stand out */ 115 | transform: scale(2, 1); 116 | /* this barely does anything, but it's just enough that I like it more with it */ 117 | font-weight: bolder; 118 | } 119 | 120 | #panel { 121 | /*font-size:
30px;*/ 122 | color: white; 123 | background-color: black; 124 | height: 150px; 125 | overflow-y: scroll; 126 | display: none; 127 | } 128 | 129 | #panel .success { 130 | color: green; 131 | } 132 | 133 | #panel .error { 134 | color: red; 135 | } 136 | 137 | .field { 138 | margin: 5px 0px 10px 5px; 139 | } 140 | 141 | .containerRow { 142 | cursor: pointer; 143 | } 144 | 145 | .hostRow { 146 | cursor: pointer; 147 | } 148 | 149 | label { 150 | display: block; 151 | width: 150px; 152 | } 153 | 154 | input, 155 | select, 156 | button { 157 | font-family: inherit; 158 | font-size: 100%; 159 | width: 150px; 160 | margin: 0; 161 | box-sizing: border-box; 162 | } 163 | 164 | #deleteBtn { 165 | background-color: red; 166 | color: white; 167 | } 168 | 169 | .small { 170 | font-size: .85em; 171 | } 172 | 173 | .move-left { 174 | margin-left: -5px; 175 | } 176 | 177 | .quarter { 178 | width: 25%; 179 | } -------------------------------------------------------------------------------- /web/static/favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/neophenix/lxdepot/7f8d8f8780fe1073879eae1578d21d0ff426efdb/web/static/favicon.ico -------------------------------------------------------------------------------- /web/templates/404.tmpl: -------------------------------------------------------------------------------- 1 | {{define "content"}} 2 |

404

3 | Could not find a route to the page you are looking for, try again 4 | {{end}} 5 | 6 | {{define "js"}} 7 | {{end}} 8 | 9 | 10 | {{define "pagebtn"}} 11 | {{end}} 12 | -------------------------------------------------------------------------------- /web/templates/base.tmpl: -------------------------------------------------------------------------------- 1 | {{define "base"}} 2 | 3 | 4 | 5 | 6 | 7 |
8 | 12 | 19 |
20 | {{template "content" .}} 21 |
22 | 23 |
24 |
25 |
26 |
27 |
28 | 29 | 170 | {{template "js" .}} 171 | 172 | {{end}} 173 | -------------------------------------------------------------------------------- /web/templates/container.tmpl: -------------------------------------------------------------------------------- 1 | {{define "content"}} 2 | 3 | 4 | 5 | 6 | 7 | 8 | {{if ne (index .Conf.DNS.Options "zone") ""}} 9 | 10 | 11 | 12 | 13 | {{end}} 14 | 15 | 16 | 27 | 28 | 29 | 30 | {{if and 0 (ge (len .Conf.LXDhosts) 1) (ne (index .Container.Container.ExpandedConfig "user.lxdepot_lock") "true")}} 31 | 32 | 44 | {{else}} 45 | {{range .Conf.LXDhosts}} 46 | {{if eq .Host $.Container.Host.Host}} 47 | 48 | {{end}} 49 | {{end}} 50 | {{end}} 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 77 | 78 | {{if ne (index .Container.Container.ExpandedConfig "user.lxdepot_lock") "true"}} 79 | {{if eq .Container.Container.Status "Running"}} 80 | {{if .Playbooks}} 81 | 82 | 83 | 91 | 92 | {{end}} 93 | {{end}} 94 | {{end}} 95 | 96 |
Name{{.Container.Container.Name}}
FQDN{{.Container.Container.Name}}.{{index .Conf.DNS.Options "zone"}}
IP Address 17 | {{range $iface, $info := .Container.State.Network}} 18 | {{if (ne $iface "lo")}} 19 | {{range $info.Addresses}} 20 | {{if (eq .Family "inet")}} 21 | {{.Address}} ({{$iface}})
22 | {{end}} 23 | {{end}} 24 | {{end}} 25 | {{end}} 26 |
Host 33 | 42 | 43 | {{.Name}}
Image{{index .Container.Container.ContainerPut.Config "image.description"}}
CPU{{printf "%.02f" (index .Container.Usage "cpu")}}%
Memory{{MakeIntBytesMoreHuman .Container.State.Memory.Usage}}
Status{{.Container.Container.Status}}
Last Boot 71 | {{if eq .Container.Container.LastUsedAt.Unix 0}} 72 | Never 73 | {{else}} 74 | {{.Container.Container.LastUsedAt}} 75 | {{end}} 76 |
Playbooks 84 | 89 | 90 |
97 | {{if ne (index .Container.Container.ExpandedConfig "user.lxdepot_lock") "true"}} 98 |
99 | {{if eq .Container.Container.Status "Stopped"}} 100 | 101 | {{end}} 102 | {{if eq .Container.Container.Status "Running"}} 103 | 104 | {{end}} 105 | 106 |
107 | {{end}} 108 | {{end}} 109 | 110 | {{define "js"}} 111 | 157 | {{end}} 158 | 159 | {{define "pagebtn"}} 160 | {{end}} 161 | -------------------------------------------------------------------------------- /web/templates/container_list.tmpl: -------------------------------------------------------------------------------- 1 | {{define "content"}} 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | {{range .Containers}} 13 | 14 | 15 | 16 | 27 | 28 | 29 | 30 | 31 | {{end}} 32 | 33 |
HostNameIP AddressCPUMemoryStatus
{{.Host.Name}}{{.Container.Name}} 17 | {{range $iface, $info := .State.Network}} 18 | {{if (ne $iface "lo")}} 19 | {{range $info.Addresses}} 20 | {{if (eq .Family "inet")}} 21 | {{.Address}} ({{$iface}})
22 | {{end}} 23 | {{end}} 24 | {{end}} 25 | {{end}} 26 |
{{printf "%.02f" (index .Usage "cpu")}}%%{{MakeIntBytesMoreHuman .State.Memory.Usage}}{{.Container.Status}}
34 | {{end}} 35 | 36 | {{define "js"}} 37 | 57 | {{end}} 58 | 59 | 60 | {{define "pagebtn"}} 61 | 62 | {{end}} 63 | -------------------------------------------------------------------------------- /web/templates/container_new.tmpl: -------------------------------------------------------------------------------- 1 | {{define "content"}} 2 |
3 | 4 | 5 | 6 | 12 | 13 | 14 | 15 | 16 | 23 | 24 | 25 | 26 | 27 | 31 | 32 | 33 | 34 | 35 | 39 | 40 | 41 | 42 | 43 | 46 | 47 | 48 | 49 | 50 | 57 | 58 | 59 | 60 | 61 | 64 | 65 |
7 | 8 | {{if ne (index .Conf.DNS.Options "zone") ""}} 9 | .{{index .Conf.DNS.Options "zone"}} 10 | {{end}} 11 |
17 | 22 |
28 | 30 |
36 | 38 |
44 | 45 |
51 | 52 | 56 |
62 | 63 |
66 | 67 |
68 | 69 |
70 |
71 | {{end}} 72 | 73 | {{define "js"}} 74 | 180 | {{end}} 181 | 182 | 183 | {{define "pagebtn"}} 184 | {{end}} 185 | -------------------------------------------------------------------------------- /web/templates/host_list.tmpl: -------------------------------------------------------------------------------- 1 | {{define "content"}} 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | {{range .Conf.LXDhosts}} 11 | 12 | 13 | 14 | 15 | 16 | 17 | {{end}} 18 | 19 |
HostCPUsMemory Used / TotalContainers Running / Total
{{.Name}}{{(index $.HostResourceMap .Host).Resources.CPU.Total}}{{MakeBytesMoreHuman (index $.HostResourceMap .Host).Resources.Memory.Used}} / {{MakeBytesMoreHuman (index $.HostResourceMap .Host).Resources.Memory.Total}}{{index (index $.HostContainerInfo .Host) "running"}} / {{index (index $.HostContainerInfo .Host) "total"}}
20 | {{end}} 21 | 22 | {{define "js"}} 23 | 36 | {{end}} 37 | 38 | {{define "pagebtn"}} 39 | {{end}} 40 | -------------------------------------------------------------------------------- /web/templates/image_list.tmpl: -------------------------------------------------------------------------------- 1 | {{define "content"}} 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | {{range .Images}} 11 | 12 | 13 | 18 | 19 | 20 | 21 | {{end}} 22 | 23 |
HostAliasesArchFingerprint
{{.Host.Name}} 14 | {{range .Aliases}} 15 | {{.Name}} 16 | {{end}} 17 | {{.Architecture}}{{.Fingerprint}}
24 | {{end}} 25 | 26 | {{define "js"}} 27 | {{end}} 28 | 29 | 30 | {{define "pagebtn"}} 31 | {{end}} 32 | --------------------------------------------------------------------------------