├── 000-example
│   └── proposal.md
├── 003-rbac
│   └── proposal.md
├── 027-var-steps
│   └── proposal.md
├── 029-across-step
│   └── proposal.md
├── 031-set-pipeline-step
│   └── proposal.md
├── 033-archiving-pipelines
│   └── proposal.md
├── 034-instanced-pipelines
│   └── proposal.md
├── 037-prototypes
│   └── proposal.md
├── 038-resource-prototypes
│   └── proposal.md
├── 039-var-sources
│   └── proposal.md
├── 040-valid-identifiers
│   └── proposal.md
├── 041-opa-integration
│   └── proposal.md
├── 082-p2p-volume-streaming
│   └── proposal.md
├── 139-pipeline-identity-tokens
│   ├── img
│   │   └── AWS-IDP.png
│   └── proposal.md
├── DESIGN_PRINCIPLES.md
├── LICENSE.md
└── README.md

/000-example/proposal.md:
--------------------------------------------------------------------------------
1 | > This is a fairly open-ended template; feel free to remove any sections that
2 | > you can't really fill out just yet.
3 | 
4 | # Summary
5 | 
6 | > Summarize your change.
7 | 
8 | 
9 | # Motivation
10 | 
11 | > Describe the problem that this change intends to solve. Don't get into how it
12 | > solves it yet.
13 | >
14 | > This can be brief, but if there is any additional context or any
15 | > empathy-building that you'd like to communicate, it can't hurt to include it
16 | > with plenty of detail and hypothetical examples.
17 | 
18 | 
19 | # Proposal
20 | 
21 | > Describe your proposal.
22 | >
23 | > Things that can help: clearly defining terms, providing example content,
24 | > pseudocode, etc.
25 | >
26 | > Feel free to mention key implementation concerns.
27 | 
28 | 
29 | # Open Questions
30 | 
31 | > Raise any concerns here for things you aren't sure about yet.
32 | 
33 | 
34 | # Answered Questions
35 | 
36 | > If there were any major concerns that have already (or eventually, through
37 | > the RFC process) reached consensus, it can still help to include them along
38 | > with their resolution, if it's otherwise unclear.
39 | >
40 | > This can be especially useful for RFCs that have taken a long time and there
41 | > were some subtle yet important details to get right.
42 | >
43 | > This may very well be empty if the proposal is simple enough.
44 | 
45 | 
46 | # New Implications
47 | 
48 | > What is the impact of this change, outside of the change itself? How might it
49 | > change people's workflows today, good or bad?
50 | 
--------------------------------------------------------------------------------
/003-rbac/proposal.md:
--------------------------------------------------------------------------------
1 | * RFC PR: [concourse/rfcs#6](https://github.com/concourse/rfcs/pull/6)
2 | * Concourse Issue: n/a
3 | 
4 | # Summary
5 | 
6 | This proposal outlines the beginnings of support for Role Based Access Control
7 | (or RBAC for short).
8 | 
9 | 
10 | # Motivation
11 | 
12 | A number of users have been asking for ways to restrict the permissions given
13 | to various users in the system, so this will be an attempt to outline what
14 | these restrictions might look like, and how they might work.
15 | 
16 | There are various GitHub issues around this whole problem space, namely:
17 | 
18 | * [#1317](https://github.com/concourse/concourse/issues/1317) (the 'organizing'
19 |   issue for RBAC)
20 | * [#769](https://github.com/concourse/concourse/issues/769) and
21 |   [#970](https://github.com/concourse/concourse/issues/970) for exposing
22 |   read-only access for a pipeline to another team
23 | 
24 | The scope of RBAC is quite large and there are likely many different
25 | configurations that users will want. This proposal is just a start.
26 | 
27 | 
28 | # Proposal
29 | 
30 | This proposal introduces the concept of a **role**. A role is a name associated
31 | with its authentication requirements.
32 | 
33 | Each team will support the following roles:
34 | 
35 | * `owner`: Has read/write access to the team's resources, and can make admin
36 |   changes to the team itself (`set-team`, `rename-team`, etc). This is
37 |   effectively the set of permissions granted to all team members prior to this
38 |   proposal, and all teams will have their existing configuration migrated to
39 |   this role upon upgrading.
40 | 
41 | * `member`: Has read/write access to the team's resources.
42 | 
43 | * `viewer`: Has read-only access to the team's resources.
44 | 
45 | For now, these roles are hard-coded into Concourse and are completely static.
46 | Dynamic role management is out-of-scope of this proposal.
47 | 
48 | 
49 | ## How are roles configured?
50 | 
51 | Currently, each team has a single authentication configuration, configured via
52 | `fly set-team`.
53 | 
54 | With this proposal, `fly set-team` will continue to function as-is with the
55 | difference that it will now be configuring the `owner` role. This is to remain
56 | backwards-compatible for users that do not need RBAC functionality.
57 | 
58 | To configure other roles, a configuration file must be used, passed via the
59 | `--config` (or `-c`) flag instead. This file contains the configuration for all
60 | roles.
61 | 
62 | For example:
63 | 
64 | ```sh
65 | fly -t my-target set-team -n my-team -c config.yml
66 | ```
67 | 
68 | ...with a `config.yml` as follows:
69 | 
70 | ```yaml
71 | roles:
72 |   owner:
73 |     local:
74 |       users: ["some-admin"]
75 |   member:
76 |     github:
77 |       users: ["my-github-login"]
78 |       teams: ["my-org:my-github-team"]
79 |     cf:
80 |       users: ["myusername"]
81 |       spaces: ["myorg:myspace"]
82 |   viewer:
83 |     local:
84 |       users: ["read-only-user"]
85 | ```
86 | 
87 | 
88 | ## How do roles get persisted in the database?
89 | 
90 | The configuration for a team's authentication is currently stored in the
91 | database as a JSON object containing the Dex users and groups that are
92 | permitted access. For example, a team permitting the 'Developers' team in the
93 | `concourse` GitHub organization and the `pivotal-jwinters` GitHub user would look
94 | something like this:
95 | 
96 | ```json
97 | {
98 |   "groups": ["github:concourse:Developers"],
99 |   "users": ["github:pivotal-jwinters"]
100 | }
101 | ```
102 | 
103 | With roles, this configuration simply becomes nested under each role.
104 | 
105 | For example, a configuration that preserves the above configuration for the
106 | `owner` role but has a `viewer` role allowing anyone in the `concourse` GitHub
107 | organization would look like this:
108 | 
109 | ```json
110 | {
111 |   "viewer": {
112 |     "groups": ["github:concourse"],
113 |     "users": []
114 |   },
115 |   "owner": {
116 |     "groups": ["github:concourse:Developers"],
117 |     "users": ["github:pivotal-jwinters"]
118 |   }
119 | }
120 | ```
121 | 
122 | The `up` migration will move the existing configuration under an `"owner"` key
123 | for backwards-compatibility. To configure other roles or modify the owner role,
124 | this can be changed as normal by using `fly set-team` with a config file as
125 | described above.
126 | 
127 | 
128 | ## How are roles handled by the API?
129 | 
130 | Currently, when a user logs in, Concourse uses each team's auth config to
131 | determine whether the user has access to each team. These teams are then stored
132 | as a claim in the user's token, looking something like this:
133 | 
134 | ```json
135 | {
136 |   "teams": ["team1", "team2"]
137 | }
138 | ```
139 | 
140 | With this proposal, this approach changes slightly to instead check against the
141 | auth config of each team's *roles*. Any roles matching the user are then stored
142 | in the token, associated to the team in a map, like so:
143 | 
144 | ```json
145 | {
146 |   "teams": {
147 |     "team1": ["owner"],
148 |     "team2": ["member", "viewer"]
149 |   }
150 | }
151 | ```
152 | 
153 | Concourse's API endpoints will each be modified to require one of the three
154 | roles for the team.
155 | 
156 | The proposed endpoint-to-role mapping is as follows:
157 | 
158 | ```go
159 | var requiredRoles = map[string]string{
160 |   atc.SaveConfig: "member",
161 |   atc.GetConfig: "viewer",
162 |   atc.GetBuild: "viewer",
163 |   atc.GetBuildPlan: "viewer",
164 |   atc.CreateBuild: "member",
165 |   atc.ListBuilds: "viewer",
166 |   atc.BuildEvents: "viewer",
167 |   atc.BuildResources: "viewer",
168 |   atc.AbortBuild: "member",
169 |   atc.GetBuildPreparation: "viewer",
170 |   atc.GetJob: "viewer",
171 |   atc.CreateJobBuild: "member",
172 |   atc.ListAllJobs: "viewer",
173 |   atc.ListJobs: "viewer",
174 |   atc.ListJobBuilds: "viewer",
175 |   atc.ListJobInputs: "viewer",
176 |   atc.GetJobBuild: "viewer",
177 |   atc.PauseJob: "member",
178 |   atc.UnpauseJob: "member",
179 |   atc.GetVersionsDB: "viewer",
180 |   atc.JobBadge: "viewer",
181 |   atc.MainJobBadge: "viewer",
182 |   atc.ClearTaskCache: "member",
183 |   atc.ListAllResources: "viewer",
184 |   atc.ListResources: "viewer",
185 |   atc.ListResourceTypes: "viewer",
186 |   atc.GetResource: "viewer",
187 |   atc.PauseResource: "member",
188 |   atc.UnpauseResource: "member",
189 |   atc.UnpinResource: "member",
190 |   atc.CheckResource: "member",
191 |   atc.CheckResourceWebHook: "member",
192 |   atc.CheckResourceType: "member",
193 |   atc.ListResourceVersions: "viewer",
194 |   atc.GetResourceVersion: "viewer",
195 |   atc.EnableResourceVersion: "member",
196 |   atc.DisableResourceVersion: "member",
197 |   atc.PinResourceVersion: "member",
198 |   atc.ListBuildsWithVersionAsInput: "viewer",
199 |   atc.ListBuildsWithVersionAsOutput: "viewer",
200 |   atc.GetResourceCausality: "viewer",
201 |   atc.ListAllPipelines: "viewer",
202 |   atc.ListPipelines: "viewer",
203 |   atc.GetPipeline: "viewer",
204 |   atc.DeletePipeline: "member",
205 |   atc.OrderPipelines: "member",
206 |   atc.PausePipeline: "member",
207 |   atc.UnpausePipeline: "member",
208 |   atc.ExposePipeline: "member",
209 |   atc.HidePipeline: "member",
210 |   atc.RenamePipeline: "member",
211 |   atc.ListPipelineBuilds: "viewer",
212 |   atc.CreatePipelineBuild: "member",
213 |   atc.PipelineBadge: "viewer",
214 |   atc.RegisterWorker: "member",
215 |   atc.LandWorker: "member",
216 |   atc.RetireWorker: "member",
217 |   atc.PruneWorker: "member",
218 |   atc.HeartbeatWorker: "member",
219 |   atc.ListWorkers: "viewer",
220 |   atc.DeleteWorker: "member",
221 |   atc.SetLogLevel: "member",
222 |   atc.GetLogLevel: "viewer",
223 |   atc.DownloadCLI: "viewer",
224 |   atc.GetInfo: "viewer",
225 |   atc.GetInfoCreds: "viewer",
226 |   atc.ListContainers: "viewer",
227 |   atc.GetContainer: "viewer",
228 |   atc.HijackContainer: "member",
229 |   atc.ListDestroyingContainers: "viewer",
230 |   atc.ReportWorkerContainers: "member",
231 |   atc.ListVolumes: "viewer",
232 |   atc.ListDestroyingVolumes: "viewer",
233 |   atc.ReportWorkerVolumes: "member",
234 |   atc.ListTeams: "viewer",
235 |   atc.SetTeam: "owner",
236 |   atc.RenameTeam: "owner",
237 |   atc.DestroyTeam: "owner",
238 |   atc.ListTeamBuilds: "viewer",
239 |   atc.SendInputToBuildPlan: "member",
240 |   atc.ReadOutputFromBuildPlan: "member",
241 | }
242 | ```
243 | 
--------------------------------------------------------------------------------
/027-var-steps/proposal.md:
--------------------------------------------------------------------------------
1 | * RFC PR: [concourse/rfcs#27](https://github.com/concourse/rfcs/pull/27)
2 | * Concourse Issue: [concourse/concourse#5815](https://github.com/concourse/concourse/issues/5815)
3 | 
4 | # var steps + local var sources
5 | 
6 | This proposal introduces two new step types - `load_var` and `get_var` - along
7 | with a new mechanism for builds to use a "local var source" at runtime.
8 | 
9 | * The `load_var` step can be used to read a value from a file at runtime and
10 |   use it as a local var in subsequent steps.
11 | 
12 | * The `get_var` step can be used to fetch a var from a var source and trigger a
13 |   new build when its value changes.
14 | 
15 | Both steps save the var in the build's "local var source", accessed through the
16 | special var source name `.` - e.g. `((.:some-var))`.
17 | 
18 | ## Motivation
19 | 
20 | 1. The `load_var` step introduces a general mechanism for using a file's
21 |    contents to parameterize later steps. As a result, resource (proto)type
22 |    authors will no longer have to implement `*_file` forms of any of their
23 |    parameters.
24 | 
25 | 1. The `get_var` step introduces a more flexible way to trigger and parameterize
26 |    jobs without having to use a resource.
27 | 
28 |    * A `vault` var source type could be used to trigger a job when a
29 |      credential's value changes, in addition to its normal use for var syntax.
30 | 
31 |    * A `time` var source type could be used to trigger jobs on independent
32 |      intervals.
33 | 
34 |    By invoking the var source type with metadata about the job, the var source
35 |    type can base its behavior on the job in question:
36 | 
37 |    * A `vault` var source can use the team and pipeline name to look for the var
38 |      under scoped paths.
39 | 
40 |    * A `time` var source could use a hash of the job ID to produce a unique
41 |      interval for each job.
42 | 
43 |    With the `time` var source producing a unique interval for each job, this
44 |    will eliminate the "stampeding herd" problem caused by having many jobs
45 |    downstream of a single `time` resource.
46 | 
47 |    This would in turn allow us to un-feature-flag the long-running ["global
48 |    resource history" experiment][global-resources-issue], which allows
49 |    Concourse to optimize equivalent resource definitions into a single history,
50 |    requiring only one `check` interval to keep everything up to date, and
51 |    lowering database disk usage.
52 | 
53 | ## Proposal
54 | 
55 | This proposal introduces a new "local var source", accessed through the special
56 | name `.`. This local var source contains vars that were set in a temporary
57 | local scope.
58 | 
59 | Each build will have a single local scope which steps can use for setting and
60 | getting var values.
61 | 
62 | This proposal introduces two new steps which set vars in the local scope:
63 | `load_var` and `get_var`.
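As a quick sketch of the `*_file` point from the motivation above, a hypothetical resource with a `tag` parameter could be parameterized from a file without the resource author having to provide a `tag_file` variant (the resource and its params here are made up for illustration):

```yaml
plan:
- task: generate-version
  outputs: [version]
- load_var: tag           # sets ((.:tag)) from the file's contents
  file: version/number
- put: my-release         # hypothetical resource
  params:
    tag: ((.:tag))        # no resource-provided tag_file param needed
```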
64 | 
65 | ### `load_var`: loading a var's value from a file at runtime
66 | 
67 | The following example uses a task to generate a branch name and then
68 | parameterizes a `git` resource with it in order to push to the branch:
69 | 
70 | ```yaml
71 | plan:
72 | - task: generate-branch-name
73 |   outputs: [branch-name]
74 | - load_var: branch-name
75 |   file: branch-name/name
76 | - put: booklit
77 |   params:
78 |     branch: ((.:branch-name))
79 |     base: master
80 | ```
81 | 
82 | In this case, the `name` file will contain a simple string value.
83 | 
84 | With secret redaction enabled, the value for `((.:branch-name))` will be
85 | redacted by default.
86 | 
87 | In this case the value isn't sensitive, and it's probably more helpful to allow
88 | it to be printed. To disable redaction for the var's value, configure
89 | `sensitive: false`:
90 | 
91 | ```yaml
92 | load_var: branch-name
93 | file: branch-name/name
94 | sensitive: false
95 | ```
96 | 
97 | If a filename ending in `.yml`, `.yaml`, or `.json` is referenced, its value
98 | will automatically be parsed and used as fields for the var. Let's tweak the
99 | example to demonstrate:
100 | 
101 | ```yaml
102 | plan:
103 | - task: generate-branch-name
104 |   outputs: [branch-info]
105 | - load_var: branch
106 |   file: branch-info/info.json # {"name":"foo"}
107 | - put: booklit
108 |   params:
109 |     branch: ((.:branch.name))
110 |     base: master
111 | ```
112 | 
113 | In some cases you might not want the value to be parsed as fields for the var.
114 | For example, GCP credentials are JSON format but almost always handed around
115 | verbatim. The automatic parsing can be disabled by explicitly setting `format:
116 | raw`:
117 | 
118 | ```yaml
119 | load_var: gcp-key
120 | file: super-secret/gcp.json
121 | format: raw
122 | ```
123 | 
124 | Similarly, you may have a file with no file extension but still want to parse
125 | it as YAML or JSON. This can be done by explicitly configuring the format:
126 | 
127 | ```yaml
128 | load_var: branch
129 | file: branch-info/info
130 | format: json
131 | ```
132 | 
133 | ### `get_var`: triggering on changes from a var source
134 | 
135 | The following example uses a `time` var source type to periodically trigger a
136 | job:
137 | 
138 | ```yaml
139 | var_sources:
140 | - name: test-interval
141 |   type: time
142 |   config:
143 |     interval: 10m
144 | 
145 | jobs:
146 | - name: trigger-over-time
147 |   plan:
148 |   # trigger on changes to `((test-interval:time))`
149 |   - get_var: time
150 |     source: test-interval
151 |     trigger: true
152 | ```
153 | 
154 | The following example uses a `vault` var source type to trigger a job whenever
155 | a credential rotates:
156 | 
157 | ```yaml
158 | var_sources:
159 | - name: my-vault
160 |   type: vault
161 |   config:
162 |     url: https://vault.example.com
163 |     ca_cert: # ...
164 |     client_cert: # ...
165 |     client_key: # ...
166 | 
167 | jobs:
168 | - name: trigger-on-credential-change
169 |   plan:
170 |   # trigger on changes to ((my-vault:cert))
171 |   - get_var: cert
172 |     source: my-vault
173 |     trigger: true
174 |   - put: deployment
175 |     params:
176 |       ca_cert: ((.:cert))
177 | 
178 |   # trigger on changes to ((my-vault:cert/foo/bar))
179 |   - get_var: cert/foo/bar
180 |     source: my-vault
181 |     trigger: true
182 |   - put: deployment
183 |     params:
184 |       ca_cert: ((.:cert/foo/bar))
185 | ```
186 | 
187 | Build scheduling invokes the var source with a `get` request against an object,
188 | interpreting the response object as the var's values. If the value is different
189 | from the last value used, a new build is triggered. This comparison can be
190 | based on a hash so we don't have to store sensitive credential values.
191 | 
192 | A `time` var source's input object might look something like this:
193 | 
194 | ```json
195 | {
196 |   "var": "time",
197 |   "interval": "10m",
198 |   "team": "some-team",
199 |   "pipeline": "some-pipeline",
200 |   "job": "some-job"
201 | }
202 | ```
203 | 
204 | Note the addition of `team`, `pipeline`, and `job` - this will be automated by
205 | Concourse. (TODO: The format and contents of these values is something we
206 | should probably put more thought into; we may want it to match the
207 | [notifications RFC][notifications-rfc].)
208 | 
209 | And the response might look something like this:
210 | 
211 | ```json
212 | {
213 |   "iso8601": "2020-01-18T23:09:00-05:00",
214 |   "unix": 1579406940
215 | }
216 | ```
217 | 
218 | This response would then be loaded into the build's local var source, available
219 | as `((.:time))`, and with its fields read as e.g. `((.:time.iso8601))` or
220 | `((.:time.unix))`.
221 | 
222 | ## Open Questions
223 | 
224 | * n/a
225 | 
226 | ## Answered Questions
227 | 
228 | * n/a
229 | 
230 | ## New Implications
231 | 
232 | 1. A separate RFC could be written so that `get` steps can also provide a local
233 |    var containing the object returned by the resource. This could be used
234 |    instead of writing values to files.
235 | 
236 | [resources-rfc]: https://github.com/vito/rfcs/blob/resource-prototypes/038-resource-prototypes/proposal.md
237 | [global-resources-issue]: https://github.com/concourse/concourse/issues/2386
238 | [notifications-rfc]: https://github.com/concourse/rfcs/pull/28
239 | 
--------------------------------------------------------------------------------
/029-across-step/proposal.md:
--------------------------------------------------------------------------------
1 | # `across` step
2 | 
3 | This proposal introduces a mechanism by which a build plan can be executed
4 | multiple times with different `((vars))`, in service of build matrixes, pipeline
5 | matrixes, and - building on [`((var))` sources][var-sources-rfc] - dynamic
6 | variations of both.
7 | 
8 | 
9 | ## Motivation
10 | 
11 | * Support dynamic multi-branch workflows:
12 |   [concourse/concourse#1172][multi-branch-issue]
13 | * Support build matrixes and pipeline matrixes.
14 | * Replace a common, and very flawed, usage pattern of `version: every`, which
15 |   has proven to be very complicated to support.
16 | 
17 | 
18 | ## Proposal
19 | 
20 | The `across` step modifier is given a list containing vars and associated
21 | values to set when executing the step. The step will be executed across all
22 | combinations of var values.
23 | 
24 | ### Static var values
25 | 
26 | The set of values can be static, as below:
27 | 
28 | ```yaml
29 | task: unit
30 | vars: {go_version: ((.:go_version))}
31 | across:
32 | - var: go_version
33 |   values: [1.12, 1.13]
34 | ```
35 | 
36 | This will run the `unit` task twice: once with `go_version` set to `1.12`, and
37 | again with `1.13`.
38 | 
39 | ### Dynamic var values from a var source
40 | 
41 | Rather than static values, var values may be pulled from a var source
42 | dynamically:
43 | 
44 | ```yaml
45 | var_sources:
46 | - name: booklit-prs
47 |   type: github-prs
48 |   config:
49 |     repository: vito/booklit
50 |     access_token: # ...
51 | 
52 | plan:
53 | - set_pipeline: pr
54 |   instance_vars: {pr_number: ((.:pr.number))}
55 |   across:
56 |   - var: pr
57 |     source: booklit-prs
58 | ```
59 | 
60 | The above example will run the `set_pipeline` step across the set of all GitHub
61 | PRs, returned through a `list` operation on the `booklit-prs` var source.
62 | 
63 | ### Running across a matrix of var values
64 | 
65 | Multiple vars may be listed to form a matrix:
66 | 
67 | ```yaml
68 | set_pipeline: pr-go
69 | instance_vars:
70 |   pr_number: ((.:pr.number))
71 |   go_version: ((.:go_version))
72 | across:
73 | - var: pr
74 |   source: booklit-prs
75 | - var: go_version
76 |   values: [1.12, 1.13]
77 | ```
78 | 
79 | This will run 2 * (# of PRs) `set_pipeline` steps, setting two pipelines per
80 | PR: one for Go 1.12, and one for Go 1.13.
81 | 
82 | ### Controlling parallelism with `max_in_flight`
83 | 
84 | By default, the steps are executed serially to prevent surprising load
85 | increases from a dynamic var source suddenly returning a ton of values.
86 | 
87 | To run steps in parallel, a `max_in_flight` must be specified as either `all`
88 | or a number - its default is `1`. Note: this value is specified on each `var`,
89 | rather than the entire step.
90 | 
91 | With `max_in_flight: all`, no limit on parallelism will be enforced. This would
92 | be typical for when a small, static set of values is specified, and it would be
93 | annoying to keep the number in sync with the set:
94 | 
95 | ```yaml
96 | task: unit
97 | vars: {go-version: ((.:go-version))}
98 | across:
99 | - var: go-version
100 |   values: [1.12, 1.13]
101 |   max_in_flight: all
102 | ```
103 | 
104 | With `max_in_flight: 3`, a maximum of 3 var values would be executed in
105 | parallel. This would typically be set for values coming from a var source,
106 | which may change at any time, or for especially large static sets of values.
107 | 
108 | ```yaml
109 | set_pipeline: pr
110 | instance_vars: {pr_number: ((.:pr.number))}
111 | across:
112 | - var: pr
113 |   source: booklit-prs
114 |   max_in_flight: 3
115 | ```
116 | 
117 | When multiple `max_in_flight` values are configured, they are multiplicative,
118 | building on the concurrency of previously listed vars:
119 | 
120 | ```yaml
121 | set_pipeline: pr
122 | instance_vars:
123 |   pr_number: ((.:pr.number))
124 |   go_version: ((.:go_version))
125 | across:
126 | - var: pr
127 |   source: booklit-prs
128 |   max_in_flight: 3
129 | - var: go_version
130 |   values: [1.12, 1.13]
131 |   max_in_flight: all
132 | ```
133 | 
134 | This will run 6 `set_pipeline` steps at a time, focusing on 3 PRs and setting
135 | Go 1.12 and Go 1.13 pipelines for each in parallel.
136 | 
137 | Note that setting a `max_in_flight` on a single `var` while leaving the rest as
138 | their default (`1`) effectively sets an overall max-in-flight.
139 | 
140 | ### Triggering on changes
141 | 
142 | With `trigger: true` configured on a var, the build plan will run on any change
143 | to the set of vars - i.e. when a var value is added, removed, or changes.
144 | 
145 | ```yaml
146 | var_sources:
147 | - name: booklit-prs
148 |   type: github-prs
149 |   config:
150 |     repository: vito/booklit
151 |     access_token: # ...
152 | 
153 | plan:
154 | - set_pipeline: pr
155 |   instance_vars: {pr_number: ((.:pr.number))}
156 |   across:
157 |   - var: pr
158 |     source: booklit-prs
159 |     trigger: true
160 | ```
161 | 
162 | Note that this can be applied to either static `values:` or dynamic vars from
163 | `source:` - both cases just boil down to a comparison against the previous
164 | build's set of values.
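For the static case, a minimal sketch: adding a new entry (say, `1.14`) to the `values` below counts as a change to the set and would trigger a new build:

```yaml
task: unit
vars: {go_version: ((.:go_version))}
across:
- var: go_version
  values: [1.12, 1.13]
  trigger: true
```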
165 | 
166 | ### Modifier syntax precedence
167 | 
168 | The `across` step is a *modifier*, meaning it is attached to another step.
169 | Other examples of modifiers are `timeout`, `attempts`, `ensure`, and the `on_*`
170 | family of hooks.
171 | 
172 | In terms of precedence, `across` would bind more tightly than `ensure` and
173 | `on_*` hooks, but less tightly than `attempts` and `timeout`.
174 | 
175 | `ensure` and `on_*` bind to the `across` step so that they may be run after the
176 | full matrix completes.
177 | 
178 | `attempts` binds to the inner step because it doesn't seem to make a whole lot
179 | of sense to retry the *entire matrix* because one step failed. The individual
180 | steps should be retried instead.
181 | 
182 | `timeout` binds to the inner step because otherwise a `max_in_flight` could
183 | cause the timeout to be exceeded before some steps even get a chance to run.
184 | 
185 | ```yaml
186 | task: unit
187 | timeout: 1h # interrupt the task after 1 hour
188 | attempts: 3 # attempt the task 3 times
189 | across:
190 | - var: go_version
191 |   values: [1.12, 1.13]
192 | on_failure: # do something after all steps complete and at least one failed
193 | ```
194 | 
195 | To apply `ensure` and `on_*` hooks to the nested step, rather than the `across`
196 | step modifier, the `do:` step may be utilized:
197 | 
198 | ```yaml
199 | do:
200 | - task: unit
201 |   on_failure: # runs after each individual step completes and fails
202 | across:
203 | - var: go_version
204 |   values: [1.12, 1.13]
205 | on_failure: # runs after all steps complete and at least one failed
206 | ```
207 | 
208 | This can be rewritten in a slightly more readable syntax by placing the `do:`
209 | below the `across:`:
210 | 
211 | ```yaml
212 | across:
213 | - var: go_version
214 |   values: [1.12, 1.13]
215 | do:
216 | - task: unit
217 |   on_failure: # runs after each individual step completes and fails
218 | on_failure: # runs after all steps complete and at least one failed
219 | ```
220 | 
221 | ### Failing fast
222 | 
223 | When a step within the matrix fails, the `across` step will continue to run the
224 | remaining steps in the matrix. However, the `across` step itself will still
225 | fail after all steps have completed.
226 | 
227 | With `fail_fast: true` applied to the `across` step, execution will halt on the
228 | first failure, and any currently running steps (via `max_in_flight`) will be
229 | interrupted:
230 | 
231 | ```yaml
232 | task: unit
233 | across:
234 | - var: go_version
235 |   values: [1.12, 1.13]
236 | fail_fast: true
237 | ```
238 | 
239 | Note: this is the first time a step *modifier* has had additional sibling
240 | fields. In the event of a field conflict, the `do:` step may be utilized as a
241 | work-around.
242 | 
243 | ### Var scoping and shadowing
244 | 
245 | The inner step will be run with a local var scope that inherits from the outer
246 | scope and is initialized with the across step's var values.
247 | 
248 | Given that it runs with its own scope, it follows that the vars in the local
249 | scope, along with any vars newly set within the scope, are not accessible
250 | outside of the `across` step.
251 | 
252 | When a var set by the `across` step shadows outer vars, a warning will be
253 | printed.
254 | 
255 | Example:
256 | 
257 | ```yaml
258 | plan:
259 | - get_var: foo
260 |   trigger: true
261 | - run: print
262 |   type: debug
263 |   params: {value: ((.:foo))}
264 | - across:
265 |   - var: foo
266 |     values: [one, two]
267 |   do:
268 |   - run: print
269 |     type: debug
270 |     params: {value: ((.:foo))}
271 | - run: print
272 |   type: debug
273 |   params: {value: ((.:foo))}
274 | ```
275 | 
276 | Assuming the `get_var` step produces a value of `zero`, this build plan should
277 | result in the following output:
278 | 
279 | ```
280 | zero
281 | WARNING: across step shadows local var 'foo'
282 | one
283 | two
284 | zero
285 | ```
286 | 
287 | 
288 | ## Open Questions
289 | 
290 | * n/a
291 | 
292 | 
293 | ## New Implications
294 | 
295 | * Using `across` with var sources implies the addition of a `list` action for
296 |   listing the vars from a var source. We could build on this to show the list
297 |   of available vars in the UI, which would really help with troubleshooting
298 |   credential manager access and knowing what vars are available.
299 | 
300 |   Obviously we wouldn't want to show credential values, so `list` should only
301 |   include safe things like credential paths.
302 | 
303 | * Combining the [`set_pipeline` step][set-pipeline-rfc], [`((var))`
304 |   sources][var-sources-rfc], [instanced pipelines][instanced-pipelines-rfc],
305 |   [pipeline archiving][pipeline-archiving-rfc], and the `across` step can be
306 |   used for end-to-end automation of pipelines for branches and pull requests:
307 | 
308 |   ```yaml
309 |   set_pipeline: pr
310 |   instance_vars: {pr_number: ((.:pr.number))}
311 |   across:
312 |   - var: pr
313 |     source: prs
314 |     trigger: true
315 |   ```
316 | 
317 | [set-pipeline-rfc]: https://github.com/concourse/rfcs/pull/31
318 | [instanced-pipelines-rfc]: https://github.com/concourse/rfcs/pull/34
319 | [pipeline-archiving-rfc]: https://github.com/concourse/rfcs/pull/33
320 | [var-sources-rfc]: https://github.com/concourse/rfcs/pull/39
321 | [multi-branch-issue]: https://github.com/concourse/concourse/issues/1172
322 | 
--------------------------------------------------------------------------------
/031-set-pipeline-step/proposal.md:
--------------------------------------------------------------------------------
1 | * RFC PR: [concourse/rfcs#31](https://github.com/concourse/rfcs/pull/31)
2 | * Concourse Issue: [concourse/concourse#5814](https://github.com/concourse/concourse/issues/5814)
3 | 
4 | # Summary
5 | 
6 | This RFC proposes a new `set_pipeline` step type for configuring pipelines within a build plan.
7 | 
8 | 
9 | # Motivation
10 | 
11 | ## Short-term motivation
12 | 
13 | Lots of folks are already using the [`concourse-pipeline` resource](https://github.com/concourse/concourse-pipeline-resource), however the resource has two fatal flaws:
14 | 
15 | * Users have to configure a local auth user and pass it to the resource definition.
16 | * The resource is versioned independently of the user's Concourse, meaning the `fly` version won't always be in sync. The resource makes an attempt to resolve this by doing a `sync` after logging in, but this is a pretty clunky problem regardless.
17 | 
18 | If we had native support for a `set_pipeline` step, both of these problems would go away.
19 | 
20 | ## Long-term motivation
21 | 
22 | By having a `set_pipeline` step in the build plan, we can start to improve Concourse's story around automating the full CI stack for projects of all sizes. Users can start to trust that pipelines are always configured via CI, and they can go over the build history to see who changed what and when.
23 | 
24 | Later RFCs (namely, 'projects' and 'instanced pipelines') will build on this idea to provide a truly continuous workflow for automating pipelines - including their automatic archival when they're no longer needed.
25 | 
26 | 
27 | # Proposal
28 | 
29 | Using the step would look something like this:
30 | 
31 | ```yaml
32 | plan:
33 | - get: ci
34 | - set_pipeline: concourse
35 |   file: ci/pipelines/concourse.yml
36 | ```
37 | 
38 | The `x` in `set_pipeline: x` is the pipeline name, and `file:` would be used to specify the pipeline config.
39 | 
40 | The pipeline would be configured within whichever team the build execution belongs to.
41 | 
42 | Upon first configuration the pipeline will be automatically unpaused, as opposed to `fly set-pipeline`, which puts newly configured pipelines in a paused state by default. The assumption here is that if you're automating `set_pipeline` you're not just kicking the tires, and you can probably trust that the pipelines you're configuring are correct, at least enough to have made it into version control.
43 | 
44 | When configuring an existing pipeline, however, the pipeline's paused status will not be changed. In other words, the `set_pipeline` step will leave already-existing paused pipelines in the paused state. The assumption here is that the pipeline has been manually paused by a pipeline operator, possibly in response to an emergent situation, and it should be left alone.
45 | 
46 | ## `((vars))` support
47 | 
48 | Additionally, we should support `vars` (as in `fly set-pipeline -y`) and `var_files` (i.e. `fly set-pipeline -l`):
49 | 
50 | ```yaml
51 | plan:
52 | - get: ci
53 | - set_pipeline: release
54 |   file: ci/pipelines/release.yml
55 |   vars: {release_version: 5.3}
56 |   var_files:
57 |   - ci/pipelines/vars/foo.yml
58 | ```
59 | 
60 | ## Preventing manual updates
61 | 
62 | When using `fly set-pipeline` to update a pipeline that has been configured
63 | through the `set_pipeline` step, a warning will be printed and a confirmation
64 | dialogue will be presented.
65 | 
66 | When configured through `fly set-pipeline` thereafter, warnings will no
67 | longer be issued.
68 | 
69 | This is to prevent accidentally configuring changes that will be blown away,
70 | while still allowing pipeline operators to take over its configuration if
71 | needed.
72 | 
73 | 
74 | # Experiments
75 | 
76 | There are a few extended pieces of functionality that have been proposed. There
77 | is currently no consensus on these being the ideal long-term design, because
78 | there are alternative methods we're planning that should make them unnecessary.
79 | 
80 | However, there is value in supporting them "until we get there." We can
81 | implement support for them, and include a warning both in their usage and in
82 | the documentation that they may be removed in the future.
83 | 
84 | Each experiment must have an easy-to-find GitHub Discussion so that we can
85 | collect feedback on how the feature is used and confirm that the long-term
86 | design addresses the core need appropriately.
87 | 
88 | ## `set_pipeline: self`
89 | 
90 | * PR: [#4857](https://github.com/concourse/concourse/pull/4857)
91 | * Feedback: [#5732](https://github.com/concourse/concourse/discussions/5732)
92 | 
93 | Currently, the `foo` in `set_pipeline: foo` is the name of a pipeline to set. A
94 | pipeline could technically update itself by configuring its own name in the
95 | step, but pipeline configs aren't meant to contain their own name, as doing so
96 | prevents the config from being re-used as a 'pipeline template'. You could of
97 | course turn this into a var, but that's a little clunky to use.
98 | 
99 | To support self-updating pipelines without making them self-aware, we can allow
100 | the keyword `self` to mean the current pipeline. There is precedent for such a
101 | keyword in other fields like `version: every`, `version: latest`, `inputs:
102 | all`, and `inputs: detect`.
103 | 
104 | One downside of this approach is that it doesn't cover the full lifecycle of the
105 | pipeline: who set it initially, so that the `set_pipeline: self` step can even
106 | run?
107 | 
108 | This is a question that will likely be answered by the [Projects
109 | concept][projects-rfc] once it's introduced. Projects are designed to be the
110 | authoritative source for pipeline configuration, covering both the initial
111 | creation and the later updating of all pipelines contained therein.
112 | 
113 | As such, it will be a little odd to support both `set_pipeline: self` and
114 | Projects side-by-side. But until Projects lands, there is benefit in allowing
115 | it so that we can confirm that Projects covers all the use cases for it by
116 | analyzing user feedback.
117 | 
118 | ## Setting pipelines in other teams
119 | 
120 | * PR: [#5729](https://github.com/concourse/concourse/pull/5729)
121 | * Feedback: [#5731](https://github.com/concourse/concourse/discussions/5731)
122 | 
123 | The `set_pipeline` step is designed to be a "piece of the puzzle" - just like
124 | other steps like `get`, `put`, and `task`.
125 | 
126 | It is designed to operate against *one* pipeline, in the current team, and in
127 | the current Concourse cluster. This is in contrast to the
128 | [`concourse-pipeline` resource][concourse-pipeline-resource], which supports
129 | setting *many* pipelines across *many* teams within *any* Concourse cluster.
130 | 
131 | This step is not intended to be a drop-in replacement for the
132 | `concourse-pipeline` resource, but it *is* a goal to deprecate it. However,
133 | full deprecation is blocked on further development around the [Projects
134 | concept][projects-rfc] or other ideas that lead towards auto-configuring the
135 | full Concourse cluster.
136 | 
137 | The `concourse-pipeline` resource places a significant enough burden on
138 | maintainers and users that it is probably wise to expedite its deprecation
139 | without waiting on these farther-off goals. To this end, we can
140 | experimentally support setting pipelines in other teams by configuring a
141 | `team:` field on the step:
142 | 
143 | ```yml
144 | set_pipeline: foo
145 | team: bar
146 | file: ci/foo.yml
147 | ```
148 | 
149 | This must only work if the step is being run by an admin team (i.e. `main`),
150 | making its usage somewhat limited. Once a more suitable replacement arrives,
151 | this field can be removed.
152 | 
153 | 
154 | # Open Questions
155 | 
156 | n/a
157 | 
158 | 
159 | # Answered Questions
160 | 
161 | * > Should we support glob expansion in `var_files`?
162 |   >
163 |   > The `concourse-pipeline` resource supports this by just performing glob
164 |   > expansion against its local filesystem. For the `set_pipeline` step, this is
165 |   > a bit more challenging - there *is* no local filesystem. Would we have to
166 |   > implement glob expansion in the Baggageclaim API or something? How easily
167 |   > would this translate to other runtimes?
168 | 
169 |   This is a question we'll probably have to answer for various different
170 |   steps, so it should probably be addressed outside of this RFC.
171 | 
172 | 
173 | # New Implications
174 | 
175 | ## Deprecating `concourse-pipeline` resource
176 | 
177 | Deprecating the `concourse-pipeline` resource should be the primary goal.
178 | 
179 | Some of the extended functionality of the resource will not be supported in the name of keeping the `set_pipeline` step design simple and easy to reason about.
180 | 
181 | For example, the step should only ever configure one pipeline at a time - it should not support the `pipelines:` functionality for configuring a bunch at once.
182 | 
183 | Similarly, the step should not support fully dynamic configuration (`pipelines_file:`).
184 | 
185 | 
186 | [concourse-pipeline-resource]: https://github.com/concourse/concourse-pipeline-resource
187 | [projects-rfc]: https://github.com/concourse/rfcs/pull/32
188 | 
--------------------------------------------------------------------------------
/033-archiving-pipelines/proposal.md:
--------------------------------------------------------------------------------
1 | * RFC PR: [concourse/rfcs#33](https://github.com/concourse/rfcs/pull/33)
2 | * Concourse Issue: [concourse/concourse#5807](https://github.com/concourse/concourse/issues/5807)
3 | 
4 | # Summary
5 | 
6 | This proposal outlines a relatively straightforward 'archiving' operation which can be used to soft-delete a pipeline while preserving its data for later perusal.
7 | 
8 | # Proposal
9 | 
10 | Pipelines can be archived by a user through the `fly` CLI:
11 | 
12 | ```sh
13 | $ fly -t ci archive-pipeline -p pipeline-name
14 | pipeline 'pipeline-name' archived
15 | ```
16 | 
17 | Archived pipelines inherit the behavior of paused pipelines, with a few differences outlined below. As with paused pipelines, no resource checking or job scheduling is performed. Build logs are kept, but remain subject to the configured build log retention policy.
18 | 
19 | Archived pipelines remain viewable in the web UI and `fly`, but they are grouped into a separate section, hidden by default.
20 | 
21 | Unlike paused pipelines, archived pipelines will have their configuration stripped out so that sensitive information isn't stored forever.
22 | 
23 | Unlike paused pipelines, new builds cannot be created for archived pipelines. This is outlined in the API differences below, and enforced by the removal of their configuration.
24 | 
25 | Archived pipeline names exist in the same namespace as unarchived pipelines. Configuring a new pipeline with the same name as an archived pipeline un-archives the pipeline and gives it a new configuration. See [Un-archiving](#un-archiving).
26 | 
27 | ## API implications
28 | 
29 | Archived pipelines become read-only, to some extent. API operations that occur within the pipeline, such as triggering jobs and pinning/un-pinning resources, will be rejected. API operations to the pipeline itself behave as follows:
30 | 
31 | * All pipeline API objects will include a new field, `"archived": true/false`.
32 | * `SaveConfig` (i.e. `fly set-pipeline`, `set_pipeline` step) will work. See [Un-archiving](#un-archiving).
33 | * `ListAllPipelines`, `ListPipelines`, and `GetPipeline` will continue to return archived pipelines. Hiding archived pipelines is the job of the web UI and `fly`, not the API.
34 | * `DeletePipeline` will work, so that an archived pipeline can be removed for good.
35 | * `OrderPipelines` will work. The ordering of a pipeline is pretty unimpactful and users may want to order their archived pipelines too.
36 | * `PausePipeline` will reject the request; archived pipelines are permanently paused.
37 | * `UnpausePipeline` will reject the request; archived pipelines are permanently paused.
38 | * `ExposePipeline` and `HidePipeline` will still work.
39 | * `RenamePipeline` will work; this way an archived pipeline can be named something else so that its original name can be used for unrelated pipelines.
40 | * `GetConfig` will 404; when a pipeline is archived its config is removed to avoid leaking sensitive information. Any other read operation scoped to the pipeline should work.
41 | * `CreateJobBuild` will error; archived pipelines must consume no scheduling resources, not even the build starter.
42 | * `CheckResource` and `CheckResourceType` will error; archived pipelines must consume no scheduling resources, including queuing checks.
43 | * `PinResourceVersion`, `UnpinResource`, `EnableResourceVersion`, `DisableResourceVersion`, and `SetPinCommentOnResource` will error; archived pipelines are read-only. Assume that pins, comments, and enabled/disabled versions persist when a pipeline is unarchived.
44 | 
45 | ## Automatic archiving
46 | 
47 | Pipelines configured with the [`set_pipeline` step](https://github.com/concourse/rfcs/pull/31) gain additional semantics with regard to archiving.
48 | 
49 | Given a job with the following build plan:
50 | 
51 | ```yaml
52 | plan:
53 | - set_pipeline: master
54 | - set_pipeline: release-5.0.x
55 | ```
56 | 
57 | When this runs, two pipelines will be created/updated: `master` and `release-5.0.x`.
58 | 
59 | If a `set_pipeline` step is removed, like so:
60 | 
61 | ```yaml
62 | plan:
63 | - set_pipeline: master
64 | ```
65 | 
66 | When this runs and the build completes, Concourse will notice that `release-5.0.x` is no longer configured and automatically archive it.
67 | 
68 | This can be done by keeping track of which job created a pipeline, and which pipelines were produced by each build. When a build completes, Concourse will compare the set of pipelines produced by the build to the set of pipelines associated to its job overall and archive any pipelines not present in the build's set. Alternatively, the archiving could be done through a pipeline garbage-collector; there is no guarantee of immediacy.
69 | 
70 | ## Un-archiving
71 | 
72 | A pipeline will become un-archived when it is set once again, either through `fly set-pipeline` or through the `set_pipeline` step. This ensures that there is a valid configuration when unarchiving.
73 | 
74 | Note that when a pipeline is un-archived through `fly set-pipeline`, it will be paused, but if a pipeline is un-archived through the `set_pipeline` step, it will be unpaused. This is the same behavior as with a newly configured pipeline.
75 | 
76 | Because a pipeline becomes un-archived and re-configured in one fell swoop, it's possible that a user may unknowingly "reclaim" an old, archived, unrelated pipeline when really they just want to use the name again for a different pipeline.
77 | 
78 | To make the behavior explicit, a prompt will be added to `fly set-pipeline` when it detects that the user is configuring an existing but archived pipeline. This is easy to detect, because `fly set-pipeline` already fetches the existing pipeline for diffing purposes. If `fly set-pipeline` is run with `--non-interactive`, the pipeline will be configured and unarchived without a prompt.
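For illustration, such a session might look something like this (the prompt wording here is hypothetical; the exact UX is not part of this proposal):

```sh
$ fly -t ci set-pipeline -p old-name -c pipeline.yml
# ...config diff output...
pipeline 'old-name' is archived. applying this configuration will un-archive it.
apply configuration? [yN]: y
configuration updated
```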
79 | 
80 | The `set_pipeline` step's behavior will be consistent with `fly set-pipeline --non-interactive` as long as the archived pipeline was originally configured by the same job. This way things will "just work" in the happy path of commenting-out and then uncommenting a `set_pipeline` step. If the `set_pipeline` step notices that it's configuring an archived pipeline configured by a *different* job, or by no job, it will fail. The user will have to either rename or destroy the archived pipeline.
81 | 
82 | 
83 | # Open Questions
84 | 
85 | n/a
86 | 
--------------------------------------------------------------------------------
/034-instanced-pipelines/proposal.md:
--------------------------------------------------------------------------------
1 | * RFC PR: [concourse/rfcs#34](https://github.com/concourse/rfcs/pull/34)
2 | * Concourse Issue: [concourse/concourse#5808](https://github.com/concourse/concourse/issues/5808)
3 | 
4 | # Summary
5 | 
6 | Instanced pipelines group together pipelines which share a common template, configured with different ((vars)). They provide a simple two-level hierarchy and automatic archiving of instances which are no longer needed.
7 | 
8 | # Proposal
9 | 
10 | Pipelines can be configured with 'instance vars' like so:
11 | 
12 | ```sh
13 | fly set-pipeline -p branch --instance-var branch=feature/foo
14 | ```
15 | 
16 | These may also be specified on the [`set_pipeline` step][set-pipeline-rfc] like so:
17 | 
18 | ```yaml
19 | set_pipeline: branch
20 | instance_vars: {branch: feature/foo}
21 | ```
22 | 
23 | Both of the above examples will configure a `branch` pipeline, with the `((branch))` var set to `"feature/foo"`.
24 | 
25 | Instance vars are used as part of the pipeline identifier in the UI and API. There can be multiple instances of a pipeline with the same name:
26 | 
27 | ```sh
28 | fly set-pipeline -p branch --instance-var branch=feature/foo
29 | fly set-pipeline -p branch --instance-var branch=feature/bar
30 | ```
31 | 
32 | Instanced pipelines sharing the same name will be grouped together in the web UI.
33 | 
34 | An individual instance of a pipeline can be manually destroyed, paused, and archived ([RFC #33](https://github.com/concourse/rfcs/pull/33)):
35 | 
36 | ```sh
37 | fly destroy-pipeline -p branch -i branch:feature/foo
38 | fly pause-pipeline -p branch -i branch:feature/foo
39 | fly archive-pipeline -p branch -i branch:feature/foo
40 | ```
41 | 
42 | (Side note: `:` vs. `=` is a little weird, but it's consistent with `fly check-resource` - we use `=` for assignment and `:` for partial filtering.)
43 | 
44 | ## Automatic archival
45 | 
46 | Instanced pipelines build on the automatic pipeline archiving introduced in [RFC #33](https://github.com/concourse/rfcs/pull/33). Individual pipeline instances that are no longer configured will be automatically archived in the same way that normal pipelines would be.
47 | 
48 | For example, say I have a job whose build plan configures a pipeline instance for each supported version:
49 | 
50 | ```yaml
51 | plan:
52 | - get: ci
53 | - set_pipeline: release
54 |   file: ci/pipelines/release.yml
55 |   instance_vars:
56 |     version: 5.3
57 | - set_pipeline: release
58 |   file: ci/pipelines/release.yml
59 |   instance_vars:
60 |     version: 5.2
61 | ```
62 | 
63 | Let's say I ship a `5.4` version, and my policy is to only support the last 2 versions. I would update the config like so:
64 | 
65 | 
66 | ```yaml
67 | plan:
68 | - get: ci
69 | - set_pipeline: release
70 |   file: ci/pipelines/release.yml
71 |   instance_vars:
72 |     version: 5.4
73 | - set_pipeline: release
74 |   file: ci/pipelines/release.yml
75 |   instance_vars:
76 |     version: 5.3
77 | ```
78 | 
79 | When this build runs, the `version: 5.2` instance will be automatically archived.
80 | 
81 | 
82 | # New Implications
83 | 
84 | This functionality will become more and more useful as we expand Concourse's vocabulary to support pipeline automation. Spatial resources ([RFC #29](https://github.com/concourse/rfcs/pull/29)), for example, can be used to automatically configure a pipeline for each branch or PR. When the branch or PR goes away, its pipeline instance will be archived automatically.
85 | 
86 | 
87 | [set-pipeline-rfc]: https://github.com/concourse/rfcs/pull/31
88 | 
--------------------------------------------------------------------------------
/037-prototypes/proposal.md:
--------------------------------------------------------------------------------
1 | # Concourse Prototype Protocol
2 | 
3 | This proposal outlines a small protocol for invoking conceptually related
4 | commands in a container image with a JSON-based request-response. This protocol
5 | is to be used as a foundation for both a new resource type interface and a
6 | way to package and reuse arbitrary functionality (i.e. tasks).
7 | 
8 | The protocol is versioned, starting at `1.0`.
9 | 
10 | An implementation of this protocol is called a **prototype**. ([Why
11 | 'prototype'?](#etymology)) Conceptually, a prototype handles **messages** sent
12 | to **objects** which are provided by the user as configuration or produced by
13 | the prototype itself in response to a prior message.
14 | 
15 | For example, a `git` prototype may handle a `check` message against a
16 | repository object to produce commit objects, which can then handle a `get`
17 | message to fetch the commit.
18 | 
19 | An object's supported messages are discovered through an [Info
20 | request](#prototype-info). For example, the `git` prototype may support `get`
21 | against a commit object, but not against a repository object. The Info request
22 | is also a good place to perform validation.
23 | 
24 | A message is handled by invoking its corresponding command in the container
25 | with a [Message request](#prototype-message). A message handler responds by
26 | emitting objects and accompanying metadata. These objects are written to a
27 | response path specified by the request.
28 | 
29 | While a prototype can handle any number of messages with any name, certain
30 | message names will have semantic meaning in a Concourse pipeline. For example,
31 | a prototype which supports `check` and `get` messages can be used as a resource
32 | in a pipeline. These messages will be sent by the Concourse pipeline scheduler
33 | if a user configures the prototype and config as a resource.
34 | 
35 | 
36 | ## Previous Discussions
37 | 
38 | * [RFC #24][rfc-24] was a previous iteration of this proposal. It had a fixed
39 |   set of "actions" (`check`, `get`, `put`, `delete`) and was still centered in
40 |   the 'resources' concept and terminology. This RFC generalizes the design
41 |   further, switches to prototype-based terminology, and allows prototypes to
42 |   handle arbitrary messages.
43 | 
44 | * [RFC #1][rfc-1], now defunct, is similar to this proposal but had a concept
45 |   of "spaces" baked into the interface. This concept has been decoupled from
46 |   the resource interface and will be re-introduced in a later RFC that is
47 |   compatible with v1 and v2 resources.
48 |   * **Recommended reading**: [this comment][rfc-1-comment] outlines the thought
49 |     process that led to this RFC.
50 | 
51 | * [concourse/concourse#534](https://github.com/concourse/concourse/issues/534)
52 |   was the first 'new resource interface' proposal which pre-dated the RFC
53 |   process.
54 | 
55 | 
56 | ## Motivation
57 | 
58 | * Support for 'reusable tasks' in order to stop the shoehorning of
59 |   functionality into the resource interface:
60 |   [concourse/rfcs#7](https://github.com/concourse/rfcs/issues/7).
61 | 
62 | * Support for creating multiple versions from `put`:
63 |   [concourse/concourse#2660](https://github.com/concourse/concourse/issues/2660)
64 | 
65 | * Support for deleting versions:
66 |   [concourse/concourse#362](https://github.com/concourse/concourse/issues/362),
67 |   [concourse/concourse#524](https://github.com/concourse/concourse/issues/524)
68 | 
69 | * Having resource metadata immediately available via check:
70 |   [concourse/git-resource#193](https://github.com/concourse/git-resource/issues/193),
71 |   [concourse/concourse#1714](https://github.com/concourse/concourse/issues/1714)
72 | 
73 | * Unifying `source` and `params` as just `config` so that resources don't have
74 |   to care where configuration is being set in pipelines:
75 |   [concourse/git-resource#172](https://github.com/concourse/git-resource/pull/172),
76 |   [concourse/bosh-deployment-resource#13](https://github.com/concourse/bosh-deployment-resource/issues/13),
77 |   [concourse/bosh-deployment-resource#6](https://github.com/concourse/bosh-deployment-resource/issues/6),
78 |   [concourse/cf-resource#20](https://github.com/concourse/cf-resource/pull/20),
79 |   [concourse/cf-resource#25](https://github.com/concourse/cf-resource/pull/25),
80 |   [concourse/git-resource#210](https://github.com/concourse/git-resource/pull/210)
81 | 
82 | * Make resource actions reentrant so that we no longer receive `unexpected EOF`
83 |   errors when reattaching to an in-flight build whose resource action completed
84 |   while we weren't attached:
85 |   [concourse/concourse#1580](https://github.com/concourse/concourse/issues/1580)
86 | 
87 | * Support for showing icons for resources in the web UI:
88 |   [concourse/concourse#788](https://github.com/concourse/concourse/issues/788),
89 |   [concourse/concourse#3220](https://github.com/concourse/concourse/pull/3220),
90 |   [concourse/concourse#3581](https://github.com/concourse/concourse/pull/3581)
91 | 
92 | * Standardize TLS configuration so that every resource doesn't implement its own
93 |   way: [concourse/rfcs#9](https://github.com/concourse/rfcs/issues/9)
94 | 
95 | 
96 | ## Glossary
97 | 
98 | * **Prototype**: an implementation of the interface defined by this proposal,
99 |   typically packaged as an OCI container image.
100 | 
101 |   Examples:
102 | 
103 |   * a `git` prototype for interacting with commits and branches in a repo.
104 |   * a `time` prototype for performing interval triggers.
105 | 
106 | * **Message**: an operation to perform against the object. Corresponds to a
107 |   command to run in the container image.
108 | 
109 |   Examples:
110 | 
111 |   * `check`
112 |   * `get`
113 | 
114 | * **Object**: a logical entity, encoded as a JSON object, acted on and emitted
115 |   by a prototype in response to messages.
116 | 
117 |   Examples:
118 | 
119 |   * `{"uri":"https://github.com/concourse/rfcs"}`
120 |   * `{"ref":"e4be0b367d7bd34580f4842dd09e7b59b6097b25"}`
121 | 
122 | * **Metadata**: additional information about an object to surface to users.
123 | 
124 |   Examples:
125 | 
126 |   * `[{"name":"committer","value":"Alex Suraci"}]`
127 | 
128 | * **Bits**: a directory containing arbitrary data.
129 | 
130 |   Examples:
131 | 
132 |   * source code checked out from a repo
133 |   * compiled artifacts created by a task or another prototype
134 | 
135 | * **Resource**: a **prototype** which implements the following messages:
136 | 
137 |   * `check`: discover objects representing versions, in order
138 |   * `get`: fetch an object version
139 | 
140 | 
141 | ## Interface Types
142 | 
143 | ```go
144 | // Object is an object receiving messages and being emitted in response to
145 | // messages.
146 | type Object map[string]interface{}
147 | 
148 | // InfoRequest is the payload written to stdin for the `./info` script.
149 | type InfoRequest struct {
150 |   // The object to act on.
151 |   Object Object `json:"object"`
152 | 
153 |   // Path to a file into which the prototype must write its InfoResponse.
154 |   ResponsePath string `json:"response_path"`
155 | }
156 | 
157 | // InfoResponse is the payload written to the `response_path` in response to an
158 | // InfoRequest.
159 | type InfoResponse struct {
160 |   // The version of the prototype interface that this prototype conforms to.
161 |   InterfaceVersion string `json:"interface_version"`
162 | 
163 |   // An optional icon to show to the user.
164 |   //
165 |   // Icons must be namespaced in order to explicitly reference an icon set
166 |   // supported by Concourse, e.g. 'mdi:' for Material Design Icons.
167 |   Icon string `json:"icon,omitempty"`
168 | 
169 |   // The messages supported by the object.
170 |   Messages []string `json:"messages,omitempty"`
171 | }
172 | 
173 | // MessageRequest is the payload written to stdin for a message.
174 | type MessageRequest struct {
175 |   // The object to act on.
176 |   Object Object `json:"object"`
177 | 
178 |   // Configuration for establishing TLS connections.
179 |   TLS TLSConfig `json:"tls,omitempty"`
180 | 
181 |   // A base64-encoded 32-byte encryption key for use with AES-GCM.
182 |   Encryption EncryptionConfig `json:"encryption,omitempty"`
183 | 
184 |   // Path to a file into which the message handler must write its MessageResponses.
185 |   ResponsePath string `json:"response_path"`
186 | }
187 | 
188 | // MessageResponse is written to the `response_path` for each object returned
189 | // by the message. Multiple responses may be written to the same file,
190 | // concatenated as a JSON stream.
191 | type MessageResponse struct {
192 |   // The object.
193 |   Object Object `json:"object"`
194 | 
195 |   // Encrypted fields of the object.
196 |   Encrypted EncryptedObject `json:"encrypted"`
197 | 
198 |   // Metadata to associate with the object. Shown to the user.
199 |   Metadata []MetadataField `json:"metadata,omitempty"`
200 | }
201 | 
202 | // TLSConfig captures common configuration for communicating with servers over
203 | // TLS.
204 | type TLSConfig struct {
205 |   // An array of CA certificates to trust.
206 |   CACerts []string `json:"ca_certs,omitempty"`
207 | 
208 |   // Skip certificate verification, effectively making communication insecure.
209 |   SkipVerification bool `json:"skip_verification,omitempty"`
210 | }
211 | 
212 | type EncryptionConfig struct {
213 |   // The encryption algorithm for the prototype to use.
214 |   //
215 |   // This value will be static, and changing it will imply a major bump to the
216 |   // Prototype protocol version. It is included here as a helpful indicator so
217 |   // that prototype authors don't have to guess at the payload.
218 |   Algorithm string `json:"algorithm"`
219 | 
220 |   // A base64-encoded 32-byte key, unique to each message.
221 |   Key []byte `json:"key"`
222 | }
223 | 
224 | // EncryptedObject contains an AES-GCM encrypted JSON payload containing
225 | // additional fields of the object.
226 | type EncryptedObject struct {
227 |   // The base64-encoded encrypted payload.
228 |   Payload []byte `json:"payload"`
229 | 
230 |   // The base64-encoded nonce.
231 |   Nonce []byte `json:"nonce"`
232 | }
233 | 
234 | // MetadataField represents a named bit of metadata associated to an object.
235 | type MetadataField struct {
236 |   Name  string `json:"name"`
237 |   Value string `json:"value"`
238 | }
239 | ```
240 | 
241 | 
242 | ## Prototype Info
243 | 
244 | Prior to sending any message, the default command (i.e.
245 | [`CMD`](https://docs.docker.com/engine/reference/builder/#cmd)) for the image
246 | will be executed with an `InfoRequest` piped to `stdin`. This request contains
247 | an **object**.
248 | 
249 | The command must write an `InfoResponse` to the file path specified by
250 | `response_path` in the `InfoRequest`. This response specifies the prototype
251 | interface version that the prototype conforms to, an optional icon to show in
252 | the UI, and the messages supported by the given object.
253 | 
254 | ### Example **info** request/response
255 | 
256 | Request sent to `stdin`:
257 | 
258 | ```json
259 | {
260 |   "object": {
261 |     "uri": "https://github.com/concourse/concourse"
262 |   },
263 |   "response_path": "../info/response.json"
264 | }
265 | ```
266 | 
267 | Response written to `../info/response.json`:
268 | 
269 | ```json
270 | {
271 |   "interface_version": "1.0",
272 |   "icon": "mdi:github-circle",
273 |   "messages": ["check"]
274 | }
275 | ```
276 | 
277 | 
278 | ## Prototype Messages
279 | 
280 | A message handler is invoked by executing the command named after the message
281 | with a JSON-encoded `MessageRequest` piped to `stdin`. This request contains
282 | the **object** which is conceptually receiving the message.
283 | 
284 | All message commands will be run in a working directory layout containing the
285 | **bits** provided to the message handler, as well as empty directories to which
286 | bits should be written by the message handler.
287 | 
288 | A message handler writes its `MessageResponse`(s) to the file path specified by
289 | `response_path` in the `MessageRequest`. This path may be relative to the
290 | initial working directory.
291 | 
292 | How this response is interpreted depends on the message, but typically there
293 | should be one response for each object affected (`put`, `delete`), discovered
294 | (`check`), or fetched (`get`).
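To make this contract concrete, here is a minimal sketch of a `check`-style
message handler in Go (any language works; the hard-coded refs stand in for
whatever the prototype would actually discover):

```go
package main

import (
	"encoding/json"
	"log"
	"os"
)

// Minimal subset of the interface types defined above.
type MessageRequest struct {
	Object       map[string]interface{} `json:"object"`
	ResponsePath string                 `json:"response_path"`
}

type MessageResponse struct {
	Object map[string]interface{} `json:"object"`
}

func main() {
	// Decode the MessageRequest piped to stdin.
	var req MessageRequest
	if err := json.NewDecoder(os.Stdin).Decode(&req); err != nil {
		log.Fatalf("decode request: %s", err)
	}

	// Open the response file; the path may be relative to the initial
	// working directory.
	responses, err := os.Create(req.ResponsePath)
	if err != nil {
		log.Fatalf("create response file: %s", err)
	}
	defer responses.Close()

	// Emit one MessageResponse per discovered object, concatenated as a
	// JSON stream.
	enc := json.NewEncoder(responses)
	for _, ref := range []string{"e4be0b3", "5a052ba"} {
		if err := enc.Encode(MessageResponse{
			Object: map[string]interface{}{"ref": ref},
		}); err != nil {
			log.Fatalf("encode response: %s", err)
		}
	}
}
```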
295 | 
296 | ### Example **message** request/response
297 | 
298 | Request sent to `stdin`:
299 | 
300 | ```json
301 | {
302 |   "object": {
303 |     "uri": "https://github.com/concourse/rfcs",
304 |     "branch": "master"
305 |   },
306 |   "response_path": "../response/response.json"
307 | }
308 | ```
309 | 
310 | Response written to `../response/response.json`:
311 | 
312 | ```json
313 | {
314 |   "object": {"ref": "e4be0b367d7bd34580f4842dd09e7b59b6097b25"},
315 |   "metadata": [ { "name": "message", "value": "init" } ]
316 | }
317 | {
318 |   "object": {"ref": "5a052ba6438d754f73252283c6b6429f2a74dbff"},
319 |   "metadata": [ { "name": "message", "value": "add not-very-useful-yet readme" } ]
320 | }
321 | {
322 |   "object": {"ref": "2e256c3cb4b077f6fa3c465dd082fa74df8fab0a"},
323 |   "metadata": [ { "name": "message", "value": "start fleshing out RFC process" } ]
324 | }
325 | ```
326 | 
327 | This response would be typical of a `check` that ran against a `git` repository
328 | that had three commits.
329 | 
330 | 
331 | ## Encryption
332 | 
333 | In order to use Prototypes for credential acquisition, there must be a way to
334 | return object attributes which contain sensitive data without writing the data
335 | to disk in plaintext.
336 | 
337 | A Prototype's `MessageRequest` may contain an `EncryptionConfig` which
338 | specifies the encryption algorithm to use and any other necessary data for use
339 | with the algorithm (such as a key).
340 | 
341 | The Prototypes protocol will only support one encryption algorithm at a time,
342 | and if it needs to be changed, this will imply a **major** bump to the protocol
343 | version. This is to encourage phasing out support for no-longer-adequate
344 | security algorithms.
345 | 
346 | The decision as to which algorithm to use for the initial version is currently
347 | an open question, but the most likely candidate right now is AES-GCM, which is
348 | the same algorithm used for database encryption in Concourse. Another candidate
349 | may be NaCl. Note that whichever one we choose must be available in various
350 | languages so that Prototype authors aren't restricted to any particular library
351 | or language.
352 | 
353 | Assuming AES-GCM, the `EncryptionConfig` in the request will include a `key`
354 | field containing a base64-encoded 32-byte key, and a `nonce_size` field
355 | indicating the size of the nonce necessary to encrypt/decrypt:
356 | 
357 | ```json
358 | {
359 |   "object": {
360 |     "uri": "https://vault.example.com",
361 |     "client_token": "01234567889abcdef"
362 |   },
363 |   "encryption": {
364 |     "algorithm": "AES-GCM",
365 |     "key": "aXzsY7eK/Jmn4L36eZSwAisyl6Q4LPFIVSGEE4XH0hA=",
366 |     "nonce_size": 12
367 |   }
368 | }
369 | ```
370 | 
371 | 
372 | It is the responsibility of the Prototype implementation to generate a nonce
373 | value of the specified length.
The Prototype would then marshal a JSON object
374 | containing fields to be encrypted, encrypt it with the key and nonce, and
375 | return the encrypted payload along with the nonce value in an `EncryptedObject`
376 | in the `MessageResponse` - both as base64-encoded values:
377 | 
378 | ```json
379 | {
380 |   "object": {
381 |     "public": "fields"
382 |   },
383 |   "encrypted": {
384 |     "nonce": "6rYKFHXh43khqsVs",
385 |     "payload": "St5pRZumCx75d2x2s3vIjsClUi9DqgnIoG2Slt2RoCvz"
386 |   }
387 | }
388 | ```
389 | 
390 | The encrypted payload above is `{"some":"secret"}`, so the above response
391 | ultimately describes the following object:
392 | 
393 | ```json
394 | {
395 |   "public": "fields",
396 |   "some": "secret"
397 | }
398 | ```
399 | 
400 | ### Rationale for encryption technique
401 | 
402 | A few alternatives to this approach were considered:
403 | 
404 | * Initially, the use of HTTPS was appealing as it would avoid placing data on
405 |   disk entirely.
406 | 
407 |   However this is a much more complicated architecture that raises many more
408 |   questions:
409 | 
410 |   * What are the API endpoints?
411 |   * How are the requests routed?
412 |   * How is TLS configured?
413 |   * How is the response delivered?
414 |     * Regular HTTP response? If the `web` node detaches, how can we re-attach?
415 |     * Callbacks? What happens when the callback endpoint is down?
416 |   * Is it safe to retry requests?
417 | 
418 | * We could write the responses to `tmpfs` to ensure they only ever exist in
419 |   RAM.
420 | 
421 |   The main downside is that operators would have to make sure that swap is
422 |   disabled so that data is never written to disk. This seems like a safe
423 |   assumption for Kubernetes but seems like an easy mistake to make for
424 |   operators managing their own VMs.
425 | 
426 | One advantage of the current approach is that it tells Concourse which fields
427 | can be public and which fields contain sensitive information, while requiring
428 | the sensitive information to be encrypted.
429 | 
430 | While this could be supported either way by specifying a field such as
431 | `"expose": ["public"]`, it seems valuable to force Prototype authors to "do the
432 | right thing" and encrypt the sensitive data, rather than allowing them to
433 | simply hide it from the UI.
434 | 
435 | 
436 | ## Object Cloning
437 | 
438 | Technically, all parts of the protocol have been specified, but there is a
439 | particular usage pattern that is worth outlining here as it relates to the
440 | 'prototype' terminology and may be common across a few use cases for this
441 | protocol.
442 | 
443 | **Objects** are just arbitrary JSON data. This data is consumed by a prototype
444 | action and may be emitted by the prototype in response to a message.
445 | 
446 | One example of a message that will yield new objects is `check` (part of the
447 | [resource interface](https://github.com/concourse/rfcs/pull/26)). This message
448 | will be sent to prototypes that are used as a resource in a Concourse pipeline.
449 | 
450 | Per the resource interface, the `check` message returns an object for each
451 | version. However, these objects aren't self-contained - instead, they are an
452 | implicit *clone* of the original object, with the returned object's fields
453 | merged in.
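In other words, the effective object for each version is computed roughly as
follows (a sketch of the merge semantics; the merging is performed by
Concourse, not by the prototype):

```go
// cloneAndMerge returns a copy of the original object with the emitted
// object's fields merged in; emitted fields win on conflict.
func cloneAndMerge(original, emitted map[string]interface{}) map[string]interface{} {
	merged := make(map[string]interface{}, len(original)+len(emitted))
	for k, v := range original {
		merged[k] = v
	}
	for k, v := range emitted {
		merged[k] = v
	}
	return merged
}
```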
454 | 
455 | For example, the `git` prototype may initially receive the following
456 | user-configured properties in a `check` message:
457 | 
458 | ```json
459 | {
460 |   "object": {
461 |     "uri": "https://github.com/concourse/rfcs",
462 |     "branch": "master"
463 |   }
464 | }
465 | ```
466 | 
467 | Let's say the `check` handler responds with the following objects, representing
468 | two commits in the repository (with `metadata` omitted for brevity):
469 | 
470 | ```json
471 | {
472 |   "object": {
473 |     "ref": "e4be0b367d7bd34580f4842dd09e7b59b6097b25"
474 |   }
475 | }
476 | {
477 |   "object": {
478 |     "ref": "5a052ba6438d754f73252283c6b6429f2a74dbff"
479 |   }
480 | }
481 | ```
482 | 
483 | Per the resource interface, these object properties would be saved as versions
484 | of the resource in the order returned by `check`.
485 | 
486 | When it comes time to fetch a version in a build, the `git` prototype would
487 | receive a `get` message with the following object:
488 | 
489 | ```json
490 | {
491 |   "object": {
492 |     "uri": "https://github.com/concourse/rfcs",
493 |     "branch": "master",
494 |     "ref": "e4be0b367d7bd34580f4842dd09e7b59b6097b25"
495 |   }
496 | }
497 | ```
498 | 
499 | Note that the object properties have been merged into the original set of
500 | properties.
501 | 
502 | ## Pipeline Usage
503 | 
504 | A new `run` step type will be introduced. It takes the name of a message to
505 | send, and a `type` denoting the prototype to use:
506 | 
507 | ```yaml
508 | prototypes:
509 | - name: some-prototype
510 |   type: registry-image
511 |   source:
512 |     repository: vito/some-prototype
513 | 
514 | jobs:
515 | - name: run-prototype
516 |   plan:
517 |   - run: some-message
518 |     type: some-prototype
519 | ```
520 | 
521 | So far this example is not very useful. There aren't many things I can think to
522 | run that don't take any inputs or configuration or produce any outputs. Let's
523 | try giving it an input and collecting its resulting output. This can be done
524 | with `inputs`/`input_mapping` and `outputs`/`output_mapping`:
525 | 
526 | ```yaml
527 | prototypes:
528 | - name: oci-image
529 |   type: registry-image
530 |   source:
531 |     repository: vito/oci-image-prototype
532 | 
533 | resources:
534 | - name: my-image-src
535 |   type: git
536 |   source:
537 |     uri: https://example.com/my-image-src
538 | 
539 | - name: my-image
540 |   type: registry-image
541 |   source:
542 |     repository: example/my-image
543 |     username: some-username
544 |     password: some-password
545 | 
546 | jobs:
547 | - name: build-and-push
548 |   plan:
549 |   - get: my-image-src
550 |   - run: build
551 |     type: oci-image
552 |     input_mapping: {context: my-image-src}
553 |     outputs: [image]
554 |   - put: my-image
555 |     params: {image: image/image.tar}
556 | ```
557 | 
558 | This example also makes use of resources, which are also implemented as
559 | prototypes with special pipeline semantics and step syntax. These
560 | "resource prototypes" are described in [RFC #38][rfc-38].
561 | 
562 | 
563 | ## Out of scope
564 | 
565 | * [Richer metadata](https://github.com/concourse/concourse/issues/310) - this
566 |   hasn't gained much traction and probably needs more investigation before it
567 |   can be incorporated. This should be easy enough to add as a later RFC.
568 | 
569 | 
570 | ## Etymology
571 | 
572 | * The root of the name comes from [Prototype-based
573 |   programming](https://en.wikipedia.org/wiki/Prototype-based_programming), a
574 |   flavor of object-oriented programming, which the concepts introduced in this
575 |   proposal mirror somewhat closely. There are also some hints of actor-based
576 |   programming - actors handling messages and emitting more actors - but that
577 |   felt like more of a stretch.
578 | 
579 |   This is also an opportunity to frame Concourse as "object-oriented CI/CD," if
580 |   we want to.
581 | 
582 | * We've been conflating the terms 'resource' and 'resource type' pretty much
583 |   everywhere, which seems like it would be confusing to newcomers. For example,
584 |   all of our repos are typically named 'X-resource' when really the repo
585 |   provides a resource *type*.
586 | 
587 |   Adopting a new name which still has the word 'type' in it feels like it fixes
588 |   this problem, while preserving 'resource' terminology for its original use
589 |   case (pipeline resources).
590 | 
591 | * It's a pun: 'protocol type' => 'prototype'.
592 | 
593 | * 'Prototype' has a pretty nifty background in aerospace engineering. This is a
594 |   completely different meaning, but we never use it that way, so it doesn't
595 |   seem like there should be much cause for confusion.
596 | 
597 | 
598 | ## Open Questions
599 | 
600 | * Is this terminology unrelatable?
601 | 
602 | 
603 | [rfc-1]: https://github.com/concourse/rfcs/pull/1
604 | [rfc-1-comment]: https://github.com/concourse/rfcs/pull/1#issuecomment-477749314
605 | [rfc-24]: https://github.com/concourse/rfcs/pull/24
606 | [rfc-38]: https://github.com/concourse/rfcs/pull/38
607 | 
--------------------------------------------------------------------------------
/038-resource-prototypes/proposal.md:
--------------------------------------------------------------------------------
1 | * RFC PR: [concourse/rfcs#38](https://github.com/concourse/rfcs/pull/38)
2 | * Concourse Issue: [concourse/concourse#5870](https://github.com/concourse/concourse/issues/5870)
3 | 
4 | # Resource Prototypes
5 | 
6 | A *resource prototype* is an interface implemented by a prototype which allows
7 | it to be used as a *resource* in a pipeline.
8 | 
9 | 
10 | ## Motivation
11 | 
12 | * Support for creating multiple versions from `put`: [concourse/concourse#2660](https://github.com/concourse/concourse/issues/2660)
13 | 
14 | * Support for deleting versions: [concourse/concourse#362](https://github.com/concourse/concourse/issues/362), [concourse/concourse#524](https://github.com/concourse/concourse/issues/524)
15 | 
16 | * Make the `get` after `put` opt-in: [concourse/concourse#3299](https://github.com/concourse/concourse/issues/3299), [concourse/registry-image-resource#16](https://github.com/concourse/registry-image-resource/issues/16)
17 | 
18 | 
19 | ## Proposal
20 | 
21 | A resource prototype implements the following messages:
22 | 
23 | * `check`
24 | * `get`
25 | * `put` (optional)
26 | * `delete` (optional)
27 | 
28 | Their behavior is described as follows.
29 | 
30 | ### `check`: discover new versions of a resource
31 | 
32 | Initially, the `check` handler will be invoked with an object containing only
33 | the fields configured by the resource's `source:` in the pipeline. A
34 | `MessageResponse` must be emitted for *all* versions available in the source,
35 | in chronological order.
36 | 
37 | Subsequent `check` calls will be run with a clone of the configuration object
38 | with the last emitted version object merged in.
The `check` handler must emit 39 | the specified version if it still exists, followed by all versions detected 40 | after it in chronological order. 41 | 42 | If the specified version is no longer present, the `check` handler must instead 43 | emit all available versions, as if the version was not specified. Concourse 44 | will detect this scenario by noticing that the first version emitted does not 45 | match the requested version. The given version, along with any other versions 46 | that are not present in the returned set of versions, will be marked as deleted 47 | and no longer be available. 48 | 49 | The `check` handler can use the **bits** directory to cache state between runs. 50 | On the first run, the directory will be empty. 51 | 52 | There is no `check` step syntax. Concourse will `check` every configured 53 | resource and maintain its version history. 54 | 55 | ### `get`: fetch a version of a resource 56 | 57 | The `get` handler will be invoked with an object specifying the version to 58 | fetch. 59 | 60 | The `get` handler must fetch the resource into an output named `resource` under 61 | the **bits** directory. 62 | 63 | A `MessageResponse` must be emitted for all versions that have been fetched 64 | into the bits directory. Each version will be recorded as an input to the 65 | build. 66 | 67 | When a `get` handler is invoked by a `get` step in a build plan, the output 68 | will be mapped to the resource's name in the pipeline: 69 | 70 | ```yaml 71 | resources: 72 | - name: some-resource 73 | type: git 74 | source: 75 | uri: https://example.com/some-repo 76 | 77 | jobs: 78 | - name: get-and-test 79 | plan: 80 | - get: some-resource 81 | - task: unit 82 | file: some-resource/ci/unit.yml 83 | ``` 84 | 85 | Assuming the latest commit of `some-resource` is `abcdef`, this would be 86 | similar to the following build plan: 87 | 88 | ```yaml 89 | jobs: 90 | - name: get-and-test 91 | plan: 92 | - run: get 93 | type: git 94 | params: 95 | uri: https://example.com/some-repo 96 | ref: abcdef 97 | output_mapping: 98 | resource: some-resource 99 | - task: unit 100 | file: some-resource/ci/unit.yml 101 | ``` 102 | 103 | The bits fetched by a `get` step may be cached so that the same version does 104 | not have to be fetched repeatedly. 105 | 106 | 107 | ### `put`: idempotently create resource versions 108 | 109 | The `put` handler will be invoked with user-provided configuration and 110 | arbitrary bits. 111 | 112 | A `MessageResponse` must be emitted for all versions that have been created/updated. 113 | 114 | When a `put` step is used in a build plan, each version emitted will be 115 | recorded as an output of the build. 116 | 117 | A `get` field may be explicitly added to the step, specifying a name to fetch 118 | the last-emitted version as: 119 | 120 | ```yaml 121 | jobs: 122 | - name: push-pull-resource 123 | plan: 124 | # put to 'my-resource', and then get the last emitted version object as 125 | # 'some-name' for use later in the build plan 126 | - put: my-resource 127 | get: some-name 128 | ``` 129 | 130 | This replaces the "implicit `get` after `put`" behavior, which will remain for 131 | resources provided by `resource_types:` for backwards-compatibility. 132 | 133 | 134 | ### `delete`: idempotently destroy resource versions 135 | 136 | The `delete` handler will be invoked with user-provided configuration and 137 | arbitrary bits. 138 | 139 | A `MessageResponse` must be emitted for all versions that have been destroyed. 
140 | 141 | When a `delete` step is used in a build plan, each version emitted will be 142 | marked as "deleted" and no longer be available for use in other builds. 143 | 144 | ```yaml 145 | jobs: 146 | - name: prune-release-candidates 147 | plan: 148 | - delete: concourse-rc 149 | params: {regexp: concourse-[0-9]+\.[0-9]+\.[0-9]+-rc.[0-9]+.tgz} 150 | ``` 151 | 152 | ## Migration Path 153 | 154 | Concourse pipelines will support `resource_types:` and `prototypes:` 155 | side-by-side. 156 | 157 | Resources backed by a resource type will retain all of today's behavior for 158 | backwards-compatibility. We don't want to force a big-bang migration of every 159 | Concourse user's pipelines. 160 | 161 | Resources backed by a prototype will gain all the behavior described in this 162 | proposal. Pipeline authors are encouraged to transition from resource types to prototypes, which they can do gradually. 163 | 164 | There is no EOL date specified for resource types support. They would likely be 165 | supported for a number of years. 166 | 167 | ## Open Questions 168 | 169 | n/a 170 | 171 | ## Answered Questions 172 | 173 | * [Version filtering is best left to `config`.](https://github.com/concourse/concourse/issues/1176#issuecomment-472111623) 174 | * [Resource-determined triggerability of versions](https://github.com/concourse/rfcs/issues/11) will be addressed by the "trigger resource" RFC. 175 | * Webhooks are left out of this first pass of the interface. I would like to investigate alternative approaches before baking it in. 176 | * For example, could Concourse itself integrate with these services and map webhooks to resource checks intelligently? 177 | -------------------------------------------------------------------------------- /039-var-sources/proposal.md: -------------------------------------------------------------------------------- 1 | * RFC PR: [concourse/rfcs#39](https://github.com/concourse/rfcs/pull/39) 2 | * Concourse Issue: [concourse/concourse#5813](https://github.com/concourse/concourse/issues/5813) 3 | 4 | # Summary 5 | 6 | Introduces `var_sources`, a way for pipelines to configure multiple named 7 | credential managers - and in the future, arbitrary 8 | [Prototype-based][prototypes-rfc] sources of `((vars))`. 9 | 10 | 11 | # Motivation 12 | 13 | Concourse currently supports configuring a single credential manager for the 14 | entire cluster. This is limiting in a number of ways. 15 | 16 | ## Rigid path lookup schemes 17 | 18 | With auth for the credential manager configured at the system-level, each 19 | credential manager has to support some form of multi-tenancy so that team A 20 | cannot access team B's secrets. 21 | 22 | The current strategy is to encode team and pipeline names in the 23 | paths/identifiers for the credentials that are looked up, but this has many 24 | downsides: 25 | 26 | * Naming schemes are a brittle approach to the security of untrusted 27 | multi-tenant installations; it relies on the credential manager's 28 | identifiers to have a valid separator character that is also not allowed by 29 | Concourse team and pipeline names. This makes it impossible to support 30 | certain credential managers, e.g. Azure KeyVault which only allows 31 | `[a-z\-]+`. 32 | 33 | * Forcing team names into the credential identifier makes it impossible to 34 | share credentials between teams. Instead the credential has to be 35 | duplicated under each team's path. This is a shame because credential 36 | managers like Vault have full-fledged support for ACLs. 
37 | 
38 | * With Vault, enforcing a path scheme makes it impossible to use any backend
39 |   except `kv` because different backends can't be mounted under paths managed
40 |   by other backends. This removes a lot of the value of using Vault in the
41 |   first place.
42 | 
43 | ## "There can be only one"
44 | 
45 | Only supporting a single credential manager really limits the possibilities of
46 | using credential managers for specialized use cases.
47 | 
48 | A core tenet of Concourse's "Resources" concept is that their content, i.e.
49 | version history and bits, should be addressable solely by the resource's
50 | configuration. That is, given a resource's `type:` and `source:`, the same
51 | version history will be returned on any Concourse installation, and can
52 | therefore be de-duped and shared across teams within an installation.
53 | 
54 | This tenet forbids relying on worker state for access control within a
55 | resource. Instead, resource types should only use their `source:`.
56 | 
57 | This is problematic for resource types which make use of IAM roles associated
58 | to the worker EC2 instance that they run on in order to authenticate, because
59 | in this case the resource's `source:` does not actually include any
60 | credentials. As a result, we cannot safely enable [global
61 | resources][global-resources-opt-out] by default because these resources would
62 | share version history without even vetting their credentials.
63 | 
64 | To resolve this issue, a var source could be implemented as a
65 | [Prototype][prototypes-rfc] that acquires credentials via EC2 IAM roles and
66 | then provides them to the `source:` configuration for a resource via
67 | `((vars))`. This way the `source:` configuration is still the source of
68 | truth, and we can still support worker-configured credentials.
69 | 
70 | Tying back to this proposal, the above approach would be awkward to implement
71 | as a credential manager. With support for only a single credential manager,
72 | users would have to choose between using a general-purpose credential manager
73 | like Vault vs. a specialized use case such as EC2 IAM roles.
74 | 
75 | If we introduce support for configuring multiple credential managers, and go
76 | beyond that to allowing them to be implemented at runtime via Prototypes, we
77 | can support all kinds of credential acquisition at once.
78 | 
79 | 
80 | # Proposal
81 | 
82 | This proposal introduces a new kind of configuration: var sources.
83 | 
84 | This name "var source" is chosen to build on the existing terminology around
85 | `((vars))` and to directly relate them to one another.
86 | 
87 | Calling them "var sources" instead of "credential managers" also allows them
88 | to be used for things that aren't necessarily credentials. [RFC #27][rfc-27]
89 | introduces a way to trigger a job when a var changes, which can be used for
90 | per-job timed interval triggers. [RFC #29][rfc-29] introduces a way to run a
91 | step "across" all vars, which could be used to e.g. set a pipeline for each
92 | pull request.
93 | 
94 | Var sources are specified at a pipeline-level, like so:
95 | 
96 | ```yaml
97 | var_sources:
98 | - name: vault
99 |   type: vault
100 |   config:
101 |     uri: https://vault.example.com
102 |     # ... vault-specific config including auth/etc ...
103 | 
104 | resources: # ...
105 | 
106 | jobs: # ...
107 | ```
108 | 
109 | Each var source has a `name` which must be a [valid
110 | identifier][valid-identifier-rfc].
This is used to explicitly reference the 111 | source from `((vars))` syntax so that there is no ambiguity. See 112 | [`VAR_SOURCE_NAME`](#VAR_SOURCE_NAME). 113 | 114 | Currently, a var source's `type` specifies one of the supported credential 115 | managers, e.g. `vault`, `credhub`, or `kubernetes`. In the future, this will 116 | refer to a [Prototype][prototypes-rfc]. 117 | 118 | A var source's `config` is a "black box" to Concourse and is passed verbatim 119 | to the credential manager (or prototype). This configuration should include 120 | any credentials necessary for authenticating with the credential manager. 121 | 122 | A var source's `config` may use `((vars))` to obtain its own credentials, 123 | either using static templating, the system-level credential manager, or other 124 | var sources (see [Inter-dependent var 125 | sources](#inter-dependent-var-sources)). 126 | 127 | ## `((var))` syntax 128 | 129 | The `((var))` syntax was introduced a long while back and was never formally 130 | specified or documented. This RFC proposes a change to it so now's a good time 131 | to describe a spec. 132 | 133 | The full `((var))` syntax will be 134 | **`((VAR_SOURCE_NAME:SECRET_PATH.SECRET_FIELD))`**. 135 | 136 | * #### `VAR_SOURCE_NAME` 137 | 138 | The optional `VAR_SOURCE_NAME` segment specifies which named entry under 139 | `var_sources` to use for the credential lookup. If omitted (along with the 140 | `:`), the globally configured credential manager is used. 141 | 142 | A `VAR_SOURCE_NAME` must be a [valid identifier][valid-identifier-rfc]. 143 | 144 | * #### `SECRET_PATH` 145 | 146 | The required `SECRET_PATH` segment specifies the secret to be fetched. This can 147 | either be a single word (`foo`) or a path (`foo/bar` or `/foo/bar`), depending 148 | on what lookup schemes are supported by the credential manager. For example, 149 | Vault and CredHub have path semantics whereas Kubernetes and Azure KeyVault 150 | only support simple names. 151 | 152 | For secret paths which contain special characters such as `.` or `:`, 153 | a literal JSON string value can be specified: `((foo:"bar.baz".buzz))`. 154 | 155 | For credential managers which support path-based lookup, a `SECRET_PATH` 156 | without a leading `/` may be queried relative to a predefined set of path 157 | prefixes. This is how the Vault credential manager currently works; `foo` will 158 | be queried under `/concourse/(team name)/(pipeline name)/foo`. See [Path lookup 159 | rules](#path-lookup-rules) for more information. 160 | 161 | * #### `SECRET_FIELD` 162 | 163 | The optional `SECRET_FIELD` specifies a field on the fetched secret to read. If 164 | omitted, the credential manager may choose to read a 'default field' from the 165 | fetched credential, if it exists. For example, the Vault credential manager 166 | will return the value of the `value` field if present. This is useful for 167 | simple single-value credentials. 168 | 169 | ## Credential manager secret lookup rules 170 | 171 | Pipeline-level credential managers differ from globally-configured credential 172 | managers in one key way: they do not have to be limited to a particular path 173 | scheme. 174 | 175 | This means that credentials can be shared between teams, and credential manager 176 | specific settings such as ACLs may be utilized to securely share access to 177 | common credentials. 178 | 179 | Credential managers may still choose to have default path lookup schemes for 180 | convenience. 
This RFC makes no judgment call here because the utility of
181 | such default schemes will vary between credential managers.
182 | 
183 | ## Inter-dependent var sources
184 | 
185 | Var source configuration tends to contain credentials, like so:
186 | 
187 | ```yaml
188 | var_sources:
189 | - name: vault
190 |   type: vault
191 |   config:
192 |     uri: https://vault.concourse-ci.org
193 |     client_token: some-client-token
194 | ```
195 | 
196 | Naturally, `((vars))` would be used here so that the credential isn't
197 | hardcoded into the pipeline:
198 | 
199 | ```yaml
200 | var_sources:
201 | - name: vault
202 |   type: vault
203 |   config:
204 |     uri: https://vault.concourse-ci.org
205 |     client_token: ((vault-client-token))
206 | ```
207 | 
208 | Building on this, a var source could also use another var source in order to
209 | obtain its credentials:
210 | 
211 | ```yaml
212 | var_sources:
213 | - name: k8s
214 |   type: k8s
215 |   config: {in_cluster: true}
216 | - name: vault
217 |   type: vault
218 |   config:
219 |     uri: https://vault.concourse-ci.org
220 |     client_token: ((k8s:vault-client-token))
221 | ```
222 | 
223 | There is precedent for this type of behavior in `resource_types`, where one
224 | type can reference another type for its own `type`.
225 | 
226 | Cycles can be avoided by having a var source 'ignore' itself when resolving its
227 | own config. This is the same way that cycles are handled with `resource_types`.
228 | 
229 | Take the following example:
230 | 
231 | ```yaml
232 | var_sources:
233 | - name: source-1
234 |   type: source-1
235 |   config: {foo: ((source-2:bar))}
236 | - name: source-2
237 |   type: source-2
238 |   config: {foo: ((source-1:bar))}
239 | ```
240 | 
241 | In this setup, rather than going into a loop, both var sources would fail to be
242 | configured. The `source-1` var source would fail because it can't find
243 | `source-1` when trying to resolve the config for `source-2`, and vice-versa.
244 | 
245 | 
246 | # Open Questions
247 | 
248 | n/a
249 | 
250 | 
251 | # Answered Questions
252 | 
253 | * > Assuming `var_sources` can be configured at the [project](https://github.com/concourse/rfcs/pull/32)-level in the future, how should they interact with pipeline-level `var_sources`?
254 | 
255 |   > Should we allow multiple var sources to be configured at the system-level?
256 | 
257 |   Let's avoid these concerns for the first pass as they just raise more questions around named var scoping and they're not proven necessary at the moment.
258 | 
259 | * > What var sources can be used within a var source's ((config))?
260 | 
261 |   See [Inter-dependent var sources](#inter-dependent-var-sources).
262 | 
263 | * > When and how often do we authenticate with each credential manager? If
264 |   > you're using Vault with a periodic token, something will have to continuously
265 |   > renew the token.
266 | 
267 |   The implementation maintains an auth loop for each configured var source,
268 |   anonymously identified by their configuration. Var sources that are not used
269 |   for a certain TTL are closed, terminating their auth loop.
270 | 
271 | 
272 | [global-resources-opt-out]: https://concourse-ci.org/global-resources.html#some-resources-should-opt-out
273 | [issue-3023]: https://github.com/concourse/concourse/issues/3023
274 | [projects-rfc]: https://github.com/concourse/rfcs/pull/32
275 | [valid-identifier-rfc]: https://github.com/concourse/rfcs/pull/40
276 | [prototypes-rfc]: https://github.com/concourse/rfcs/pull/37
277 | [rfc-27]: https://github.com/concourse/rfcs/pull/27
278 | [rfc-29]: https://github.com/concourse/rfcs/pull/29
--------------------------------------------------------------------------------
/040-valid-identifiers/proposal.md:
--------------------------------------------------------------------------------
1 | * Revision: [1.1](#revision-1.1)
2 | * RFC PR: [concourse/rfcs#40](https://github.com/concourse/rfcs/pull/40)
3 | * Concourse Issue: [concourse/concourse#5810](https://github.com/concourse/concourse/issues/5810)
4 | 
5 | # Summary
6 | 
7 | Proposes a fairly limited set of valid characters for use in Concourse
8 | identifiers:
9 | 
10 | * Team names
11 | * Pipeline names
12 | * Job names
13 | * Step names
14 | * [Var source][var-sources-rfc] names
15 | 
16 | 
17 | # Motivation
18 | 
19 | Concourse is currently very permissive with identifiers. This is largely an
20 | oversight, as validation was simply never implemented at the beginning.
21 | 
22 | Compounding on this oversight, users have configured pipelines with all sorts
23 | of special characters, either in the pipeline name itself or in jobs,
24 | resources, and other API objects defined within.
25 | 
26 | Allowing arbitrary symbols makes it difficult for Concourse to support semantic
27 | notation in the CLI and elsewhere in the Concourse UX. For example, the `fly`
28 | CLI uses `PIPELINE/JOB` notation for flags specifying a job, but this would be
29 | ambiguous if your pipeline or job names allowed `/` in their name. For example,
30 | `foo/bar/baz` could either be (`foo/bar`, `baz`) or (`foo`, `bar/baz`).
31 | 
32 | Allowing whitespace and capitalization along with `_` and `-` results in
33 | inconsistent naming conventions between Concourse users. A 'deploy to prod' job
34 | may be called any of the following:
35 | 
36 | * `prod-deploy`
37 | * `deploy to prod`
38 | * `deploy-to-prod`
39 | * `deploy_to_prod`
40 | * `Deploy to prod`
41 | * `deploy to Prod`
42 | * `Deploy to Prod`
43 | 
44 | Permitting so many different naming conventions makes the Concourse UX, which
45 | is largely text-based, feel inconsistent between different projects with
46 | different naming conventions. This inconsistency may seem insignificant to users
47 | who only use Concourse within a single team, but it will become more pronounced
48 | if/when the Concourse project introduces a central place to share re-usable
49 | pipeline templates and other configuration.
50 | 
51 | Allowing spaces also makes it awkward to pass identifiers to the `fly` CLI, as
52 | they would have to be explicitly quoted so they're not parsed as separate
53 | arguments.
54 | 
55 | 
56 | # Proposal
57 | 
58 | The success of tools like `go fmt` has shown that engineers value global
59 | consistency and reduction of petty debates over personal stylistic preferences.
60 | 
61 | In the spirit of consistency and simplicity, this proposal is to dramatically
62 | reduce the allowed character set for Concourse identifiers.
63 | 
64 | The following characters will be permitted:
65 | 
66 | * Non-uppercase Unicode letters (i.e. lowercase or letter with no uppercase).
67 | * Decimal numbers.
68 | * Hyphens (`-`) and underscores (`_`), as the canonical word separators.
69 | * Periods (`.`), in order to support domain names and version numbers.
70 | 
71 | It's worth noting that both hyphens (`-`) and underscores (`_`) are allowed as
72 | word separators. While this may lead to the kind of fragmentation this proposal
73 | aims to prevent, allowing both is more pragmatic than forbidding either: `-` is
74 | already commonplace, while `_` is more consistent with config params like
75 | `resource_types` and is commonly used with other tools and naming conventions
76 | (e.g. `x86_64`). The first iteration of this proposal did not allow the use of
77 | underscore; see [Revision 1.1](#revision-1.1) for details.
78 | 
79 | All identifiers must start with a valid letter. Allowing digits or symbols at
80 | the beginning would allow for a few confusing situations:
81 | 
82 | * Allowing `-` at the start would make the identifier look like a flag,
83 |   confusing the `fly` CLI parser.
84 | * Allowing `.` at the start would permit strange names like `.` and `..` which
85 |   may look like paths.
86 | * Allowing numbers at the start would make `123` a valid identifier, which
87 |   would parse as a number in YAML instead of a string.
88 | 
89 | With Go's [`re2`](https://github.com/google/re2/wiki/Syntax) syntax, a valid
90 | identifier would be matched by the following regular expression:
91 | 
92 | ```re
93 | ^[\p{Ll}\p{Lt}\p{Lm}\p{Lo}][\p{Ll}\p{Lt}\p{Lm}\p{Lo}\d\-_.]*$
94 | ```
95 | 
96 | ## Renaming existing data
97 | 
98 | All API resources can already be renamed manually:
99 | 
100 | * Pipelines can be renamed with `fly rename-pipeline`.
101 | * Teams can be renamed with `fly rename-team`.
102 | * Jobs can be renamed by updating their `name` and specifying the old name as
103 |   [`old_name`][jobs-old-name]. This will preserve the job's build history.
104 | * Resources can be renamed in the same fashion by setting
105 |   [`old_name`][resources-old-name]. This will preserve the resource's state
106 |   (i.e. disabled versions, pinning).
107 | * Step names can be renamed without any migration necessary.
108 | 
109 | ## Easing the transition
110 | 
111 | Enforcing rules about identifiers is easy. Doing this in a way that doesn't
112 | alienate existing users and their practices is the hard part.
113 | 
114 | Requiring users to perform these manual steps in order to upgrade or
115 | immediately after upgrading would likely slow down adoption. Being unable to
116 | interact with mission-critical pipelines that have now-invalid identifiers
117 | would be a major problem. Users should not be punished for upgrading.
118 | 
119 | To ease this pain, we can allow existing data to stay as-is, and only enforce
120 | the identifier rules for newly created teams and pipelines. Additionally,
121 | these validations can be implemented as warnings for a long period of time so
122 | that users have time to adapt.
123 | 
124 | Existing data will still be fully functional and writable (i.e. updated with
125 | `fly set-pipeline`, `fly set-team`), and Concourse can emit warnings for any
126 | invalid identifiers (including the pipeline/team name itself) instead of
127 | erroring out completely.
128 | 
129 | After one year, we can turn these warnings into errors.
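For illustration, a minimal sketch of validating identifiers in Go using the
regular expression above (emitting warnings rather than errors, per the
transition plan):

```go
package main

import (
	"fmt"
	"regexp"
)

// validIdentifier matches the rules proposed above (including the
// underscore allowed by Revision 1.1).
var validIdentifier = regexp.MustCompile(
	`^[\p{Ll}\p{Lt}\p{Lm}\p{Lo}][\p{Ll}\p{Lt}\p{Lm}\p{Lo}\d\-_.]*$`,
)

func main() {
	names := []string{"deploy-to-prod", "deploy_to_prod", "Deploy to Prod", "123abc"}
	for _, name := range names {
		if !validIdentifier.MatchString(name) {
			// The first two names pass; the last two are warned about.
			fmt.Printf("warning: %q is not a valid identifier\n", name)
		}
	}
}
```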
130 | 
131 | 
132 | # Open Questions
133 | 
134 | n/a
135 | 
136 | 
137 | # Answered Questions
138 | 
139 | * **What about pipelines that use symbols as delimiters?**
140 | 
141 |   A common practice today is to configure many pipelines with a specific naming
142 |   scheme, e.g.:
143 | 
144 |   * `dependency:github.com-username-repo-1`
145 |   * `dependency:github.com-username-repo-2`
146 | 
147 |   Rather than cramming data into the pipeline name (and having to sanitize it),
148 |   this should probably be resolved through use of [instanced
149 |   pipelines](https://github.com/concourse/rfcs/pull/34).
150 | 
151 |   This would immediately improve the UX: pipeline names will be much shorter,
152 |   related pipelines will be grouped together, the values no longer have to be
153 |   sanitized, and the pipeline name is now just `dependency`, conforming to this
154 |   RFC.
155 | 
156 | * **Are there any users who would become 'blocked' by this change?**
157 | 
158 |   Aside from strong personal preference, are there any Concourse users that would
159 |   be unable to upgrade given the new rules?
160 | 
161 |   To put it another way: imagining Concourse had this strict naming convention
162 |   from the get-go, are there any users who would *not be able to use Concourse*
163 |   as a result?
164 | 
165 |   *(No one came forward with this RFC open for almost a year, so I guess that
166 |   answers that.)*
167 | 
168 | 
169 | # New Implications
170 | 
171 | * n/a
172 | 
173 | 
174 | # Revisions
175 | 
176 | ## Revision 1.1
177 | 
178 | In response to feedback in [concourse/concourse#6070][underscores-issue] this
179 | RFC has been amended to allow the use of the underscore character (`_`) in
180 | identifiers.
181 | 
182 | 
183 | [var-sources-rfc]: https://github.com/concourse/rfcs/pull/39
184 | [underscores-issue]: https://github.com/concourse/concourse/issues/6070
185 | [jobs-old-name]: https://concourse-ci.org/jobs.html#schema.job.old_name
186 | [resources-old-name]: https://concourse-ci.org/resources.html#schema.resource.old_name
--------------------------------------------------------------------------------
/041-opa-integration/proposal.md:
--------------------------------------------------------------------------------
1 | # Summary
2 | 
3 | This proposal outlines the beginnings of support for policy enforcement. OPA
4 | (Open Policy Agent) will be used as the first policy manager to integrate with.
5 | 
6 | 
7 | # Motivation
8 | 
9 | Generally speaking, anything that exposes an HTTP API (whether an individual
10 | micro-service or an application as a whole) needs to control who can run those
11 | APIs and when.
12 | 
13 | Concourse, as a CI platform, also needs to control who can do something and when,
14 | which can be done by integrating with a policy manager. Before any action, Concourse
15 | sends a policy check request, with data that describes the action, to the policy
16 | manager, and only continues if the policy manager replies with a "pass".
17 | 
18 | Possible policies to apply could be:
19 | 
20 | * Certain steps must be performed, e.g. a code security scan by a specific resource type.
21 | * Some teams' tasks must be run with a certain tag.
22 | * Some Docker image registries are not allowed.
23 | * Some Docker images are not allowed.
24 | * and so on ...
25 | 
26 | 
27 | # Proposal
28 | 
29 | ## Support multiple policy managers
30 | 
31 | Like how multiple credential managers are supported, Concourse should allow other
32 | policy managers than OPA. Thus an interface of `PolicyCheck` should be defined, and
33 | OPA is one of the implementations.
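This proposal doesn't pin down the shape of that interface; a minimal sketch of
what it might look like, with hypothetical names and using the input fields
described below, is:

```go
// PolicyCheckInput carries the common metadata and action-specific data
// described under "Policy check input data" below.
type PolicyCheckInput struct {
	Service        string      `json:"service"`
	ClusterName    string      `json:"cluster_name"`
	ClusterVersion string      `json:"cluster_version"`
	Action         string      `json:"action"`
	HTTPMethod     string      `json:"http_method,omitempty"`
	User           string      `json:"user,omitempty"`
	Team           string      `json:"team,omitempty"`
	Pipeline       string      `json:"pipeline,omitempty"`
	Data           interface{} `json:"data,omitempty"`
}

// PolicyCheck would be implemented once per supported policy manager,
// with OPA as the first implementation.
type PolicyCheck interface {
	// Check reports whether the described action is allowed to proceed.
	Check(input PolicyCheckInput) (bool, error)
}
```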
34 | 
35 | ## Policy check points
36 | 
37 | * All API calls, for example `set-pipeline`, will also go through policy checks. If
38 |   the check doesn't pass, the API should return HTTP code 403 (forbidden).
39 | * The `UseImage` action will be sent to OPA before Concourse launches a container in
40 |   `check/get/put/task` steps. If the check doesn't pass, the step should return an
41 |   error indicating that the policy check did not pass.
42 | 
43 | ## OPA configuration
44 | 
45 | * CONCOURSE_OPA_URL - URL of the OPA service, including the path
46 | 
47 | For example, if OPA_URL is `http://opa.mycompany.com/v1/data/concourse/policy`, an OPA
48 | policy check request will look like:
49 | 
50 | ```
51 | POST http://opa.mycompany.com/v1/data/concourse/policy
52 | {
53 |   "input": {
54 |     ...
55 |   }
56 | }
57 | ```
58 | 
59 | ## Policy check input data
60 | 
61 | Policy check input data should include common metadata and action-specific data.
62 | In principle, Concourse should send as much data as possible to the policy engine.
63 | 
64 | Common metadata include:
65 | 
66 | * `service`: should always be "concourse". This allows Concourse-specific
67 |   policies to be configured in the policy engine.
68 | * `cluster_name`: cluster name.
69 | * `cluster_version`: Concourse version.
70 | * `action`: action name. This follows the same action names used in RBAC, plus an
71 |   extra action `UseImage`.
72 | * `http_method`: HTTP method of the action; for `UseImage`, this field is absent.
73 | * `user`: username of whoever invokes the action.
74 | * `team`: team name.
75 | * `pipeline`: pipeline name. Some actions are not against a pipeline, in which case
76 |   this field can be omitted.
77 | * `data`: for API actions, `data` should carry data from the API call; for the
78 |   `UseImage` action, data is the image configuration.
79 | 
80 | For example, a policy check request against `set_pipeline` looks like:
81 | 
82 | ```json
83 | {
84 |   "input": {
85 |     "service": "concourse",
86 |     "cluster_name": "some-cluster",
87 |     "cluster_version": "5.7.0",
88 |     "user": "some-user",
89 |     "team": "some-team",
90 |     "pipeline": "some-pipeline",
91 |     "action": "SetPipeline",
92 |     "data": {
93 |       "groups": [ ],
94 |       "resource_types": [ ],
95 |       "resources": [ ],
96 |       "jobs": [ ]
97 |     }
98 |   }
99 | }
100 | ```
101 | 
102 | A policy check request of action `UseImage` looks like:
103 | 
104 | ```json
105 | {
106 |   "input": {
107 |     "service": "concourse",
108 |     "cluster_name": "some-cluster",
109 |     "cluster_version": "5.7.0",
110 |     "team": "some-team",
111 |     "pipeline": "some-pipeline",
112 |     "action": "UseImage",
113 |     "data": {
114 |       "image_type": "registry-image",
115 |       "image_source": {
116 |         "repository": "busybox",
117 |         "tag": "latest",
118 |         "username": "someone",
119 |         "password": "(redacted)"
120 |       }
121 |     }
122 |   }
123 | }
124 | ```
125 | 
126 | _NOTE: any secret appearing in `image_source` that is fetched from a credential
127 | manager or var_sources should be redacted._
128 | 
129 | 
130 | ## Policy check switches
131 | 
132 | If no policy manager is configured, then policy checks are switched off.
133 | 
134 | When a policy manager is configured, Concourse intends to use an "explicit" strategy
135 | to decide which actions should run policy checks, meaning that, without defining
136 | policy check filters, no action will run a policy check.
137 | 
138 | Users may not want to run policy checks against all actions. For example, Concourse
139 | generates a large number of `ListAllPipelines` calls, and it makes little sense to
140 | check them.
141 | 
142 | Users will tend to check with the policy manager for write actions rather than
143 | read-only actions. As all actions except `UseImage` are invoked over HTTP, we
144 | can provide a filter, `policy-check-filter-http-method`, to specify the HTTP
145 | methods via which checked actions are invoked. To skip read-only actions, users
146 | may set the filter `policy-check-filter-http-method` to `POST,PUT,DELETE`, so
147 | that `GET` actions will not go through policy checks.
148 | 
149 | Users may also specifically want (or not want) to run policy checks against
150 | certain actions. For example, one cluster may want to check policy against only
151 | `SetPipeline`, while another may not want to check policy against `UseImage`.
152 | For this, two more filters, `policy-check-filter-action` and
153 | `policy-check-filter-action-skip`, are supported. An action in the action list
154 | will always go through policy checks; an action in the action-skip list never will.
155 | 
156 | In summary, there are three policy check filters:
157 | 
158 | * `policy-check-filter-http-method`: defines HTTP methods of actions that
159 |   should go through policy checks. Defaults to an empty list.
160 | * `policy-check-filter-action`: defines action names that should always go
161 |   through policy checks. Defaults to an empty list.
162 | * `policy-check-filter-action-skip`: defines action names that should never
163 |   go through policy checks. Defaults to an empty list.
164 | 
165 | 
166 | # Open Questions
167 | 
168 | * What other policy check points should there be?
169 | 
170 | * What policy engines other than OPA are folks using?
171 | 
172 | * Does it make sense to share Audit's switches to control policy checks?
173 | 
174 | 
175 | 
176 | # Answered Questions
177 | 
--------------------------------------------------------------------------------
/082-p2p-volume-streaming/proposal.md:
--------------------------------------------------------------------------------
1 | # Summary
2 | 
3 | Introduces P2P volume streaming, an enhancement of volume streaming that will
4 | benefit clusters where workers are reachable from each other.
5 | 
6 | # Motivation
7 | 
8 | When Concourse launches a container on a worker, if some required volumes are
9 | located on other workers, then Concourse will automatically copy the volumes to
10 | the worker. This process is called volume streaming.
11 | 
12 | With the current Concourse implementation, volume streaming is done via the ATC; in
13 | other words, volume data is transferred from the source worker to the ATC first, then
14 | the ATC transfers the data to the dest worker. This design fits deployments where
15 | workers don't see each other.
16 | 
17 | However, for deployments where workers can see each other, volume streaming can be
18 | optimized by streaming volumes from one worker directly to the other. [A study](https://ops.tips/notes/concourse-workers-streaming-improvements/)
19 | by [Ciro S. Costa](https://github.com/cirocosta) shows the benefits of P2P volume
20 | streaming.
21 | 
22 | 
23 | # Proposal
24 | 
25 | ## Workflow
26 | 
27 | The current volume streaming workflow is:
28 | 
29 | ```
30 | source-baggageclaim          ATC          dest-baggageclaim
31 |         |                     |                   |
32 |         |   PUT stream-out    |                   |
33 |         | <------------------ |                   |
34 |         |                     |   PUT stream-in   |
35 |         |                     | ----------------> |
36 | ```
37 | 
38 | The URL that the ATC uses to access a worker's baggageclaim is not visible to other
39 | workers. Thus, to allow the source worker to access the dest worker directly, the ATC
40 | needs to ask the dest worker for its public IP.
To support this, we add a baggageclaim API `p2p-url` so that the ATC can get the
41 | dest worker's baggageclaim URL; the ATC then invokes a new baggageclaim API
42 | `stream-p2p-out` on the source worker, and the source worker calls the dest
43 | worker's `stream-in` API to ship volumes directly to the dest worker.
44 | 
45 | ```
46 | source-baggageclaim          ATC          dest-baggageclaim
47 |         |                     |                   |
48 |         |                     |    GET p2p-url    |
49 |         |                     | ----------------> |
50 |         | PUT stream-p2p-out  |                   |
51 |         | <------------------ |                   |
52 |         |                     |                   |
53 |         |              PUT stream-in              |
54 |         | --------------------------------------> |
55 | ```
56 | 
57 | ## Baggage-claim API changes
58 | 
59 | ### 1. Add `GET /p2p-url`
60 | 
61 | This API takes no parameters and returns an HTTP URL that should be accessible from
62 | other workers.
63 | 
64 | ### 2. Add `PUT /stream-p2p-out?streamInURL=&encoding=&path=`
65 | 
66 | This API directs the source worker to stream a volume directly to the dest worker.
67 | It takes three parameters:
68 | 
69 | * `streamInURL`: the dest baggage-claim's volume stream-in URL
70 | * `encoding`: data compression method, `gzip` or `zstd`
71 | * `path`: will be passed to `stream-in`
72 | 
73 | ## Worker CLI option changes
74 | 
75 | * `p2p-interface-name-pattern`: the pattern used to find a network interface whose IP address will be used.
76 | * `p2p-interface-family`: 4 or 6, meaning use the IPv4 or IPv6 address.
77 | 
78 | ## Web CLI option changes
79 | 
80 | A new CLI flag `--enable-p2p-volume-streaming` should be added to opt in to the
81 | feature. By default, volume streaming will still go through the ATC.
82 | 
83 | 
84 | # Open Questions
85 | 
86 | 1. About `baggageclaim-bind-ip`. To allow both the ATC and other workers to connect, `baggageclaim-bind-ip`
87 | has to be set to `0.0.0.0`. If workers are behind a firewall, this should not be a problem. However, if
88 | workers are on a public cloud, is that a security concern? If yes, then we may consider using a dedicated
89 | bind IP and port for p2p streaming.
90 | 
91 | 2. As volume streaming is initiated from the source worker, the source worker knows the size of the volume
92 | to stream out. A further optimization: when streaming small files (small text/json/yaml files, and so on),
93 | transferring the raw files might be cheaper than shelling out to `tar` to compress the files before transferring.
94 | 
95 | 3. The current RFC assumes all workers are on the same network. A future iteration could have workers
96 | specify their network somehow - maybe just a name to indicate which workers live in the same network.
97 | 
98 | # Answered Questions
99 | 
100 | 
101 | # New Implications
102 | 
103 | This feature assumes all workers are on the same network; in other words, all workers can reach each
104 | other directly. Don't enable p2p volume streaming if that assumption does not hold, otherwise volume
105 | streaming may fail, which in turn leads to build failures.
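For illustration, the source worker's side of `stream-p2p-out` might boil down to
something like the following sketch (all names are hypothetical, and the use of a
`Content-Encoding` header is an assumption - the RFC only specifies the `encoding`
query parameter):

```go
package p2p

import (
	"fmt"
	"io"
	"net/http"
)

// streamP2POut streams a volume's (compressed tar) contents directly to the
// destination worker's stream-in endpoint, bypassing the ATC entirely.
func streamP2POut(streamInURL, encoding string, volume io.Reader) error {
	req, err := http.NewRequest(http.MethodPut, streamInURL, volume)
	if err != nil {
		return err
	}

	// Tell the destination how the stream is compressed ("gzip" or "zstd").
	req.Header.Set("Content-Encoding", encoding)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusNoContent {
		return fmt.Errorf("stream-in returned %s", resp.Status)
	}

	return nil
}
```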
106 | 
--------------------------------------------------------------------------------
/139-pipeline-identity-tokens/img/AWS-IDP.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/concourse/rfcs/cfa7597ff2a8a61bc7015b8791e8321cab83e040/139-pipeline-identity-tokens/img/AWS-IDP.png
--------------------------------------------------------------------------------
/139-pipeline-identity-tokens/proposal.md:
--------------------------------------------------------------------------------
1 | * RFC PR: [concourse/rfcs#139](https://github.com/concourse/rfcs/pull/139)
2 | 
3 | # Summary
4 | 
5 | Pipelines should receive signed JWTs ([RFC 7519](https://datatracker.ietf.org/doc/html/rfc7519)) from Concourse that contain information about them (team, pipeline name, etc.).
6 | They could then send these JWTs to outside services to authenticate using their identity as "Concourse pipeline X".
7 | 
8 | 
9 | # Motivation
10 | Often pipelines have to interact with outside services to do their work: for example, downloading sources, uploading artifacts, or deploying something to the cloud.
11 | As of now you would need to create static credentials for these outside services and place them into Concourse's secrets management (for example, inside Vault), or attach some kind of trust to the Concourse workers (and therefore to ALL pipelines running there).
12 | 
13 | However, having static (long-lived) credentials for something that is critical (like a prod account on AWS) is not state of the art for authentication.
14 | And attaching trust to workers means you would need multiple workers with different trust configured, if not all the pipelines share the same "trustworthiness".
15 | 
16 | It would be much better if code running in a pipeline could somehow prove its identity to the outside service. The outside service could then be configured to grant permissions to a specific pipeline (or job or task).
17 | 
18 | Lots of other services already implement something like this. One well-known example is [Kubernetes's Service Accounts](https://kubernetes.io/docs/concepts/security/service-accounts/#authenticating-credentials). Kubernetes mounts a signed JWT into the pod and the pod can then use this token to authenticate with Kubernetes itself or with any other service that has a trust relationship with the Kubernetes cluster.
19 | 
20 | ## Usage with AWS
21 | For example, a pipeline could use AWS's [AssumeRoleWithWebIdentity API call](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html) to authenticate with AWS using its Concourse token and do things in AWS. It is even [directly supported by the AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role-with-web-identity.html).
22 | 
23 | 1. Create an OIDC Identity Provider for your Concourse server in the AWS account you would like to use, like [this](img/AWS-IDP.png).
24 | 2.
Create an AWS IAM role with the required deployment permissions and the following trust policy:
25 | ```json
26 | {
27 |   "Version": "2012-10-17",
28 |   "Statement": [
29 |     {
30 |       "Effect": "Allow",
31 |       "Action": "sts:AssumeRoleWithWebIdentity",
32 |       "Principal": {
33 |         "Federated": ""
34 |       },
35 |       "Condition": {
36 |         "StringEquals": {
37 |           ":sub": [
38 |             "main/deploy-to-aws"
39 |           ],
40 |           ":aud": [
41 |             "sts.amazonaws.com"
42 |           ]
43 |         }
44 |       }
45 |     }
46 |   ]
47 | }
48 | ```
49 | This trust policy allows everyone to assume this role via the AssumeRoleWithWebIdentity API call, as long as they have a JWT, signed by your Concourse, with a sub value of "main/deploy-to-aws".
50 | 
51 | And conveniently Concourse will create exactly such a token and supply it to (and only to) the pipeline "deploy-to-aws" in the "main" team.
52 | 
53 | When code inside a pipeline performs the AssumeRoleWithWebIdentity API call, AWS will check the provided token for expiry, query Concourse to obtain the correct signature-verification key and use it to check the JWT's signature. It will then compare the aud and sub claims of the token with the ones specified in the role's trust policy. If everything checks out, AWS will return temporary AWS credentials that the pipeline can then use to perform actions in AWS.
54 | 
55 | In a Concourse pipeline all of this could then look like this:
56 | ```yaml
57 | - task: get-image-tag
58 |   image: base-image
59 |   config:
60 |     platform: linux
61 |     run:
62 |       path: bash
63 |       dir: idp-servicebroker
64 |       args:
65 |         - -ceux
66 |         - |
67 |           aws sts assume-role-with-web-identity \
68 |             --provider-id "" \
69 |             --role-arn "" \
70 |             --web-identity-token ((idtoken:token))
71 |           # do stuff with the new AWS permissions
72 | ```
73 | 
74 | 
75 | ## Usage with Vault
76 | The feature would also allow pipelines to authenticate with Vault. This way a pipeline could directly access Vault and use all of its features, not only the limited functionality that Concourse provides natively.
77 | 
78 | Vault has support for [authentication via JWT](https://developer.hashicorp.com/vault/docs/auth/jwt).
79 | It works similarly to AWS. You tell Vault a URL to the issuer of the JWT (your Concourse instance) and configure what values you expect in the token (for example, the token must be issued to a pipeline of the main team). You can then configure a Vault ACL and even use claims from the token in the ACL. Your ACL could for example allow access to secrets stored in `/concourse//` to any holder of such a JWT issued by your Concourse.
80 | 
81 | Detailed usage instructions for Vault can follow if required.
82 | 
83 | # Proposal
84 | Implementation is split into different phases that stack onto each other. We could implement the first few and expand the implementation step by step.
85 | 
86 | ## Phase 1
87 | - When Concourse boots for the first time it creates a signature key-pair and stores it in the DB. For now we generate a 4096-bit RSA key so we can use the RS256 signing method for the tokens. This seems to be the signing method with the most support and is used by others for similar purposes ( https://token.actions.githubusercontent.com/.well-known/jwks , https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/ ). Other key types like ES256 can be added later and would be selectable via the var-source.
111 | ### The IDToken Var-Source
112 | The var-source of type "idtoken" can be used to obtain the tokens described above. It offers a few config fields to configure the token that is received:
113 | 
114 | - `subject_scope` string - Specifies what should be included in the sub claim of the token. The var-source MUST make sure that no component of the sub claim contains a literal forward slash (`/`); any slashes in names should be escaped by URL-encoding them.
115 |   - with a value of one of:
116 |     - `team`, which results in a `sub` claim of `<team-name>`
117 |     - `pipeline`, which results in a `sub` claim of `<team-name>/<pipeline-name>`
118 |     - `job`, which results in a `sub` claim of `<team-name>/<pipeline-name>/<job-name>`
119 |     - `step`, which results in a `sub` claim of `<team-name>/<pipeline-name>/<job-name>/<step-name>`
120 |   - default: `pipeline`
121 | 
122 | - `audience` []string - The aud claims to include in the token.
123 |   - default: `nil`
124 | 
125 | - `expires_in` `time.Duration` - How long the generated token should be valid.
126 |   - default: `1h`
127 |   - Max value accepted is `24h`
128 | 
129 | The output variable of the var-source that contains the token is called `token`. All other variables are reserved for future use.
130 | 
131 | In the future it would be possible to add a `signature_algorithm` config field that allows the user to choose between RS256 and ES256 as the signature algorithm for their token. (Concourse would need to store one key for each supported algorithm.)
132 | 
133 | In the pipeline it would then look like this (all config fields are optional and shown here for clarity):
134 | 
135 | ```yaml
136 | var_sources:
137 | - name: idtoken
138 |   type: idtoken
139 |   config:
140 |     subject_scope: pipeline
141 |     audience: ["sts.amazonaws.com"]
142 |     expires_in: 1h
143 | 
144 | jobs:
145 | - name: print-credentials
146 |   plan:
147 |   - task: print
148 |     config:
149 |       platform: linux
150 |       image_resource:
151 |         type: registry-image
152 |         source: {repository: ubuntu}
153 |       params:
154 |         ID_TOKEN: ((idtoken:token))
155 |       run:
156 |         path: bash
157 |         args:
158 |         - -c
159 |         - |
160 |           echo "token: $ID_TOKEN"
161 |           # or
162 |           echo "token: ((idtoken:token))"
163 |           # send this token as part of an API request to an external service
164 | ```
165 | 
166 | ## Phase 2
167 | Concourse could periodically rotate the signing key it uses. The default rotation period will be 7 days. The new key will then also be published in the JWKS and will be used to sign tokens from then on. The previous key MUST also remain published for 24h (the absolute maximum lifetime of a token), in case there are still unexpired tokens out there that were signed with it.
168 | 
169 | The rotation period should be configurable as an ATC setting. Setting the period to 0 effectively disables automatic key rotation.
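   | 
   | During that 24h overlap the published JWKS would simply contain both keys, and verifiers pick the right one via the `kid` header of the token. Illustratively (truncated, made-up key material and key IDs):
   | ```json
   | {
   |   "keys": [
   |     {"kty": "RSA", "use": "sig", "alg": "RS256", "kid": "2024-06-08", "n": "0vx7...", "e": "AQAB"},
   |     {"kty": "RSA", "use": "sig", "alg": "RS256", "kid": "2024-06-01", "n": "xjlA...", "e": "AQAB"}
   |   ]
   | }
   | ```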
170 | 
171 | ## Phase 3
172 | To make sure tokens are as short-lived as possible, we could enable online verification of tokens. Concourse could offer a token introspection endpoint ([RFC 7662](https://datatracker.ietf.org/doc/html/rfc7662)) to which external services can send tokens for verification.
173 | That endpoint could reject any token that was issued for a pipeline/job/task that has already finished running.
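   | 
   | Hypothetically (neither the endpoint path nor the exact behavior is settled by this proposal), a check per RFC 7662 could look like this:
   | ```sh
   | # POST the token as a form parameter to the introspection endpoint
   | curl -s https://myconcourse.example.com/introspect --data "token=eyJhbGciOiJSUzI1NiIs..."
   | ```
   | ...returning something like `{"active": true, "sub": "main/deploy-to-aws", ...}` while the issuing step is still running, and just `{"active": false}` once it has finished.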
174 | 
175 | # Open Questions
176 | 
177 | # Answered Questions
178 | 
179 | > 1. How to call the var_source. "idtoken"?
180 | 
181 | We're calling it `idtoken`.
182 | 
183 | > 2. How to call the "field" for the token in the var_source. "token"?
184 | 
185 | We're calling it `token`.
186 | 
187 | > 3. What kind of keypair is used by default? RSA-Keys with 4096 bits? We could generate multiple types of keypairs and allow pipelines to choose one via the var-source-config.
188 | 
189 | We've settled on generating 4096-bit RSA keys (used with the RS256 signing method) for the first implementation. Future implementations may add other key types, like ES256.
190 | 
191 | > 4. What exactly to put into the sub-claim? The easiest would be "team/pipeline". But what about the job-name or instance-vars? Unfortunately instance-vars and job-names are currently not available to var-sources.
192 | 
193 | We've decided to provide a set list of values that users can choose from: `team`, `pipeline`, `job`, and `step`. These result in predetermined strings being set as the value of the `sub` claim. See above for details.
194 | 
195 | > 5. How long should the default ttl be?
196 | 
197 | We've settled on 1h, as it seems to be a common default and should be good enough to cover most use cases.
198 | 
199 | > 6. How often to rotate the signing key?
200 | 
201 | We've settled on rotating the key, by default, every 7 days. The key will remain in the JWKS for 24h after it is rotated out, so tokens signed right before rotation aren't immediately invalidated. This will be configurable, and users can set the rotation period to zero, which will completely disable rotation.
202 | 
203 | > 7. Can and should we include more specific information into the token (job-name/id, task, infos about the worker)?
204 | 
205 | We want to provide job and step names, if possible. No info about the workers will be included in the JWT.
206 | 
207 | > 8. How do pipeline identity tokens work with resources?
208 | 
209 | It's not clear how well this will work with resources, since a constantly rotating token will cause the version history for a resource to be constantly reset. This may motivate us to provide some way to avoid that, but that should be investigated separately from this RFC.
210 | 
211 | # New Implications
212 | 
213 | This could fundamentally change the way pipelines interact with external services, making it much more secure.
214 | As JWT authentication is a modern standard supported by lots of services, it could enable a whole range of new use cases.
215 | Use of this feature is entirely optional; anyone who doesn't need it can completely ignore it.
216 | --------------------------------------------------------------------------------
/DESIGN_PRINCIPLES.md:
--------------------------------------------------------------------------------
1 | # Concourse Design Principles
2 | 
3 | Concourse's goal is to solve automation, once and for all, without becoming part of the problem.
4 | 
5 | 
6 | ## Expressive by being precise
7 | 
8 | Concourse should provide concepts that build a strong mental model for the user's project, and this model should remain intuitive as the user's automation requirements grow.
9 | 
10 | Concepts should precisely outline their motivation and intended workflows. Friction and complexity resulting from the imprecise application of a concept should be a cue to introduce new ideas. ([Example](https://blog.concourse-ci.org/reinventing-resource-types/))
11 | 
12 | 
13 | ## Versatile by being universal
14 | 
15 | Concourse should be able to do a lot with a little. New concepts should only be introduced if their intended workflow cannot be precisely expressed in terms of existing concepts.
16 | 
17 | Concepts should not be highly specialized for one domain or introduce tight coupling to specific technologies. Users should be able to relate to every concept, and their automation should be insulated from the constant churn of the tech industry.
18 | 
19 | 
20 | ## Safe by being destructible
21 | 
22 | Concourse should prevent [anti-patterns](https://github.com/concourse/concourse/wiki/Anti-Patterns) and the accumulation of technical debt. Concourse's concepts should make good practices feel intuitive and bad practices feel uncomfortable.
23 | 
24 | Automation should be self-contained and reproducible in order to maintain business continuity when recovering from disaster scenarios (e.g. total cluster loss). Concourse should merely be a choreographer of mission-critical state kept in an external source of truth, allowing individual installations to be ephemeral.
25 | --------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 | 
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 | 
7 | 1. Definitions.
8 | 
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 | 
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 
134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "{}" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright {yyyy} {name of copyright owner} 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 
193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Concourse RFCs 2 | 3 | The Concourse project uses the RFC (request for comments) process for 4 | collaborating on substantial changes to Concourse. RFCs enable contributors to 5 | collaborate during the design process, providing clarity and validation before 6 | jumping to implementation. 7 | 8 | 9 | ## When the RFC process is necessary 10 | 11 | The RFC process is necessary for changes which have a substantial impact on end 12 | users, operators, or contributors. "Substantial" is subjective, but it 13 | generally includes: 14 | 15 | * Changes to core workflow functionality (pipelines, tasks, new concepts). 16 | * Changes to how Concourse is packaged, distributed, or configured. 17 | * Changes with significant architectural implications (new runtimes, auth 18 | schemes, etc.). 19 | * Changes which modify or introduce officially supported interfaces (HTTP APIs, external integrations, etc). 20 | 21 | An RFC is not necessary for changes which have narrow scope and don't leave 22 | much to be discussed: 23 | 24 | * Bug fixes and optimizations with no semantic change. 25 | * Small features which only impact a narrow use case and affect users in an 26 | obvious way. 27 | 28 | The RFC process aims to prevent wasted time and effort on substantial changes 29 | that end up being sent back to the drawing board. If your change takes minimal 30 | effort, or if you don't mind potentially scrapping it and starting over, feel 31 | free to skip this process. Do note however that pull requests may be closed 32 | with a polite request to submit an RFC. 33 | 34 | If you're not sure whether to open an RFC for a change you'd like to propose, 35 | feel free to [ask in `#dev`](https://discord.gg/MeRxXKW)! 36 | 37 | 38 | ## Submitting an RFC 39 | 40 | 1. Fork this repository. 41 | 42 | 1. Copy the `000-example` RFC template, naming it something like 43 | `000-my-proposal`. 44 | 45 | 1. Write your proposal in `000-my-proposal/proposal.md`. 46 | 47 | * Consult the [Concourse design principles](DESIGN_PRINCIPLES.md) to guide 48 | your design. 49 | 50 | * Include any dependent assets (examples, screenshots) under your RFC 51 | directory. 52 | 53 | 1. Submit a pull request. The pull request number determines the RFC number. 54 | 55 | * Keep the description light; your proposal should contain all relevant 56 | information. Feel free to link to any relevant GitHub issues, since that 57 | helps with networking. 58 | 59 | 1. Rename the proposal directory to match the pull request number, e.g. 60 | `123-my-proposal`. 61 | 62 | For convenience, update the PR description to link to the rendered proposal 63 | in the pull request body like so: 64 | 65 | ``` 66 | [Rendered](https://github.com/{YOUR NAME}/rfcs/blob/{YOUR BRANCH}/123-my-proposal/proposal.md) 67 | ``` 68 | 69 | 1. Feel free to review your own RFC and leave comments and questions as you 70 | reason about the problem and reach key decisions. 
Doing so helps build a 71 | public record of the decision-making process. 72 | 73 | 1. The RFC will be assigned to a member of the [**core** team][core-team]. The 74 | assignee is responsible for providing feedback and eventually shepherding 75 | the RFC through the [resolution process](#resolution). Reach out to your 76 | RFC's assignee if you need any help with the RFC process. 77 | 78 | 1. Collect user feedback and votes (GitHub reactions) for your own RFC by 79 | linking to it in issues or contexts where it is relevant. Please be 80 | respectful of other RFC authors and avoid vote brigading; diversity of 81 | perspective is more important than having the most votes. 82 | 83 | The [Concourse website](https://concourse-ci.org) lists open RFCs ranked by 84 | GitHub reactions in order to increase exposure to end users. The goal of 85 | ranking them is to focus attention on the RFCs most relevant to the 86 | community, increasing clarity through user feedback and accelerating them to 87 | resolution. 88 | 89 | 1. Amend your proposal in response to feedback by pushing more commits to your 90 | fork. Whenever possible, please make meaningful commits that summarize the 91 | changes and reasoning (rather than rebasing and force-pushing all the time). 92 | 93 | 94 | ## Reviewing RFCs 95 | 96 | Concourse users and contributors are encouraged to review RFCs alongside 97 | members of the core team. Feedback from diverse perspectives is necessary for 98 | determining a proposal's efficacy, impact, and priority. Reviewing RFCs is also 99 | great practice for [joining the core team][joining-a-team] someday! 100 | 101 | Reviewers should focus on resolving open questions, surfacing risks and 102 | drawbacks, and providing constructive critique of the overall approach. The 103 | [Concourse design principles](DESIGN_PRINCIPLES.md) serve as a guiding hand to 104 | determine the proposal's alignment with the Concourse philosophy. 105 | 106 | Reviewers should leave questions and comments on individual lines via PR review 107 | so that discussions may be threaded and marked as resolved. Leaving GitHub 108 | reactions also helps to measure consensus without cluttering the comment thread 109 | if you don't have much more to add. 110 | 111 | 112 | ### Resolution 113 | 114 | The review process should lead to consensus from three different perspectives: 115 | 116 | * Members of the **core** team have determined whether the proposal fits with 117 | the Concourse design principles and whether the changes sufficiently improve 118 | the product. 119 | * The **maintainers** have determined whether the proposal is worth 120 | maintaining, i.e. whether the benefits of the proposal outweigh any technical 121 | tradeoffs, or if it introduces an unsustainable maintenance burden. 122 | * Enough community input has been provided to validate the need and efficacy of 123 | the proposal. 124 | 125 | Once the review status stabilizes and clarity has been reached, the core team 126 | assignee will grant the RFC one of the following labels: 127 | 128 | * **resolution/merge**: the proposal will be merged; there are no outstanding 129 | objections, and implementation can begin as soon as the RFC is merged. 130 | * **resolution/close**: the proposal will be closed. 131 | * **resolution/postpone**: resolution will be deferred until a later time when 132 | the motivating factors may have changed. 
133 | 134 | These labels initiate a two-week quiet period, and any final feedback will be 135 | sought by bumping the RFC to the top of the RFC table on the Concourse website. 136 | No further changes should be made to the proposal during this period. 137 | 138 | If there is a challenge to the resolution during the quiet period the label may 139 | be removed at the discretion of the assignee, and the RFC process will continue 140 | as before. 141 | 142 | 143 | ## Implementing an RFC 144 | 145 | When an RFC is merged the core team assignee is responsible for opening an 146 | issue on the [Concourse repository](https://github.com/concourse/concourse) to 147 | keep track of its implementation. The issue can be lightweight and just 148 | reference the RFC. The assignee must also add a link to the issue at the top of 149 | the RFC's proposal document. 150 | 151 | The [**maintainers** team][maintainers-team] is responsible for determining the 152 | proposal's priority by adding a **priority/high**, **priority/medium**, or 153 | **priority/low** label to the RFC's issue. Priority is an approximation of 154 | overall value and desired timeline for implementation. 155 | 156 | An RFC author is not necessarily responsible for its implementation, though 157 | they may volunteer. If the maintainers have sufficient bandwidth they may place 158 | it on their roadmap by prioritizing the issue in a GitHub project. Otherwise 159 | the maintainers will add a **help wanted** label to the issue. 160 | 161 | In any case, contributors may volunteer to implement a proposal provided that 162 | work has not already begun. If you would like to volunteer, please leave a 163 | comment on the issue to let others know! 164 | 165 | From there, the implementation process falls under the normal [Concourse 166 | development process][contributing]. 167 | 168 | 169 | ## Revising an RFC 170 | 171 | RFCs represent the planning phase. An RFC's proposal is not the source of truth 172 | for the feature's documentation, and should not be revised to keep up with 173 | later iterations after the initial proposal is implemented. A new RFC should be 174 | proposed for subsequent changes instead. 175 | 176 | If an RFC is merged and later changes are deemed necessary prior to final (i.e. 177 | non-experimental) implementation, a follow-up PR may be submitted that updates 178 | the proposal in-place. In this case the RFC author must include a MAJOR.MINOR 179 | revision number in the proposal and maintain a brief summary of changes at the 180 | bottom of the proposal. 181 | 182 | 183 | ## License 184 | 185 | All RFCs, and any accompanying code and example content, will fall under the 186 | Apache v2 license present at the root of this repository. 187 | 188 | 189 | [joining-a-team]: https://github.com/concourse/governance#joining-a-team 190 | [core-team]: https://github.com/concourse/governance/blob/master/teams/core.yml 191 | [maintainers-team]: https://github.com/concourse/governance/blob/master/teams/maintainers.yml 192 | [contributing]: https://github.com/concourse/concourse/blob/master/CONTRIBUTING.md 193 | --------------------------------------------------------------------------------