├── .github
│   ├── CONTRIBUTING.md
│   └── SECURITY.md
├── .gitignore
├── GOVERNANCE.md
├── LICENSE
├── MAINTAINERS.md
├── README.md
├── guidance
│   ├── background
│   │   ├── response
│   │   │   ├── applying-mitigations.md
│   │   │   ├── assessments.md
│   │   │   ├── detection-and-tracing.md
│   │   │   └── recovery-and-prevention.md
│   │   ├── security-basics.md
│   │   ├── threat-modeling-101.md
│   │   └── threat-modeling
│   │       ├── actions.md
│   │       ├── actors.md
│   │       ├── attack-graphs-technique.md
│   │       ├── comprehensive-coverage.md
│   │       ├── dread-technique.md
│   │       ├── goals.md
│   │       └── understanding-risk.md
│   ├── getting-started.md
│   ├── level-1
│   │   ├── appendix.md
│   │   ├── creating-document.md
│   │   ├── development-and-support.md
│   │   ├── getting-started-self-assessment.md
│   │   ├── header.md
│   │   ├── metadata.md
│   │   ├── project-overview.md
│   │   └── system-design.md
│   ├── level-2
│   │   ├── getting-started-joint-assessment.md
│   │   └── roles
│   │       ├── lead.md
│   │       ├── maintainer.md
│   │       └── reviewer.md
│   └── level-3
│       ├── getting-started-conformity-assessment.md
│       └── regulatory-considerations.md
└── templates
    ├── conformity-assessment.md
    ├── joint-assessment.md
    └── self-assessment.md
/.github/CONTRIBUTING.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ossf/security-assessments/66a69b4b6475eb7e76ca6688e829b88894f20498/.github/CONTRIBUTING.md
--------------------------------------------------------------------------------
/.github/SECURITY.md:
--------------------------------------------------------------------------------
1 | # Security
2 |
3 | Per the [Linux Foundation Vulnerability Disclosure Policy](https://www.linuxfoundation.org/security),
4 | if you find a vulnerability in a project maintained by the Open Source Security Foundation (OpenSSF),
5 | please report that directly to the project maintaining that code, preferably using
6 | GitHub's [Private Vulnerability Reporting](https://docs.github.com/en/code-security/security-advisories/guidance-on-reporting-and-writing/privately-reporting-a-security-vulnerability#privately-reporting-a-security-vulnerability).
7 |
8 | If you've been unable to find a way to report it, or have received no response after repeated attempts,
9 | please contact the OpenSSF security contact email, [security@openssf.org](mailto:security@openssf.org).
10 |
11 | Thank you.
12 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 |
--------------------------------------------------------------------------------
/GOVERNANCE.md:
--------------------------------------------------------------------------------
1 | # OSPS Assessments Project Governance
2 |
3 | As a developing project, OSPS Assessments aims to have a quick development cycle where decisions and community issues are resolved promptly while capturing the input of interested stakeholders.
4 |
5 | OSPS Assessments has no formal collegiate body in charge of steering. **Decisions are guided by the consensus of community members who have achieved maintainer status.**
6 |
7 | While maintainer consensus shall be the process for decision making, all issues and proposals shall be governed by the project's [guiding principles].
8 |
9 | ## OpenSSF Assessments Special Interest Group (SIG)
10 |
11 | The Open Source Project Security Assessments (OSPS Assessments) project is produced by the OpenSSF's Assessments SIG, part of the [BEST Working Group](https://github.com/ossf/wg-best-practices-os-developers).
12 |
13 | Refer to the [OpenSSF Community Calendar](https://openssf.org/getinvolved/) for SIG meeting times, meeting notes, and links to past recordings.
14 |
15 | - **SIG Lead:** Eddie Knight (@eddie-knight)
16 |
17 | ## Guiding Governance Principles
18 |
19 | Any issues or proposals brought to the project's maintainers shall be framed in terms of the OSPS Assessments guiding principles. Proposals not adhering to those principles shall not be considered for consensus.
20 |
21 | ### Favor Simplicity
22 |
23 | The goal of OSPS Assessments is to create a minimal and efficient standard that can be quickly ingested by any project. Simple is better.
24 |
25 | ### Ensure Stability
26 |
27 | Any enhancements to the OSPS Assessments and its delivery tooling must not cause breaking changes prior to a scheduled release.
28 |
29 | ### Cautious Incremental Improvement
30 |
31 | New entries must be added with caution, and breaking changes should be extremely rare, even on a scheduled release. Incremental development may enter the repository between releases only if it is mapped to an open GitHub Issue.
32 |
33 | ## Maintainer Consensus
34 |
35 | To reach a decision on an issue or proposal, the proponents must seek maintainer consensus. In the context of this document, "maintainer consensus" means collecting approvals from at least 51% of the current maintainer body, with enough time for all maintainers to review (usually 2 business days), and without a dissenting maintainer opinion.
36 |
37 | ## Review Controls for OSPS Assessments Repository
38 |
39 | Any changes intended to be merged in the OSPS Assessments repository shall meet the following minimal criteria:
40 |
41 | - Commits must be signed off.
42 | - Pull requests must be approved by at least two of the project's maintainers.
43 |
44 | Any repository under the OSPS Assessments organization may impose additional requirements to approve pull requests as long as these minimal requirements are met.
45 |
46 | ## Maintainer Status
47 |
48 | Any community member may be considered as a candidate for maintainer status under the following conditions:
49 |
50 | - A [sponsoring committee] may nominate a community member at any time.
51 | - Any community member may self-nominate as a maintainer candidate after actively contributing to OSPS Assessments on at least a monthly basis for six months or more.
52 |
53 | Nomination shall be in the form of a pull request to update the project's [MAINTAINERS.md].
54 |
55 | After the nomination is filed and deemed valid, [maintainer consensus] may be sought. Upon achieving consensus, the PR may be merged to confirm the new maintainer.
56 |
57 | ### Sponsoring Committees
58 |
59 | To nominate a community member as a maintainer candidate, a group of maintainers may file a nomination. The committee shall meet the following criteria to be qualified to file the nomination:
60 |
61 | - The number of members in the committee shall not be less than two (2).
62 | - Whenever the number of organizations with maintainers in the project is more than two (2), committee members shall be from different organizations.
63 |
64 | ### Continued Maintainer Status
65 |
66 | Once confirmed as a maintainer, continuation is contingent on regular activity and adherence to the [OpenSSF Code of Conduct](https://openssf.org/community/code-of-conduct/).
67 |
68 | ### Emeritus Maintainers
69 |
70 | Emeritus maintainers will be listed in a separate section of the project's [MAINTAINERS.md].
71 |
72 | A maintainer who is not currently active on the project may be given Emeritus status by default after six months of no activity, such as pull request interactions or GitHub Issue interactions. A maintainer may also assign themselves Emeritus status through a pull request.
73 |
74 | A maintainer may become active from Emeritus status through [maintainer consensus] and a corresponding pull request.
75 |
76 | ## Revisions to the Governance Model
77 |
78 | The project's governance model shall be revisited every six months to address the needs of the community, as the project recognizes that communities need to steer themselves according to their size, members, and other factors that shape their complexity.
79 |
80 | At any point, an OSPS Assessments Enhancement Proposal may be opened to redefine the project's governance. To be accepted, governing model proposals shall be approved by a qualified majority consisting of a minimum of 66% favorable votes of all active maintainers.
81 |
82 | [MAINTAINERS.md]: /MAINTAINERS.md
83 | [Maintainer Consensus]: #maintainer-consensus
84 | [Sponsoring Committee]: #sponsoring-committees
85 | [guiding principles]: #guiding-governance-principles
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/MAINTAINERS.md:
--------------------------------------------------------------------------------
1 | # Members
2 |
3 | ## Project Maintainers
4 |
5 | - Eddie Knight, Sonatype (@eddie-knight)
6 | - Justin Cappos, NYU (@JustinCappos)
7 | - Andrew Martin, ControlPlane (@sublimino)
8 |
9 | ## Emeritus Maintainers
10 |
11 | - _None_
12 |
13 | Additions and status changes may be made via the processes outlined in the [governance](/GOVERNANCE.md) document.
14 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Open Source Project Security Assessments
2 |
3 | In response to a rising demand for standardized security review of open source projects, the Open Source Project Security (OSPS) Assessments project provides a tiered model for assessing the security state of open source software.
4 |
5 | ## Approach
6 |
7 | OSPS Assessments come in one of three types: self-assessment, joint assessment, or conformity assessment. This tiered approach is intended to reduce complexity through a "shift left" model that encourages participation from project maintainers while also reducing overall cost.
8 |
9 | ### Getting Started
10 |
11 | Visit [the Getting Started guide](./guidance/getting-started.md) to determine your first steps, including which of the assessments is right for your situation. In most cases, it is recommended to start at the lowest level, self-assessments, and work up from there.
12 |
13 | Continue reading below for a rapid overview of each assessment type.
14 |
15 | ### Level 1: Self-Assessment
16 |
17 | A self-assessment allows a software project to evaluate its own security posture using a standardized process. This helps identify strengths, areas for improvement, and potential risks without external influence. The results can serve as a foundation for internal discussions, decision-making, and future external reviews.
18 |
19 | ### Level 2: Joint Assessment
20 |
21 | A joint assessment is a collaborative security review conducted with external security experts or a designated working group. This approach provides an opportunity for project teams to receive constructive feedback, validate security practices, and gain insights from experienced reviewers. The process often includes structured discussions, evidence-based evaluations, and actionable recommendations for strengthening security practices.
22 |
   | ### Level 3: Conformity Assessment
   |
   | A conformity assessment is the most involved review, drawing on dozens of hours from security and compliance experts. It is ideal for projects that want to demonstrate conformity with a regulation, standard, or policy.
   |
23 | ## Antitrust Policy Notice
24 |
25 | Linux Foundation meetings involve participation by industry competitors, and it is the intention of the Linux Foundation to conduct all of its activities in accordance with applicable antitrust and competition laws. It is therefore extremely important that attendees adhere to meeting agendas, and be aware of, and not participate in, any activities that are prohibited under applicable US state, federal or foreign antitrust and competition laws.
26 |
27 | Examples of types of actions that are prohibited at Linux Foundation meetings and in connection with Linux Foundation activities are described in the Linux Foundation Antitrust Policy available at http://www.linuxfoundation.org/antitrust-policy. If you have questions about these matters, please contact your company counsel, or if you are a member of the Linux Foundation, feel free to contact Andrew Updegrove of the firm of Gesmer Updegrove LLP, which provides legal counsel to the Linux Foundation.
28 |
31 |
--------------------------------------------------------------------------------
/guidance/background/response/applying-mitigations.md:
--------------------------------------------------------------------------------
1 | # Applying Mitigations
2 |
3 | **[< Previous: Recovery & Prevention](./recovery-and-prevention.md)**
4 |
5 | Applying mitigations is usually not as simple as choosing a set of mitigations and applying them to parts of your system. A common mistake novice system designers make is to focus on the quantity and type of security mechanisms added rather than on where and why they are applied. You need to reason about the goals your system has and then figure out how to intelligently apply mechanisms and controls to meet those goals.
6 |
7 | To understand why, let’s go back to TrashPanda Bank and think about their security. If they buy and deploy the latest alarm system, but apply it to the manager’s snack drawer instead of the bank vault, they will not get the desired security benefits!
8 |
9 | This also helps to explain why it is so important to design security into a system from the start instead of trying to bolt it on afterwards. If you don’t design things well from the start, it is often impractical or even impossible to get the security properties you want later... at least without starting over.
10 |
11 | > [!NOTE]
12 | > From Graphs to Guards, Commentary by Marco De Benedictis
13 | >
14 | > Attack graphs capture the defenders’ mindset and working process; they are time-consuming and require significant effort to generate, in order to ensure the correctness and completeness of the paths an attacker could exploit to achieve a potential goal.
15 | > We can understand if tactical security controls are addressing the most relevant threats by cross-referencing the attack graphs back to the proposed mitigations. This can be practically achieved by overlaying security countermeasures at each individual step, and visually inspecting the branches that aren't properly covered by remediations.
16 | > This visualization allows us to evaluate the effectiveness of our security assessment, and to surface the residual risks by identifying the branches with insufficient security controls and suggesting remediations that satisfy the greatest number of branches at once, taking into account their ease of maintenance, business requirements, and budget implications.
17 |
18 | It is important to have a system that degrades gracefully under attack. This means that an attacker must compromise many parts of the system that are well protected and compartmentalized from each other in order to do substantial harm. So, think of how to make a system that slowly loses security properties as compromises occur, rather than one that has only “secure” and “insecure” states.
19 |
20 | Note that you need to consider lateral movement in a system very carefully when thinking about graceful degradation. If the ability to do X gives one the ability to do Y, then security does not degrade gracefully with respect to these two. If Y can only be obtained by acquiring the capabilities for both X and Z (which are compartmentalized), then you have made the attacker’s life harder: if their goal is Y, compromising X alone is no longer enough.
21 |
22 | Another really key thing to do is to protect all access to something sensitive. (This concept is called complete mediation.) If TrashPanda Bank has a well-fortified vault entrance with guards, etc., but has an unlocked, unmonitored window in the vault, the attacker will likely just use that. Violations of complete mediation are extremely common in systems where security was not designed in from the start, because the defenders may be unaware of an inappropriately secured action or be unable to secure some set of actions due to design flaws.
23 |
24 | **[> Next Up: Security Assessments](./assessments.md)**
25 |
--------------------------------------------------------------------------------
/guidance/background/response/assessments.md:
--------------------------------------------------------------------------------
1 | # Security Assessments
2 |
3 | This guidance categorizes the assessments into three levels, which involve increasing levels of complexity. The purpose of an assessment is to create opportunities for the project to turn a critical eye to the system being built, to ensure the software is safe and stable for all future users.
4 |
5 | ## When should an assessment be done?
6 |
7 | In an ideal world, everyone would do a security assessment of their project while forming the design, to ensure that the design will meet the security goals.
8 |
9 | Most importantly, if you are designing a security focused system, you need to understand what you are trying to protect against. If you haven’t threat modeled the system ahead of time, your design is very unlikely to match your threat model well. This will lead to insecurity as well as bad user experience in many cases. So, at least some lightweight threat modeling in the design phase is standard practice for organizations that write security-focused code.
10 |
11 | In general, the earlier an assessment is done, the more secure the software will be, and the easier it will be to adapt to any design or other changes that are uncovered by the assessment. So, do your best to start early!
12 |
13 | > [!NOTE]
14 | > A Health Check and Path to Enhanced Security, Commentary by Ash Narkar
15 | >
16 | > The security assessment process really helped the OPA team understand the overall health of the project from a security perspective. The assessment identified areas of the project that could be improved, for example, better documentation around secure deployment practices and enhancements to OPA's toolchain usability to reduce policy-authoring errors. The OPA project benefited from the recommendations and advice provided by the security experts at CNCF's TAG-Security, and our ongoing relationship with TAG-Security helps us gain insights into the latest security best practices, allowing us to continuously improve OPA's security posture.
17 |
18 | ## Which assessment is right for me?
19 |
20 | At a glance, it may be easiest to review your capabilities and requirements to determine your next assessment level.
21 |
22 | 1. A **self-assessment** is the least complex form, ideal for use during feature planning, or in preparation for a higher level assessment.
23 | 2. A **joint assessment** is more involved than a self-assessment, but is typically achievable for a small team of volunteers who are familiar with the project.
24 | 3. A **conformity assessment** is the most rigorous process, requiring dozens of hours from security and compliance experts; it is ideal for projects that want to demonstrate conformity with a regulation, standard, or policy.
25 |
26 | When possible, it is advisable to "stair-step" the assessment process to spread the work over time. A conformity assessment is typically a very time-consuming and costly activity, but that investment can be used more effectively if some pre-work is done in the form of a self- or joint assessment.
27 |
28 | You can read more about the requirements, process, and outputs from each level:
29 |
30 | - **[> Get Started: Self Assessment](../../level-1/getting-started-self-assessment.md)**
31 | - **[> Get Started: Joint Assessment](../../level-2/getting-started-joint-assessment.md)**
32 | - **[> Get Started: Conformity Assessment](../../level-3/getting-started-conformity-assessment.md)**
33 |
--------------------------------------------------------------------------------
/guidance/background/response/detection-and-tracing.md:
--------------------------------------------------------------------------------
1 | # Detection & Tracing
2 |
3 | **[< Previous: Comprehensive Coverage](../threat-modeling/comprehensive-coverage.md)**
4 |
5 | The core concept of defensive security is to take things that are damaging and either make them less likely or less impactful.
6 |
7 | To better reason about this, we will look at several capabilities that a defender often retains even when attacked. Note that this is not an exhaustive list, but these are the most common properties that exist today, so deserve emphasis. The capabilities we will discuss in detail are detection, non-repudiation, recovery, and prevention.
8 |
9 | ## Detection
10 |
11 | Another important aspect is what is done when an attack occurs. In the worst case, the attacker could try repeatedly and the defender would never realize an attack is occurring. This is very common if the attacker can download and run the defender’s software locally on their own infrastructure because then the attacker can experiment with a running copy of the system. This is basically the norm for open source software and is also common for proprietary software.
12 |
13 | If a system is attacked, ideally you’d like to know it. This is where detection comes in. Detection is any means by which you can know you’ve been attacked. Common approaches include logging API calls, examining network traffic, and looking for anomalous events by collecting measurements and flagging anything that deviates from standard system behavior.
14 |
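   | As a toy illustration of that last idea, the sketch below flags a day whose event count deviates sharply from a baseline. The counts and the three-sigma threshold are hypothetical; real anomaly detection draws on many signals, not one counter.
   |
   | ```python
   | from statistics import mean, stdev
   |
   | # Hypothetical daily counts of failed logins observed on a service.
   | baseline = [12, 9, 14, 11, 10, 13, 12, 11, 10, 12]
   | today = 47
   |
   | # Flag today's count if it deviates from the baseline by more than
   | # three standard deviations -- a crude stand-in for real detection
   | # pipelines, which correlate logs, network traffic, and more.
   | mu, sigma = mean(baseline), stdev(baseline)
   | if abs(today - mu) > 3 * sigma:
   |     print(f"anomaly: {today} failed logins vs. baseline {mu:.1f} +/- {sigma:.1f}")
   | ```
   |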
15 | It is common for many systems to be constantly under attack. What is more important is detecting successful attacks and determining their severity. A person who steals a pen from TrashPanda Bank is less of a concern than one who steals the vault keys from the manager!
16 |
17 | ## Non-repudiation / Forensic Traceability
18 |
19 | Once an attack has succeeded, you may have an intruder in your systems. You may need to look through what has occurred to understand which actions an intruder performed and which were legitimate actions by the normal system.
20 |
21 | If in TrashPanda Bank, Bob the teller says that Alice the manager asked him to give her the contents of his cash drawer, but Alice denies this, how do we know who to believe? Well, if Bob got a receipt or there exists a video recording then there may be a way to prove who is lying and who is honest.
22 |
23 | In computer security, this is usually done by having something called non-repudiation. This is where a statement is made such that it later can be proven that a specific party actually made it. This is usually done by the party (Alice, let’s say) signing it with a private key that only she owns. Then any party with Alice’s public key can verify that Alice (or a party who compromised her private key) made that statement.
24 |
25 | So, as you can see, non-repudiation is essential for post-attack forensics and should be a goal for any system with multiple actors.
26 |
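   | To make the idea concrete, here is a minimal sketch of non-repudiation via a digital signature, using the widely available `cryptography` package; the statement and names are illustrative only.
   |
   | ```python
   | from cryptography.exceptions import InvalidSignature
   | from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
   |
   | # Alice signs a statement with a private key that only she holds.
   | alice_private = Ed25519PrivateKey.generate()
   | statement = b"Alice authorized emptying the cash drawer"
   | signature = alice_private.sign(statement)
   |
   | # Anyone holding Alice's public key can later verify the statement,
   | # so Alice cannot plausibly deny making it -- unless her private key
   | # was compromised, which is why key protection matters so much.
   | alice_public = alice_private.public_key()
   | try:
   |     alice_public.verify(signature, statement)
   |     print("statement was verifiably signed by Alice's key")
   | except InvalidSignature:
   |     print("signature does not match; the statement is not Alice's")
   | ```
   |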
27 | Note that it is possible to have differing amounts of non-repudiation and detection in a system. If TrashPanda Bank counts money for the whole bank at the end of each day, the bank may be able to quickly detect that something does not add up. However, this does not mean that they will know who is responsible. Conversely, if TrashPanda Bank keeps video recordings for all time but never checks them, they will not detect problems well, but when they do, they can figure out exactly what occurred.
28 |
29 | **[> Next Up: Recovery & Prevention](./recovery-and-prevention.md)**
30 |
--------------------------------------------------------------------------------
/guidance/background/response/recovery-and-prevention.md:
--------------------------------------------------------------------------------
1 | # Recovery & Prevention
2 |
3 | **[< Previous: Detection & Tracing](./detection-and-tracing.md)**
4 |
5 | Downstream from detection and tracing is recovery, and then that concept naturally leads us back around to the thought of preventing a breach in the first place.
6 |
7 | ## Recovery
8 |
9 | Once you know an attack has occurred, a major goal is to get the attacker out of your system. In some cases, this is very difficult. If an attacker gained the ability to install software as root on your devices, for example, then they could have installed basically any software (rootkits, firmware, etc.) and so you may need to start over.
10 |
11 | Fortunately, well designed systems usually have the ability to securely recover from a compromise. Note that it is common to assume that an attacker could act as a man-in-the-middle for your users. So, if a system’s key is compromised, you can’t securely revoke or restore trust using that same key. Instead, users will need to rely on a different key, typically the root of trust, which is compartmentalized and more privileged, to recover the system to a secure state.
12 |
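   | A rough sketch of that recovery pattern, using the `cryptography` package (the key names and the rotation-statement format are invented for illustration): the rotation is signed by the offline root key, not by the compromised service key, so the attacker cannot forge a competing rotation.
   |
   | ```python
   | from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
   | from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
   |
   | # Compartmentalized, more privileged root of trust (kept offline).
   | root = Ed25519PrivateKey.generate()
   | root_public = root.public_key()  # pre-distributed to all clients
   |
   | # The day-to-day service key is compromised; generate a replacement.
   | old_service = Ed25519PrivateKey.generate()  # assume the attacker holds this
   | new_service = Ed25519PrivateKey.generate()
   |
   | # The ROOT key signs the statement revoking the old key and
   | # designating the new one; the compromised key plays no role.
   | new_pub = new_service.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
   | statement = b"revoke-old-service-key;trust:" + new_pub
   | rotation_sig = root.sign(statement)
   |
   | # Clients verify the rotation against the root key they already trust.
   | root_public.verify(rotation_sig, statement)  # raises InvalidSignature if forged
   | print("clients now trust the replacement service key")
   | ```
   |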
13 | Recovery is a very important property to have. However, in general, it isn’t possible to recover in every case. After all, if every actor in a system is compromised, it doesn’t seem possible to ever move back to a state with a trustworthy root of trust without starting anew.
14 |
15 | ## Prevention
16 |
17 | Another means to deal with an attack is simply to prevent it from being effective. The word “simply” in the previous sentence is a bit misleading, because this is often one of the most difficult things to achieve. Well designed systems have this property for most types of attacks.
18 |
19 | Note that you need to be able to argue carefully why you protect against a set of attacks, including the scenarios in which an attacker is prevented from performing an action. Once again, being rigorous and clear about limitations is absolutely key.
20 |
21 | > [!IMPORTANT]
22 | > Aiming for Full Prevention
23 | >
24 | > Some modern systems provide prevention of only certain attacker actions, in only certain scenarios. They may prevent information from being valuable after a certain point in time, or prevent a key from being exfiltrated after a successful attack. (See HSMs, the concept of perfect forward secrecy, and ephemeral keys, as examples.) These properties are certainly nice to have, but ideally you want full prevention as a goal.
25 |
26 | ## Recovery vs. Prevention
27 |
28 | Once you understand the different means by which you can handle a compromise, a natural next question is whether there is an implicit ordering, so that prevention is always better than detection, for example. It turns out that this is not always the case. For example, suppose that TrashPanda Bank could detect Eve embezzling a small amount of money. Alternatively, they could have a means to prevent Eve from doing so, but not detect her attempt. In this case, TrashPanda Bank’s management may feel it is worth the small financial loss to know Eve is unreliable and fire or prosecute her.
29 |
30 | To consider another example, let’s say that TrashPanda Bank has a super alarm system that can detect when the view of any sensor is blocked momentarily. Unfortunately, TrashPanda Bank sits near a stand of cherry blossom trees, and when the blossoms fall they block the sensors, leading to a flood of false alarms. Suppose that TrashPanda set the alarm to automatically ring the police when triggered. After being summoned several times, the police are unlikely to respond to future alarms from TrashPanda, which could lead to the police ignoring a real alarm on the vault. So, in this case, the security system’s drawbacks may actually degrade security.
31 |
32 | > [!IMPORTANT]
33 | > Overprotection Sometimes Considered Harmful
34 | >
35 | > While the example above is a bit silly, adding a security mechanism does sometimes degrade security in practice. It used to be thought that changing passwords frequently was an important security practice. It was later shown that this made users choose weaker passwords, reuse passwords more often, and led to companies providing more vulnerable means to recover lost passwords.
36 |
37 | All this being said, there is a practical hierarchy of which defender capabilities are usually preferable. Prevention is usually best because it stops the negative outcome from occurring at all. Recovery is really, really important for all but the most unlikely of events. Note that recovery commonly requires manual effort, which is often reasonable; however, recovery should then be a rare act, to avoid overburdening the person who performs it. Detection is important, but can be overwhelming if it is overly broad. And if you can detect problems but cannot forensically trace the cause, it can lead to a lot of extra work.
38 |
39 | So, while it is not always true, in general:
40 |
41 | ```text
42 | Prevention > Recovery > Detection w/ Forensic Traceability > Detection > Forensic Traceability
43 | ```
44 |
--------------------------------------------------------------------------------
/guidance/background/security-basics.md:
--------------------------------------------------------------------------------
1 | # Security Basics
2 |
3 | **[< Previous: Getting Started](../getting-started.md)**
4 |
5 | There are so many foundational concepts and technologies needed to reason about the security of a cloud native application that describing them well would require another entire book’s worth of material. Rather than replicate that material here, the reader is directed to resources that contain this information. If you encounter an unfamiliar term in the text, please take the time to look it up and understand it.
6 |
7 | Most fundamentally, you should understand key concepts like integrity, non-repudiation, privacy, authentication, authorization, and trust. The [Cloud Native Security Lexicon](https://github.com/cncf/tag-security/tree/main/security-lexicon) has a quick overview of basic terms and concepts in computer security which covers these items.
8 |
9 | For encryption, there are a lot of concepts you need to understand, and cryptographic systems are very complex. Fortunately, you really just need to understand how to use them correctly and their strengths and weaknesses, instead of why they were designed the way they were. You will need to understand (at a minimum) public key cryptography, secret key cryptography, secure hash functions, key length, key distribution, root of trust, certificate formats (e.g., X.509), and certificate authorities. Depending on what you are assessing, understanding trust delegation, HMAC, post-quantum cryptography, transparency logs, forward secrecy, and similar concepts may be useful.
10 |
11 | >[!IMPORTANT]
12 | > **Critical Perspective on Broad Promises**
13 | >
14 | > Beware of systems making broad promises due to the use of blockchain, Web 3.0, or decentralization. To date, the proponents of these systems have claimed far greater benefits than what the core technology has been able to deliver.
15 |
16 | As an example, a proof-of-work blockchain is fundamentally a way to keep a distributed, append-only log amongst a set of distributed computers that don’t want to have a trusted centralized party. It is extremely slow and computationally wasteful compared to a centralized trusted server, but there is no longer a single point of compromise. That is, if you assume that the computational nodes have a protocol that provides this property, that the protocol is implemented correctly, and that some threshold (commonly 1/3 or 1/2) of the computational power isn’t held by evil people, etc. It also, by itself, doesn’t ensure that the information in the blockchain is actually valid or useful.
17 |
18 | Interestingly enough, a transparency log uses a lot of the same mechanisms as a blockchain and thus has some similar weaknesses. However, transparency logs currently don’t have the same stigma in the security community in part because the deployment environment and stakes are different. There are large deployments of transparency logs today but they are early enough in their lifecycle that as a community, we really don’t fully understand how and when these systems fail to provide adequate security in the same way we do the other technologies in this section.
19 |
20 | For computational security on a system, you need a basic understanding of access control. This means understanding compartmentalization / isolation as it relates to the operating system or container environment you are using. Access Control List (ACL) systems, file / device permissions, su (superuser) ability, system call filtering (seccomp), and capability / token systems are all very important to understand conceptually. Depending on your environment, knowledge of HSMs (Hardware Security Modules) and TPMs (Trusted Platform Modules) may also be relevant.
21 |
22 | There is an additional set of things to understand around user identity, authentication, and authorization. This involves concepts like multi-factor authentication (most often encountered as two-factor authentication), hardware tokens (e.g., Yubikeys), and OIDC (a way to log in to a system via authentication through a third party like Google or Facebook). It is key to understand how users are identified and how this is tied to logging events for auditing purposes.
23 |
24 | The last important concept to understand is the fundamental ways in which people design secure systems. Usually, you can find security design flaws by looking for situations that violate these principles and then reasoning about what problem occurs as a result. So concepts like the principle of simplicity, least privilege, fail-safe defaults, least common mechanism, minimizing secrets, open design, complete mediation, and least astonishment [Saltzer and Schroeder, The Protection of Information in Computer Systems] are really fundamental, and every person thinking about security should internalize them.
25 |
26 | > [!IMPORTANT]
27 | > **When “Simpler” Does Not Mean “More Secure”**
28 | >
29 | > These principles are not fundamental “laws” of computer security which should never be violated. They are guidelines that often lead to security problems when they are violated.
30 |
31 | For example, the principle of simplicity indicates that the simpler the component, the easier it is to reason about and thus secure. Suppose that TrashPanda Bank’s system designer learns of this and decides to remove the need to verify client ID cards to simplify the system. Now anyone can withdraw money from anyone else’s account, trivially! This “simplification” has clearly made the system’s security worse.
32 |
33 | So, instead, think about the principles when looking at a design and reason about whether the security would be better or worse if they were followed. Usually, following the design principles will guide you toward security.
34 |
35 | **[> Next Up: Threat Modeling 101](./threat-modeling-101.md)**
36 |
--------------------------------------------------------------------------------
/guidance/background/threat-modeling-101.md:
--------------------------------------------------------------------------------
1 | # Threat Modeling
2 |
3 | **[< Previous: Security Basics](./security-basics.md)**
4 |
5 | Security is one of the most critical properties to have in computing today. Unfortunately, it is also one of the most misunderstood. A common mistake people make is to tout something as “secure” or “insecure”. This doesn’t make a lot of sense, because it is missing important context: the scenario.
6 |
7 | In many non-security, real-world situations, the scenario is implicitly defined. For example, if I say “my car is reliable”, you can assume that it almost certainly will not break down on the way to work. However, you should not expect that a “reliable” car would make a good submarine or perform well on Mars. Performing well on Mars is just not what is implied by a general statement of a car’s reliability.
8 |
9 | Usually, one could just look at likely scenarios and determine the rarity of events, but there is another aspect of security that makes this approach break down: the intelligent adversary. In security, one assumes that an adversary has some ability to control the system or environment and, crucially, that an intelligent adversary will choose to set things up in a way that favors them. So, you may have set up the communication properties on your network to detect or correct 99.9999% of errors in random noise. But unless some secret prevents the attacker from knowing how your error correction works, the attacker can generate network traffic that makes your error correction useless.
10 |
11 | > [!IMPORTANT]
12 | > **Defining Scenarios**
13 | >
14 | > A fundamental aspect of threat modeling is the ability to frame and understand the various scenarios in which a system will operate. A key question that often guides this understanding is, "What are the intended use cases of a system, and where should it not be used?" This line of inquiry doesn't just establish the parameters within which a system is expected to perform but also helps to define the boundaries of its reliable operation.
15 |
16 | Challenging yourself and your team to identify these “out of scope” scenarios or non-uses can be revealing. It prompts a closer examination of implicit assumptions and potential weaknesses. For instance, you could consider a system you're familiar with and ask, "What would be the 'submarine or outer space' equivalent for your system?" Is syscall inspection suited to examining ingress traffic? Is a mutating admission webhook effective in enforcing kernel security? This kind of hypothetical questioning can uncover overlooked vulnerabilities and lead to a more robust design.
17 |
18 | This exercise not only broadens the scope of traditional threat modeling but also encourages a proactive approach to security. By contemplating extreme 'out-of-scenario' uses, we can better understand the full range of risks a system may face and fortify it against more than just the probable threats.
19 |
20 | One way we reason rigorously about security is through a process called threat modeling. Threat modeling is sort of like setting up a game between the defender and the attacker. The threat model describes the properties you are trying to provide and the capabilities of the attacker. If the attacker is able to find a way to defeat the defender’s desired security properties, this is a viable avenue of attack. We call such a successful attack a compromise, and the weakness that lets it occur a vulnerability.
21 |
22 | Note that the terms bug and vulnerability do not mean the same thing. While many bugs do enable an attacker to launch a successful attack, many bugs are just anomalous, benign behavior. Similarly, a design flaw can cause a correctly implemented system to have a vulnerability. So, there need not be a bug in order to have a vulnerability.
23 |
24 | **[> Next Up: Threat Modeling Actors](./threat-modeling/actors.md)**
25 |
--------------------------------------------------------------------------------
/guidance/background/threat-modeling/actions.md:
--------------------------------------------------------------------------------
1 | # Threat Modeling: Actions
2 |
3 | **[< Previous: Actors](./actors.md)**
4 |
5 | In addition to understanding the actors, it is important to know what actions they perform. This should include the actions that are desirable (at a high level) and how they are carried out, including any checks and balances.
6 |
7 | For example, in TrashPanda Bank, customers may have a list of actions they perform such as opening an account, withdrawing money, checking a balance, renting a safety deposit box, visiting their safety deposit box, and making a deposit. For each of these actions, there needs to be a detailed description of how the process works and how the various steps are verified by different parties.
8 |
9 | An example action may look something like the following.
10 |
11 | ```text
12 | Renting a safety deposit box:
13 |
14 | Requires a customer with a current account to make an in-person request at TrashPanda Bank to a teller.
15 |
16 | The teller processing the request first verifies the customer’s account exists, is linked to the customer (by checking their identification) and has a balance of at least $100.
17 |
18 | The teller then gives the terms and conditions form to the customer, who signs the request. After this is confirmed by the teller, the customer pays the deposit fee to the teller. The teller logs this transaction into their log book and inserts the payment as per the steps in “making a deposit”, except that the remittance goes to TrashPanda’s safety deposit box fund (listed in the teller’s handbook) instead of the customer’s account.
19 |
20 | The manager is then called by the teller, who re-checks the client’s identification and verifies the remittance to TrashPanda’s safety deposit box was processed by checking the logbook of the teller. The manager now accesses the safety deposit usage map to find an unused safety deposit box, considering customer requests for a specific lucky number or an accessible box. The manager then provides the customer a copy of the key for the box. The teller and the manager use their keys to provide the customer access to the vault, where the safety deposit boxes are kept. The manager and teller leave the vault to provide the customer privacy. Once the customer is finished, they exit the vault and the manager locks the vault again.
21 | ```
22 |
23 | Note that increased complexity of actions does tend to correlate with insecurity, at least if you ignore the complexity added by security steps. A system which does a few simple things is easier to secure in most cases.
24 |
25 | Please don’t mistake this for saying that fewer API calls or system calls means better security. If that were true, we could just have one API call that takes an argument telling it what action to actually perform! This would be a case where the complexity of the API isn’t well reflected by the number of API calls.
26 |
27 | **[> Next Up: Goals](./goals.md)**
28 |
--------------------------------------------------------------------------------
/guidance/background/threat-modeling/actors.md:
--------------------------------------------------------------------------------
1 | # Threat Modeling: Actors
2 |
3 | **[< Previous: Threat Modeling 101](../threat-modeling-101.md)**
4 |
5 | We need a term to describe the parties in the system that perform all of the actions in the system and might be erroneous, compromised, or just plain malicious. We call these actors and the things they do actions. It is important to enumerate these up front as they are effectively the “players” in the threat modeling game.
6 |
7 | Back in earlier days of computing, many computer systems were fairly isolated from each other and programs needed to be secure in this environment. Hence the number of actors was small, often just a server, a client, and an attacker. In modern systems that consist of many distributed and isolated components, the number of actors can be very large.
8 |
9 | To see how large modern systems can get, consider an assessment for the [Sigstore project](https://www.sigstore.dev/) and the way it might get integrated into an open, community software repository like [PyPI](https://pypi.org/), the community repository of software for the Python programming language. The actors include the PyPI server, the administrators of PyPI, the CA(s) trusted to issue PyPI’s public key, parties that control BGP and/or routers, parties that control DNS, the developers who use PyPI for their software, the CDN that distributes PyPI software, the users downloading that software, and outsiders. If this seems overwhelming, consider that at this point we haven’t even listed the parties for Sigstore, which would add another 10 or so actors!
10 |
11 | However, in coming sections, we will describe techniques that will enable one to quickly categorize groups of actors as equivalent, which helps us to keep this manageable in practice. For example, for many systems a party that can control the network has similar capabilities in many cases independent of whether they control routers, BGP, or DNS. So, for threat models that focus on higher level communication properties between actors over higher level network protocols, the distinction of exactly how an actor controls the network may not matter.
12 |
13 | ## Is It Good Or Bad To Have Many Actors?
14 |
15 | You may think that having more actors automatically makes a system have better or worse security properties. There are two factors that lead to having many actors and they impact the security of a system in opposing ways.
16 |
17 | The first factor is the security principle that complexity tends to lead to insecurity. Simply put, if an attacker can bypass your system by finding a flaw, the more places the attacker can look, the easier it tends to be. Of course, this doesn’t mean you should remove encryption code or security checks because they make the code longer! It just means that all other things being equal, more complexity (i.e. actors) tends to lead to more bugs.
18 |
19 | The second factor is the principle of least privilege: a party should have as little privilege as possible. This is the main argument for compartmentalization. Compartmentalization means that when one portion of a system fails or is compromised, the failure is contained, much like a ship’s hull is divided into watertight sections so that a breach in one section does not automatically flood the entire ship. Compartmentalization helps to contain the attacker’s capabilities from a single compromise. Consider instead a system with a single point of failure; this has fewer actors, but is clearly weaker from a security standpoint.
20 |
21 | So, you really cannot judge the security of a system by the number of actors alone. You need to understand other key aspects of the system.
22 |
23 | ## Compartmentalization of Actors
24 |
25 | A key aspect to consider is the mechanism by which actors are compartmentalized (i.e., isolated) from each other in a system. After all, if the private keys for Alice and Bob are stored on a file system that both have access to, then if either Alice or Bob is malicious, they can steal the other’s key and then do anything that key is trusted to do. So, it is worth discussing why, how, and when actors are separated from each other; a brief sketch follows.
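
As one simple illustration of such a mechanism (the paths and key material below are invented for the example; a production system would more likely use an OS keystore or an HSM), POSIX file permissions can keep each actor's key readable only by that actor's own account:

```python
import os
import secrets

# Hypothetical per-actor key storage: each actor gets a private
# directory and an owner-only key file (POSIX permissions).
for actor in ("alice", "bob"):
    keydir = os.path.join("/tmp/keys", actor)  # illustrative path only
    os.makedirs(keydir, mode=0o700, exist_ok=True)
    keyfile = os.path.join(keydir, "signing.key")
    # A real system would generate a proper asymmetric keypair here.
    fd = os.open(keyfile, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(secrets.token_hex(32))

# If alice and bob run under separate OS accounts, modes 0o700/0o600
# prevent either from reading the other's key. The isolation vanishes
# if they share an account or the underlying storage is shared.
```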
26 |
27 | Note that this also may depend on the deployment environment. Perhaps some deployments share storage for Alice and Bob for cost reasons. This is important to highlight, as it will become relevant later when we think about the impact of attacks.
28 |
29 | One more note: it is common for a system to have different levels of compartmentalization between different actors. Most systems have a trusted actor who is responsible for indicating who the other actors in the system are. (This is often a party like a CA, root of trust, root key, or similar.) As a result, this trusted actor can effectively issue false credentials and pretend to be any other party. In contrast, the other actors in the system may have strong isolation between them, making the compromise of each one an effectively independent attack that must be carried out separately. The degree to which the isolation between parties contains a compromise can be a critical aspect of the system’s security.
30 |
31 | **[> Next Up: Actions](./actions.md)**
32 |
--------------------------------------------------------------------------------
/guidance/background/threat-modeling/attack-graphs-technique.md:
--------------------------------------------------------------------------------
1 | # Threat Modeling: Attack Graphs Technique
2 |
3 | **[< Previous: Goals](./goals.md)**
4 |
5 | Once you understand the potential attacker(s) and a goal, it is helpful to think through the ways in which they could achieve this.
6 |
7 | While you can just sit and do this in whatever way you want, it is often useful to brainstorm using a tool called an Attack Graph. This is also called an Attack Tree, Threat Tree, or Threat Graph in some literature.
8 |
9 | At the top of an attack tree sits the root node, which represents the goal of the attacker.
10 |
11 | For example, the attack tree below has “Open Safe” as the root node, so this is the attacker’s goal. The nodes in the tree (i.e., the square boxes) are connected by one or more edges (the lines between boxes). For two nodes connected by an edge, the higher node is called the parent and the lower node the child. The child node or nodes contain additional details about how to achieve the parent node.
12 |
13 | ```mermaid
14 | graph TD;
15 | A[Open Safe] --> B[Pick Lock]
16 | B --> C[Get physical access]
17 | B --> D[Pick the lock]
18 | A --> E[Learn Combo]
19 | E --> F[Find written combo]
20 | E --> G[Get combo from target]
21 | G --> H[Threaten]
22 | G --> I[Blackmail]
23 | G --> J[Eavesdrop]
24 | J --> K[Get target to state combo]
25 | J --> L[Listen to conversation]
26 | G --> M[Bribe]
27 | A --> N[Cut Open Safe]
28 | N --> O[Get physical access]
29 | N --> P[Cut open the lock]
30 | A --> Q[Install improperly]
31 | ```
32 |
33 | At the next level, we can see that the goal of learning the combination can be achieved in two ways: finding the written combination, or getting the combination from the target (an authorized individual who possesses the combination to the safe), which can in turn be done in four ways. These are OR nodes: success in any one of these attacks leads to success of the ultimate goal of opening the safe. One way to retrieve the combination from the target is eavesdropping, which is an AND node: it requires both that the victim state the combination and that the attacker listen in on the conversation. Failure of either results in an unsuccessful attempt.
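
To make the AND/OR semantics concrete, here is a small, hypothetical evaluator (a toy sketch, not standard tooling) that models a fragment of the tree above and checks whether a given set of leaf capabilities achieves the root goal:

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    kind: str = "leaf"            # "leaf", "or", or "and"
    children: list = field(default_factory=list)

    def achievable(self, capabilities: set) -> bool:
        if self.kind == "leaf":
            return self.name in capabilities
        results = (c.achievable(capabilities) for c in self.children)
        return any(results) if self.kind == "or" else all(results)


# A fragment of the safe example: eavesdropping is an AND node.
eavesdrop = Node("Eavesdrop", "and", [
    Node("Get target to state combo"),
    Node("Listen to conversation"),
])
learn_combo = Node("Learn Combo", "or", [Node("Find written combo"), eavesdrop])
open_safe = Node("Open Safe", "or", [learn_combo])

# An attacker who can listen but cannot make the target state the
# combination fails the AND node, so the root goal is not achievable.
print(open_safe.achievable({"Listen to conversation"}))   # False
print(open_safe.achievable({"Listen to conversation",
                            "Get target to state combo"}))  # True
```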
34 |
35 | > [!NOTE]
36 | > **Unraveling Attack Graphs, Commentary by Justin Cappos**
37 | >
38 | > Attack graphs were really helpful for me when I was first starting to threat model large systems and also are really helpful now when I don’t understand a system well. Today, I often can intuitively go through and enumerate the cases here because I’ve had enough practice. So, I rarely write out an attack graph. (I usually jump straight to attack matrices, which will be described later.)
39 | >
40 | > You can think of the exercise of writing out an attack graph like writing out your multiplication tables by hand before you have them memorized. Eventually it may become second nature, but it will be an immense help at first. If you’re starting out, I strongly encourage you to start with attack trees though and get practice with them. This will help you build the foundation you need to do more accurate threat assessments.
41 |
42 | One problem with attack graphs is that you don’t necessarily know how complete they are. There will always be a wide array of things that you haven’t thought of. Be sure to think back to your system goals carefully and focus on them. When you reason about the situations where those goals hold, think about what those situations mean for an attacker. How is the attacker constrained? What can the attacker do? You may need to update the goals and other parts of the writeup as you go through this process.
43 |
44 | There is a wealth of material on attack trees that focuses on annotating them with parameters of different types. These can do things like help you reason about what attackers with different skill sets, access, or constraints might do in a system, or how much an attack might cost an attacker. As you are working through examples, you may find it useful to refer to the following reference: Schneier, B. “Attack Trees.” Schneier on Security, Dr. Dobb's Journal, December 1999, https://www.schneier.com/academic/archives/1999/12/attack_trees.html.
45 |
46 | > [!NOTE]
47 | > **Finding Business Value, Commentary by Jack Kelly**
48 | >
49 | > For some clients or colleagues, Attack Graphs and Trees are a valued deliverable. They are most valued by visual learners and non-technical persons as a tangible representation of what is elaborated on in a Threat Matrix. An Attack Graph helps a reader easily follow from initial breach to the attacker’s goal, and identify which nodes on the graph may be hotspots, either for traversal to other goals or because they appear in many possible routes to the same point of impact. This provides a quantifiable justification for the controls used to remediate the threat of attack.
50 |
51 | Attack Graphs can be intensive to build out and maintain, so it is recommended to use a solution that can generate Attack Graphs from code.
52 |
53 | **[> Next Up: DREAD Technique](./dread-technique.md)**
54 |
--------------------------------------------------------------------------------
/guidance/background/threat-modeling/comprehensive-coverage.md:
--------------------------------------------------------------------------------
1 | # Comprehensive Coverage
2 |
3 | **[< Previous: Understanding Risk](./understanding-risk.md)**
4 |
5 | One common problem is that it is easy to miss one or more cases when doing threat modeling. With distributed systems that have many components, this problem becomes much more common. The reason is that there are many different combinations of components that could be compromised by an attacker and used collectively to do nefarious things.
6 |
7 | For example, suppose that at TrashPanda Bank the vault is locked and may only be unlocked using the manager’s key together with a key from any one of the tellers. People going into the vault are also checked by a security guard to ensure they are escorted in by the manager. All vault entry and exit times are logged by the security guard. The security guard notifies the manager when the customer leaves so that the manager and teller may retrieve their keys and re-lock the vault, which the guard confirms to the manager.
8 |
9 | If you threat model this situation, you also need to consider cases where a malicious security guard works with a malicious customer, for example by not logging the customer’s entry. Suppose the customer enters the vault and starts a fire or takes some similarly destructive action. If the customer’s entry isn’t logged, it will not be possible to know whom to blame. Even worse, the guard could potentially add a log entry indicating that a different customer entered, blaming them for the incident. The system has lost forensic traceability (the ability to know what happened) for these events because it has insufficient protections against a malicious security guard working in coordination with a malicious customer.
10 |
11 | Similarly, if a teller and guard work together, they could simply fail to re-lock the vault after a customer leaves. The teller could fail to perform the action, and the guard could simply tell the manager that the vault was re-locked.
12 |
13 | ## Attack Matrices
14 |
15 | As the simple example above shows, there can be a number of fairly complex interactions between actors when they may be malicious and act in unison. We could just write one massive block of text describing all of the interactions in the system and how different malicious actors can cause harm. However, this would be unwieldy to read, and it would be hard to ensure we didn’t miss any cases. Instead, we recommend you write it in a way that a reader can more easily reference.
16 |
17 | To do so we use a representation called an attack matrix. An attack matrix is typically written so that the rows of the matrix correspond to sets of actors that are under the control of the attacker. The columns often represent different security designs that you may want to evaluate, or different capabilities the attacker may have. What you are effectively doing is putting the text describing what an attacker can do in the cell of the matrix that corresponds to that set of actors and capabilities.
18 |
19 | Let’s look at a few example attack matrix entries for the previous section’s example vault at TrashPanda Bank.
20 |
21 | | Malicious Actors | Impact |
22 | | --- | --- |
23 | | Customer + guard | Loss of forensic traceability from customer malicious actions. Able to falsely blame other customers for malicious actions |
24 | | Teller + guard | Vault may remain unlocked after a customer visits the vault, when this teller and guard are working |
25 |
26 | ## Reducing the number of actors
27 |
28 | Note that if we continue to fill out the matrix above, there will be quite a few rows, because there are `2^(number of actors)` different combinations of malicious actors (each actor is either malicious or not, so the combinations form a power set).
29 |
30 | When the number of actors is even moderately large (like 5 or 6), this can be overly burdensome; six actors already yield 64 possible combinations. Fortunately for us, in most cases the number of interesting sets of actors is actually quite small. For example, if the teller, guard, and manager work together, they can really do as they please, so a customer also being malicious adds no further impact to the attacks that can be performed.
31 |
32 | A few useful rules to consider:
33 |
34 | 1. A superset of a set of malicious actors can do at least the union of what all subsets of those actors can do. In other words, if teller+manager can have impact X, manager+customer can have impact Y, and teller+customer can have impact Z, then manager+customer+teller can have any of impacts X, Y, and Z. In fact, the impact may be greater than this, because X, Y, and Z may be limited by checks the non-malicious party performs.
35 | 1. It is common for many rows to subsume other rows. This is for two reasons. First, once a certain level of compromise is reached, usually the attacker effectively has full control of the system. In this case, additional compromises do not change the security impact of the attack. Second, some parties are so limited that their ability to harm a system has minimal added impact. So, whether they are malicious or not is inconsequential.
36 | 1. Many capabilities are quite easy to get in practice. So, if this is the case, it may be better to assume that an attacker already has those capabilities in all cases in the matrix. For example, it is common to assume a man-in-the-middle attacker who can intercept and modify network traffic. Breaking the table down into attackers that can and cannot do this may make the table unnecessarily long.
37 |
38 | A question arises: if you have different ways to get the same impact, how do you label the row? In an attack matrix, you take the minimal set of actors that will cause a certain impact and label the row with it. This indicates that any attacker who has compromised at least these parties can perform this action.
39 |
40 | Notice also that in some cases the impact of compromising different, disjoint sets of parties could be the same.
41 |
42 | For example, suppose that teller+guard and manager+guard have the same impact. In this case, it is sensible to write the row as `teller+guard` OR `manager+guard` to save space instead of having two duplicate rows.
43 |
44 | These space-saving tips do not fully solve the problem, though. Consider that the matrix we wrote before has the customer+guard row (as above), as well as the potential for us to add a teller+manager+guard row. How do you know which row of the matrix to use? To make this clear to the reader, you should sort the attack matrix so that the most impactful attacks are lower in the matrix. When reading an attack matrix and reasoning about a scenario, move down the matrix to find the lowest row that you match, then use this cell to determine the impact. A small sketch of this minimize-and-sort idea follows.
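
As a minimal, hypothetical sketch of these space-saving rules (the rows are invented and drastically simplified; a real matrix requires human judgment about which combinations matter), one can mechanically drop rows subsumed by a smaller actor set with the same impact and sort the remainder:

```python
# Hypothetical attack matrix: actor sets -> impact description.
matrix = {
    frozenset({"customer", "guard"}): "loss of forensic traceability",
    frozenset({"teller", "guard"}): "vault may remain unlocked",
    # Subsumed: adds no impact beyond the teller+guard row.
    frozenset({"teller", "guard", "customer"}): "vault may remain unlocked",
}

# Keep only minimal actor sets per impact: drop a row if a strict
# subset of its actors already achieves the same impact.
minimal = {
    actors: impact
    for actors, impact in matrix.items()
    if not any(other < actors and matrix[other] == impact
               for other in matrix)
}

# Sort so larger (more powerful) actor sets appear last, matching the
# "read downward to find the lowest matching row" convention; a real
# matrix would sort by severity of impact rather than set size.
for actors, impact in sorted(minimal.items(), key=lambda kv: len(kv[0])):
    print(" + ".join(sorted(actors)), "->", impact)
```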
45 |
46 | For more information about threat matrices, here are some references for further reading:
47 |
48 | - G. Almashaqbeh, A. Bishop and J. Cappos, "ABC: A Cryptocurrency-Focused Threat Modeling Framework," IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2019, pp. 859-864, doi: 10.1109/INFCOMW.2019.8845101. [(Link)](https://arxiv.org/abs/1903.03422)
49 | - Matt Tatam, Bharanidharan Shanmugam, Sami Azam, Krishnan Kannoorpatti, "A review of threat modelling approaches for APT-style attacks", Heliyon, Volume 7, Issue 1, 2021, ISSN 2405-8440,
50 | [(Link)](https://www.sciencedirect.com/science/article/pii/S2405844021000748)
51 | - Rajesh Gupta, Sudeep Tanwar, Sudhanshu Tyagi, Neeraj Kumar, "Machine Learning Models for Secure Data Analytics: A taxonomy and threat model", Computer Communications,
52 | Volume 153, 2020, Pages 406-440, ISSN 0140-3664,
53 | [(Link)](https://www.sciencedirect.com/science/article/pii/S0140366419318493)
54 | - Zhang, L., Taal, A., Cushing, R. et al. "A risk-level assessment system based on the STRIDE/DREAD model for digital data marketplaces." Int. J. Inf. Secur. 21, 509–525 (2022). [(Link)](https://doi.org/10.1007/s10207-021-00566-3)
55 |
56 | **[> Next Up: Detection & Tracing](../response/detection-and-tracing.md)**
57 |
--------------------------------------------------------------------------------
/guidance/background/threat-modeling/dread-technique.md:
--------------------------------------------------------------------------------
1 | # DREAD Technique
2 |
3 | **[< Previous: Attack Graphs Technique](./attack-graphs-technique.md)**
4 |
5 | The properties of an attack will vary based on the avenue of the attack. Some actions an attacker can only perform once before detection, while others an attacker can do repeatedly. Some require specialized skills, while others can be done by anyone.
6 |
7 | > [!NOTE]
8 | > **From Improbable to Inevitable, Commentary by Justin Cappos**
9 | >
10 | > It is helpful when thinking about attacks to really think outside the box. One exercise I like to do is to “prove” why an attack couldn’t happen. As I’m reasoning through it, I usually come up with the way in which the attack could occur.
11 | >
12 | > For example, consider TrashPanda Bank. If I’m thinking of how to get into the vault, I might think “It’s not possible because there is a guard during the day and an alarm system (which automatically triggers a lockdown) at night. Even if you get past those, you need to have the manager key and a teller key to open the vault.” I would turn that thought into “In order to break into the vault, you need to somehow bypass a guard during the day or the alarm system at night. The attacker needs a manager key and teller key...” and then proceed from there to devise under what circumstances this would be possible.
13 |
14 | It is also important to question your assumptions a bit when doing this process. For example, the reasoning above assumes that the alarm system functions properly, that the locking mechanism in the vault operates as designed, and that the vault was correctly installed so that drilling in and similar attacks are impractical.
15 |
16 | ## Different attacks can have different impact
17 |
18 | Not every compromise is the same in a system. In some cases an attacker gains only limited access to a system. In others, they may have total control. We describe these differences in terms of the “impact” of an attack.
19 |
20 | You can think of impact as the monetary cost, reputational cost, etc. of an action having occurred. However, it is hard to know an exact value for this. What is the cost of having leaked a large amount of private customer data? Unfortunately, this seems to happen fairly regularly at some large companies, and very little actually comes of it. In other cases, a company may face lawsuits from investors and customers or fines from regulators after a security breach, which makes the impact easier to quantify.
21 | ## Using DREAD to Estimate the Expected Impact of a Threat
22 |
23 | **DREAD** outlines the impact categories of **D**amage, **R**eproducibility, **E**xploitability, **A**ffected users, and **D**iscoverability. This mental model provides a measurable means to quantify the impact of an attack by rating it from 0 to 10 in each impact category, 0 being no impact and 10 being the highest impact. The final impact is the average of the ratings across these categories.
24 |
25 | ```text
26 | "Impact score" = (Damage + Reproducibility + Exploitability + Affected users + Discoverability)/5
27 | ```
28 |
29 | Let’s dive into what each impact category means:
30 |
31 | - **Damage:** The potential destruction the attack is capable of causing to the assets in scope. In the context of information security, information disclosure counts as damage. A rating of 0 stands for no damage; 10 stands for destruction of the information, or of the information system serving that data, causing denial of service.
32 | - **Reproducibility:** How easy is it to reproduce this attack? Can a novice pull it off, or does it take experience to find the vulnerability and cause this attack? A rating of 0 means difficult or impossible to reproduce; 10 means very easy to reproduce.
33 | - **Exploitability:** Complementary to Reproducibility, exploitability refers to what is needed to ensure the attack is successful. Does it take advanced scripting or tools to exploit the vulnerability, or is it as simple as appending the string “OR 1=1”? A rating of 0 means the attack requires practically infeasible computational power or sophisticated tools and techniques, whereas 10 means all that is needed is an interface to interact with the target application, such as a browser or command line.
34 | - **Affected users:** How many users are impacted by this attack? Ratings range from no users (0), through all non-administrator users, to all users and administrators alike (10).
35 | - **Discoverability:** How easy is it to find the attack in the first place? Is it in plain sight (for example, use of components with publicly disclosed vulnerabilities, authentication in the URL, or directory traversal), or is it hard to discover? The scores range from 0 (very hard to discover) to 10 (very easy to discover).
36 |
37 | | Impact Category | Description | Ratings Range |
38 | | --- | --- | --- |
39 | | **Damage** | How bad is the damage? | No damage = 0; complete destruction = 10 |
40 | | **Reproducibility** | How easy is it to reproduce this attack? | Difficult to reproduce = 0; easy to reproduce = 10 |
41 | | **Exploitability** | How easy is it to cause this attack? | Difficult or practically infeasible = 0; easy to exploit = 10 |
42 | | **Affected Users** | Which users does this attack impact? | No users = 0; all users across privilege levels = 10 |
43 | | **Discoverability** | How easy is it to discover? | Difficult to discover = 0; easy to discover = 10 |
44 |
45 | The DREAD framework eases threat treatment by putting a numeric value on threats using criteria that novice professionals are familiar with and can articulate, thus lowering the barrier to entry. While the framework looks deceptively simple, accurate analysis of complex ecosystems requires extensive, up-to-date information security expertise.
46 |
47 | In practice, many security experts argue that discoverability is both hard to quantify and often gotten wrong. As a result, it is suggested to use DREAD without trying to estimate D (Discoverability). To do this, you would always mark Discoverability as a 10, as in the sketch below.
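
A minimal sketch of the scoring arithmetic, with Discoverability pinned at 10 per the advice above (the example ratings are invented for illustration):

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability=10):
    """Average the five DREAD ratings (each 0-10).

    Discoverability defaults to 10 so the score does not depend on
    estimating D, following the advice above.
    """
    ratings = (damage, reproducibility, exploitability,
               affected_users, discoverability)
    assert all(0 <= r <= 10 for r in ratings), "ratings must be 0-10"
    return sum(ratings) / len(ratings)


# Hypothetical threat: moderate damage, easy to reproduce and exploit,
# affects most users. With D pinned at 10: (6 + 8 + 8 + 7 + 10) / 5.
print(dread_score(damage=6, reproducibility=8, exploitability=8,
                  affected_users=7))   # 7.8
```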
48 |
49 | For more information on the DREAD model, refer to [DREAD (risk assessment model) - Wikipedia](https://en.wikipedia.org/wiki/DREAD_(risk_assessment_model)).
50 |
51 | **[> Next Up: Understanding Risk](./understanding-risk.md)**
52 |
--------------------------------------------------------------------------------
/guidance/background/threat-modeling/goals.md:
--------------------------------------------------------------------------------
1 | # Threat Modeling: Goals
2 |
3 | **[< Previous: Actions](./actions.md)**
4 |
5 |
6 |
7 | One of the most important things to do in threat modeling is to understand what an attacker can and cannot do based upon the access they have. In our concept of a “game” this is like the conditions by which the attacker gains points (by violating the goals you have for your system) and the legal moves that the attacker can make toward that end.
8 |
9 | ## System Goals
10 |
11 | Assuming that you are being realistic in your attacker model, the stronger the set of moves you allow the attacker, the more secure your system will be. To understand why, suppose that TrashPanda Bank assumed that all of its employees were trustworthy and did their jobs flawlessly. If it turns out that one of the employees is malicious or makes a mistake, then you are now outside the bounds of what you considered in your assessment. It is as though a player of the game you set up made a move that your analysis treated as illegal! This means you don’t have a way of understanding what the impact of an attack would be or whether your security will hold.
12 |
13 | Note that this does not mean you should not implement controls in the following areas! It just means that long-held assumptions about the efficacy of these measures should be restated, and that they are better used as part of a layered approach to make things harder, rather than as infallible controls. They should not be relied on alone to stop a skilled attacker.
14 |
15 | > [!IMPORTANT]
16 | > **Adapting to Modern System Boundaries**
17 | >
18 | > STRIDE (described later in this document) has been used for a long period of time, but unfortunately it has portions that don’t apply as well to modern distributed systems. The notion of escalating privilege may be better thought of as the ability to move laterally (break the boundaries between actors) in a system. In other words, once an attacker gains access to X, are they able to find a way to get access to Y? This involves a failure to sufficiently compartmentalize X and Y from each other.
19 | > Also, the notions of spoofing and escalation should be thought of in an additional way that a reader may not initially consider. Distributed systems often use a concept called a token (also called a capability in some literature), where an API request contains information to authorize the transaction. In these cases, authentication is not needed: the API request token is sufficient to authorize access. This is much like a movie ticket being sufficient to grant access to a movie; there is no need to check the attendee’s identification, so long as they possess a valid ticket. So, for Eve to gain access to Bob’s data, she doesn’t necessarily need to know Bob’s password. She may have simply gained access to a token that some service uses to perform actions on behalf of Bob. She may even confuse the service into doing the actions she wants using Bob’s token. Of course, if tokens are not used and service X is just always trusted to do a set of actions, spoofing and escalation become trivial once you compromise that service! A minimal sketch of this token pattern follows this note.
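
As a minimal sketch of the capability-token pattern (the functions and scopes here are invented for illustration; real systems use standards such as OAuth 2.0 bearer tokens or macaroons), note that the service checks only that the token is valid, never who presents it:

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)   # held only by the service


def issue_token(scope: str) -> str:
    """Mint a token authorizing `scope`; anyone holding it may use it."""
    mac = hmac.new(SERVER_SECRET, scope.encode(), hashlib.sha256)
    return f"{scope}:{mac.hexdigest()}"


def handle_request(token: str, action: str) -> bool:
    """Authorize by token possession alone -- like a movie ticket,
    no identity check is performed on the presenter."""
    scope, _, mac = token.partition(":")
    expected = hmac.new(SERVER_SECRET, scope.encode(), hashlib.sha256)
    return hmac.compare_digest(mac, expected.hexdigest()) and action == scope


bob_token = issue_token("read-bob-data")
# If Eve steals bob_token, or confuses a service into using it on her
# behalf, the request succeeds; nothing ties the token to Bob.
print(handle_request(bob_token, "read-bob-data"))   # True
```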
20 |
21 | ## Common Assumptions
22 |
23 | To make things simpler, there is a set of standard assumptions that most systems make in the current era (this was originally written in early 2023). A note for any future reader: these assumptions tend to evolve over time and so may no longer be reasonable when you read this document.
24 |
25 | ### Leadership will not compel the organization to perform actions that violate the security goals of the system
26 |
27 | This means assuming that the government, company management, or a similar body will not compel the organization to perform actions that violate the security goals of the system.
28 |
29 | While it may seem a fanciful attack to some readers, this is a legitimate risk that many companies have faced and in fact do (often silently) face today. For a real-world example, consider the pressure on Apple to create a malicious update and unlock the San Bernardino shooter’s phone [Wikipedia: Apple–FBI encryption dispute]. However, most security systems are designed in ways that will fail in such a case, allowing the government, company leadership, or a sufficiently large set of malicious insiders to violate the system’s security goals.
30 |
31 | ### Cryptographic algorithms that are widely thought to be secure, are secure
32 |
33 | This includes public/private cryptography, symmetric key algorithms, cryptographically secure hash algorithms, etc.
34 |
35 | In practice, contests like the ones NIST holds to choose cryptographic algorithms have tended to produce excellent results. Even when algorithms fail, they tend to be broken slowly: exploiting the weakness is often first possible only for parties with a large quantity of computational resources, rather than there being a sudden moment when anyone can trivially break the algorithm. Other standards bodies have a much more mixed record, in particular when their security systems are effectively designed by committee. Look carefully for broad peer review of cryptographic algorithms and security designs, as NIST performs, as an indicator of quality.
36 |
37 | ### Hardware memory protection mechanisms work as designed
38 |
39 | After Spectre and Meltdown, the community realized that there are ways to use rare cache and memory behaviors to bypass security protections. For example, a program could read memory in the operating system kernel or in another program. A series of defensive code changes now makes these attacks infeasible on modern hardware (as we understand it). The assumption that memory protections work is common not because it is universally thought that memory protection will absolutely hold in all cases, but largely because dropping this assumption makes it too challenging to design security systems: it essentially makes it infeasible to do compartmentalization on a single piece of computing hardware and may make it feasible to cause information disclosure from any component on the same physical hardware. As this is currently an area of active research by hardware security researchers and chip makers, the protections and our understanding of the risks in this domain are likely to evolve over time.
40 |
41 | > [!NOTE]
42 | > **Future-proofing While Maintaining Compatibility**
43 | >
43 | > Note that today, these assumptions are being relaxed by some modern security systems like TUF. For example, TUF supports multiple cryptographic algorithms and has a built-in way to add and remove cryptographic algorithm support while maintaining security properties. This enables secure migration to new algorithms either proactively, or as the need arises.
44 | >
45 | > For example, while there is support in TUF for post-quantum cryptographic algorithms, many adopters may not have enabled it. A TUF repository can enable post-quantum crypto and re-sign its metadata using both the old and new algorithms, thus allowing current users to securely transition to the new algorithm and protecting all users against attackers with quantum capabilities.
46 |
47 | ### An attacker cannot hack a specific component or system
48 |
49 | Modern systems tend to have so much code and use so many libraries that this just isn’t a reasonable expectation. Even a “proven to be secure” microkernel like seL4 has had security bugs found in it [SeL4 issue #85, SeL4 issue #86, seL4 Version 9.0.0 Release Notes]. It is important to assume code could have bugs, especially in large components, and to design your system with different isolated compartments so that its security degrades gracefully when components are successfully breached. This assumption seems to be on the way out, but some systems created today still rely on it. You should assume that such compromises are a matter of when, not if [TAG Security Catalog of Supply Chain Compromises].
50 |
51 | ### A key or other secret will never be leaked, compromised, misgenerated, etc.
52 |
53 | Incidents violating this assumption are common [TAG Security Catalog of Supply Chain Compromises]. Modern systems should include revocation mechanisms that retain trust even when an attacker knows a secret and is a man-in-the-middle. Ideally, one should also design the system to prevent substantial harm while working to address a secret disclosure.
54 |
55 | ### Multifactor authentication (MFA) using SMS is a sufficient barrier
56 |
57 | This assumption isn’t actually a bad one for MFA that does not use SMS. Organizations using authenticator apps or hardware tokens seem to do quite well from a security standpoint (barring a few minor hiccups [Wired Magazine: The Full Story of the Stunning RSA Hack Can Finally Be Told], which do not seem to be indicative of a trend). However, the same is not true of SMS-based MFA systems, which have been shown to be vulnerable to attack. So, do try to have your organization not only mandate MFA, but also choose a means of performing it that provides a level of security appropriate for what you are protecting.
58 |
59 | ### The complexity of parsing code for a complex format is not particularly relevant
60 |
61 | This is a common mistake organizations make: the code to parse data formats or keys becomes a major liability. The number of X.509 certificate parsing errors alone that have led to security vulnerabilities is astonishing [MatrixSSL: Security Vulnerabilities]. A related problem in this space is that even getting data serialized into a consistent format is a more difficult challenge than many developers initially realize. So the complexity of the data communication and storage format should be a major concern, especially for sensitive API calls and components.
62 |
63 | ### Operating system user access control protections like file permissions are an impassable barrier
64 |
65 | It turns out that it is often not that difficult to escalate privilege after gaining access to an account on a system. The reason is that the operating system’s system call boundary is massive and hard to employ effective controls on. You should not willingly let attackers into a system and rely on user permission bits, file ACLs, etc. as your only means of protection. Rather, think of these as barriers that may slow or trip up an attacker, but not as a reliable line of defense.
66 |
67 | ### The network cannot be tampered with
68 |
69 | It turns out that becoming a man-in-the-middle is possible in many scenarios, including wireless attacks in a coffee shop, BGP route hijacking, DNS cache poisoning, etc. While it isn’t trivial for an arbitrary person to become a man-in-the-middle for a network path between two randomly selected computers, it certainly isn’t unattainable for a large and important class of attackers.
70 |
71 | ### Software provided by dependencies is secure so long as we take care when adding it
72 |
73 | Attackers in some ecosystems have begun attacking software projects by taking over a dependency and adding malicious code. In other cases, a dependency is simply neglected for a long time and does not receive security patches. In yet other cases, an organization simply forgets or neglects to update dependencies to a later version so that a vulnerable version remains in use. Like the software your organization writes itself, dependencies need care, examination, and attention so that they do not become liabilities.
74 |
75 | ### Firewalls keep out bad guys
76 |
77 | Firewalls are an important tool for helping to compartmentalize networked components. However, experience shows that they are insufficient on their own. In practice, many attacks involve an attacker bypassing firewalls and network monitoring systems to access things that should have been restricted. This is not surprising given how difficult it is to write a policy that stops exactly all of the “bad things” and allows exactly all of the “good things”. So, it may be helpful to think of a firewall as a way to increase the difficulty for an attacker rather than as a means of stopping them outright.
78 |
79 | ### Antivirus stops malware on end hosts
80 |
81 | In much the same way, antivirus software on client machines largely just makes certain compromises less likely, and it comes with its own risks and concerns. Today many experts recommend using only the antivirus software that comes with your operating system (if applicable); purchasing commercial antivirus software gives questionable benefits and does come with some added risk.
82 |
83 | ### Trained users will choose and manage sufficiently secure passwords
84 |
85 | This is patently false, which is one reason why multi-factor authentication is an option or even a requirement for many systems. Strong password guidelines for users are important, and users should also be incentivized to use tools like password managers.
86 |
87 | > [!NOTE]
88 | > **A Methodology for Identifying Discrepancies with Respect to Privacy Regulations, Commentary by Ragashree Shekar**
89 | >
90 | > In the current landscape, with more and more data generated about each of us through the many connected devices we use, businesses have a growing opportunity to gather data about us and utilize it to enhance their operations. It is about time privacy is engineered into each project we build that collects personal, health, or otherwise protected user information, not just to comply with regulations but also to protect users’ right to privacy. LINDDUN[1] is a privacy engineering framework that helps model a system and find and manage the threats associated with it. LINDDUN categorizes threats into seven categories: Linkability, Identifiability, Non-repudiation, Detectability, Disclosure of Information, Unawareness, and Non-compliance. Let’s look at each one of them:
91 | >
92 | > - Linkability: Whether an attacker is able to link two items of interest without knowing the data subject [2] corresponding to these items
93 | > - Identifiability: Whether the attacker is able to identify a data subject from a set of data objects through items of interest
94 | > - Non-repudiation: A data subject cannot deny an action
95 | > - Detectability: An attacker is able to distinguish whether an item of interest about a data subject exists or not, regardless of being able to read the contents itself
96 | > - Disclosure of Information: An attacker is able to learn the content of an item of interest about a data subject
97 | > - Unawareness: The data subject is unaware of the collection, processing, storage or sharing activities (and the corresponding purposes) of the data subject’s personal data
98 | > - Non-compliance: The processing, storage, or handling of personal data is not compliant with legislation, regulation, and/or policy
99 | >
100 | > A few notes to consider. First, Identifiability and Linkability are closely associated with each other, as a lack of anonymization results in identifying data subjects or linking two different data objects.
101 | >
102 | > Second, awareness is a big part of privacy laws, regulations, and policies: these require informing the data subject of what data about them is being collected, how it will be processed and used, and who else it will be shared with or sold to, and letting data subjects decide whether they want to opt in. Thus unawareness is a subset of non-compliance.
103 | >
104 | > However, it is difficult to assess the impact of a vulnerability using these criteria. This method is intended more for compliance analysis with respect to privacy regulations than for searching for technical vulnerabilities.
105 |
106 | ## System Non-goals
107 |
108 | In addition to goals, another key aspect to consider is the set of things you consider to be non-goals of the system. These are “illegal moves” in the game. They tend to come in two types, the first being things that you simply do not care about if they occur.
109 |
110 | For example, TrashPanda Bank is likely well aware that people off the street may wander into the bank. Some of those people may steal a pen or the deposit sheets that are left out on the desks. They may use the bathroom and enjoy the heat / air conditioning without being a customer. However, TrashPanda Bank may also just assume that those costs are minimal and any effort to deter such actions would have a negative impact on the experience of other customers. So, solving these types of issues may be a non-goal.
111 |
112 | The second type of common non-goal is one that seems too fanciful for the attacker to carry out. For example, suppose that in order to break into TrashPanda Bank, the attacker must become president of the country and launch a nuclear strike on the vault. Whether or not the vault resists such an attack, any surviving members of the company are likely to be focused on things other than the vault. So, TrashPanda Bank could consider worrying about such an attack a non-goal.
113 |
114 | ## Attacker Goals
115 |
116 | Another way to frame the system goals is to talk about what an attacker may want to accomplish. This is sometimes (mis)used to say that these goals are the only things an attacker would want to do, and so the system's goals should just be to prevent those. Unfortunately, this line of reasoning will often miss cases because it assumes the attacker simply will not care to perform them. In the movie The Dark Knight, there is a famous (and long) story told by Alfred, which concludes with the statement “Some men just want to watch the world burn”.
117 |
118 | You should assume that someone will have the temptation to do a bad thing if it is possible to do so without a massive amount of skill and resources.
119 |
120 | Additionally, you should assume that a compromise may be the result of negligence rather than intent. If the new guy deletes a production database, many other layers of security were neglected before that mistake.
121 |
122 | ## Using STRIDE To Enumerate Attacks and Goals
123 |
124 | Fortunately, security researchers have long understood that it is too easy to miss computer security concerns when threat modeling.
125 |
126 | To aid in going through different cases, there is a model called STRIDE. STRIDE stands for the following properties:
127 |
128 | - Spoofing: The act of using another’s credentials. This can be for many purposes, such as gaining access to a resource they should not have access to, or masking the source of an attack. Commonly, this is done by authenticating as a different user when performing an activity.
129 | - Tampering: The act of modifying information in a malicious way. This depends a lot on the project, but can involve things like replacing a user’s data with something else, manipulating account balances, or changing log information.
130 | - Repudiation: The act of performing an action but asserting you did not in situations where others cannot prove otherwise. This involves situations where the attacker makes tracing the cause of a problem infeasible.
131 | - Information Disclosure: This is when you make private information public. Situations where this occurs typically involve data leaks of user account data, private messages, financial details, etc.
132 | - Denial of Service: This is where an attack prevents legitimate users from accessing information or services they are supposed to have access to. This can be very localized, such as locking a user out of their account, or very broad, such as bringing down an entire website. Such an attack mounted by a distributed set of computers working together is called a DDoS attack (Distributed Denial of Service attack), which you have likely heard mentioned before.
133 | - Escalation of Privilege: This is the act of gaining more authorization to perform actions in a system that should not be granted. Note that while Spoofing focuses on appearing to be someone else, Escalation of Privilege focuses on using an identity you have to do things which should not be authorized. For example, consider the administrative assistant for TrashPanda’s CFO. For withdrawals over a certain amount, the CFO may be required to place her signature on the transaction confirmation. However, if the administrative assistant for the CFO states that the CFO authorized it, the teller may (incorrectly) still complete the transaction. This is a case where the administrative assistant has escalated his privilege to do an action he was not authorized to perform.
134 |
135 | | Attack | Property Violated | Impact |
136 | |-------------------------|--------------------------|--------------------------------|
137 | | Spoofing | Authentication | Misdirected identity |
138 | | Tampering | Integrity | Unreliable data |
139 | | Repudiation | Non-repudiation | Lack of ownership for actions |
140 | | Information disclosure | Confidentiality, Privacy | Lack of confidentiality |
141 | | Denial of service | Availability | Unreliable service |
142 | | Escalation of privilege | Authorization | Grants Unauthorized access |
143 |
144 | > [!NOTE]
145 | > **Tracking Trash Pandas with Data Flow Diagrams, Commentary by Ann Wallace**
146 | >
147 | > In my experience, a Data Flow Diagram (DFD) is invaluable for threat modeling. DFDs give a thorough and detailed view of how data is managed within a system, which is critical for identifying, examining, and mitigating potential security risks.
148 | >
149 | > These diagrams portray data movement within a system, shedding light on vital areas where data enters, exits, and is processed. This level of detail is key to spotting vulnerabilities. DFDs are especially adept at uncovering potential points where an attacker could access or extract data. They cover the full spectrum of the system we're analyzing for threats, embracing all the internal and external components: entities, actors, data storage, and data flows.
150 | >
151 | > DFDs also enhance communication, clarifying system data handling and risks to stakeholders, aiding in prioritizing security measures. Clear, understandable DFDs are vital for all involved to identify key components and understand control paths.
152 | >
153 | > For instance, a DFD for TrashPanda Bank would map money flow, highlighting entry/exit points, customer involvement, asset storage, trust boundaries, and processes like bank teller and ledger operations. This facilitates comprehensive threat analysis, examining potential data interception/manipulation points, and assessing security measure effectiveness, ensuring robust protection against security threats.
154 |
155 | **[> Next Up: Attack Graphs](./attack-graphs-technique.md)**
156 |
--------------------------------------------------------------------------------
/guidance/background/threat-modeling/understanding-risk.md:
--------------------------------------------------------------------------------
1 | # Understanding Risk
2 |
3 | **[< Previous: DREAD Technique](./dread-technique.md)**
4 |
5 | A very useful concept when thinking about security assessments is risk. Rather than simply categorizing things as possible or impossible, risk lets us try to understand how likely they are. If you have two equally negative outcomes that could be addressed with the same amount of effort, the more likely one is the one to focus on first.
6 |
7 | Unfortunately, there really isn’t a solid way to know how likely certain events are in computer systems. These are uncommon events, and breakthroughs by attackers can lead to huge advances in attack capabilities. In general, most people underestimate the likelihood of rare events. To be blunt, the state of the field is commonly that one tries to list existing anecdotal examples; once an attack occurs in a sufficiently public way, everyone seems to agree that it is now something to be concerned about.
8 |
9 | Realistically, you get the most value out of understanding roughly how likely things are (1-in-100 vs. 1-in-a-million vs. 1-in-a-trillion, etc.) rather than trying to put an exact number on them.
10 |
11 | > [!NOTE]
12 | > **The Thousandfold Misconception, Commentary by Justin Cappos**
13 | > I worked with Evan Gilman, Matt Moyer, and Enrico Schiattarella from the SPIFFE / SPIRE team on a threat assessment, and as part of it we tried to quantify risk. We each did this independently for aspects of the system; our answers often varied by more than a factor of 10. In fact, in one case they varied by more than a factor of 1000! After discussing these differences, we began to better understand ways in which our mental models differed about how the system could be deployed. This was a really useful exercise for us, even though I don’t think any of us put a lot of faith in the values we ended up with being close to the real values.
14 |
15 | ## Expected Damage
16 |
17 | So if one understands the likelihood of things happening, how does that help if the impact of those things differs? Well, fortunately, there is a simple formula to compute the expected damage from an attack:
18 |
19 | ```text
20 | Expected damage ~= likelihood * impact
21 | ```
22 |
23 | For example, if something has a 1-in-100 chance of occurring on a specific day, and costs you $1000 when it occurs, you expect that the amount you’ll have to pay over a long period is about $10 per day.
24 |
25 | When addressing risks, you can look at how much a protection would cost (in terms of effort, money, etc.) and how it changes the expected damage. This would be an ideal way to prioritize work. So why don’t we do this? Because the actual values for likelihood and impact aren’t really known in practice. So understanding that “things that are likely and high impact are really bad and need to be addressed” is going to be more useful in practice than the formula itself. Still, even rough numbers can be used to rank risks, as in the sketch below.
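
As a minimal sketch of ranking by expected damage (every risk name, likelihood, and impact below is an invented placeholder, which is exactly the caveat just raised):

```python
# Hypothetical risks: rough daily likelihood and dollar impact.
risks = {
    "phished maintainer": {"likelihood": 1 / 1_000, "impact": 50_000},
    "leaked signing key": {"likelihood": 1 / 100_000, "impact": 2_000_000},
    "DDoS outage": {"likelihood": 1 / 100, "impact": 1_000},
}


def expected_damage(risk):
    return risk["likelihood"] * risk["impact"]


# Rank by expected damage: likely-and-high-impact items float to the top.
for name, risk in sorted(risks.items(), key=lambda kv: -expected_damage(kv[1])):
    print(f"{name}: ~${expected_damage(risk):.2f}/day")
```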
26 |
27 | > [!NOTE]
28 | > **The Value of Precedence, Commentary by Andrew Martin**
29 | >
30 | > We have found it helpful to list the remediations and controls from a threat model in precedence order. The recipient of a threat model is likely to be a risk owner such as a CISO or equivalent holder of funds, and the model should inspire them to remediate immediate existential threats, or threats with unacceptable impacts on business functionality, and consider which of the other scoped threats are worth investing in.
31 | >
32 | > Expected damage is a useful metric for risk management at the executive level, and it can be modulated with the secondary data point of likelihood — existential risks should be addressed in some manner, but it’s also acceptable to mitigate them in other ways (such as transferral with disclaimers or insurance policies, or acceptance of low likelihood). Each mitigation is a complex tree of possibly catastrophic permutations and so should be explicitly addressed by the risk owner.
33 |
34 | **[> Next Up: Comprehensive Coverage](./comprehensive-coverage.md)**
35 |
--------------------------------------------------------------------------------
/guidance/getting-started.md:
--------------------------------------------------------------------------------
1 | # Getting Started
2 |
3 |
8 |
9 | This guidance describes security assessments, including what a security assessment is, how it differs from a security audit, how to perform a security assessment, and how to use a completed assessment.
10 |
11 | These contents are heavily informed by the Security Assessment process developed by the CNCF Security Technical Advisory Group and authored by Justin Cappos (STAG Technical Lead). They draw on years of accumulated experience analyzing and evaluating security products across a wide array of domains. The examples in this text draw from both non-technical anecdotes and a variety of detailed technical examples from Linux Foundation projects in the cloud native space.
12 |
13 | It is recommended to follow the guide one step at a time, rather than trying to read and understand the process in its entirety up front. You will internalize more by attempting the exercises yourself.
14 |
15 | ## Identifying Your Use Case
16 |
17 | How you engage with this guidance will depend on your use case:
18 |
19 | ### You are preparing to have your software assessed by peers or a third-party
20 |
21 | Please do a quick read of the knowledge base before working with reviewers. If you haven't done a self-assessment yet, consider setting that as your interim goal before bringing in reviewers from outside the project.
22 |
23 | ### You want to learn about threat modeling and software security
24 |
25 | Much of this guidance is helpful for learning about threat modeling and how to assess the security of projects in general. Perhaps the least relevant parts are the portions that relate to the specifics of Security Assessments. However, those sections can serve as an example of how to implement some of the ideas in the rest of the guidance in the cloud native space.
26 |
27 | ### You want to lead or participate in an assessment
28 |
29 | You should read as much of the content as possible. This includes the self-assessment content if the project you're assessing is supplying a completed self-assessment to kickstart your review.
30 |
31 | The sections describing how to use an assessment and how to have your project assessed effectively may be less applicable to you, but will help you understand the process and expectations from those perspectives.
32 |
33 | ### You are evaluating the security posture of a project with a published security assessment
34 |
35 | You, the consumer of this hard work, need to understand how best to benefit from a security assessment. The section on consuming assessments is exactly what you need. It may also be useful to read the following section on Security Assessments and Audits, to understand the difference and why you should expect to see relatively few CVEs raised after a security assessment versus a security audit.
36 |
37 | ### If you're still not ready to get started
38 |
39 | As with many things in security, there is often not one “correct answer”, despite there being infinitely many wrong answers. If you would like to ask questions or help improve this guidance, please don't hesitate to engage through the designated [community channels](../.github/CONTRIBUTING.md).
40 |
41 | **[> Next up: Security Basics](./background/security-basics.md)**
42 |
--------------------------------------------------------------------------------
/guidance/level-1/appendix.md:
--------------------------------------------------------------------------------
1 | # Appendix
2 |
3 | **[< Previous: Development & Support](./development-and-support.md)**
4 |
5 | This section provides references and additional information that may benefit readers, including historical security data, project comparisons, and best practice alignment.
6 |
7 | ## Known Issues Over Time
8 |
9 | If the project has encountered vulnerabilities in the past, summarize key statistics and link to reports. If no vulnerabilities have been reported, provide insight into security measures that have prevented issues, such as:
10 |
11 | - Track record of catching issues in **code review** or **automated testing**
12 | - Number of security fixes in past releases
13 | - Metrics from static analysis or fuzzing tests
14 |
15 | [+ View Security Issue Tracker](../security/issues.md)
16 |
17 | ## OpenSSF Best Practices
18 |
19 | Discuss the project's alignment with [OpenSSF Best Practices](https://openssf.org/). Consider:
20 |
21 | - Whether the project has obtained an OpenSSF Best Practices Badge
22 | - Gaps that need to be addressed to achieve compliance
23 | - Plans for improving adherence to security best practices
24 |
25 | [+ Check OpenSSF Best Practices](https://bestpractices.coreinfrastructure.org/)
26 |
27 | ## Case Studies
28 |
29 | Provide real-world examples of how the project has been used. These scenarios help reviewers understand security considerations in practical applications.
30 |
31 | Example:
32 | > *A financial institution integrated Flibber to encrypt cloud-based virtual machines, reducing unauthorized access incidents by 85%.*
33 |
34 | [+ Read More Case Studies](../case-studies/index.md)
35 |
36 | ## Related Projects & Vendors
37 |
38 | Prospective users may compare your project to similar solutions. Address common questions by listing:
39 |
40 | - **Competing or complementary projects**
41 | - **Key differentiators** (e.g., performance, security focus, integrations)
42 | - **Relevant vendors** that provide commercial support
43 |
44 | Example:
45 |
46 | > *Flibber vs. Noodles: Unlike Noodles, Flibber provides built-in encryption without requiring additional configuration.*
47 |
48 | [+ Read More Comparisons](../comparisons/index.md)
49 |
50 | ## OpenSSF Scorecard
51 |
52 | If your project uses the OpenSSF Scorecard, include a reference to your latest score.
53 |
54 | [+ View OpenSSF Scorecard](https://github.com/ossf/scorecard)
55 |
56 | ---
57 |
58 | This appendix serves as a reference point for stakeholders, helping them understand the project's security history, ecosystem positioning, and best practice compliance.
59 |
60 | **[> Next Up: Self Assessment Template](/templates/self-assessment.md)**
61 |
--------------------------------------------------------------------------------
/guidance/level-1/creating-document.md:
--------------------------------------------------------------------------------
1 | # Creating Your Self Assessment Document
2 |
3 | Before we dive into the contents of the assessment, let's consider how and where it will be published. This is not a critical or essential consideration, but it will make things easier if you can lock it in sooner rather than later.
4 |
5 | ## Scope
6 |
7 | Determining what should be covered in your self-assessment may be straightforward, or it may feel daunting at first.
8 |
9 | Most projects will only need a single assessment document that is maintained over time as the design changes or new information is gathered. For example, the [Flux project](https://github.com/fluxcd) has multiple repositories that are compiled into a single deliverable, needing only a single assessment.
10 |
11 | It is possible that your project will be best served by multiple self-assessments if it contains multiple disconnected parts. We can look at two projects as examples of this: [Argo](https://github.com/argoproj) and [Privateer](https://github.com/privateerproj).
12 |
13 | Argoproj is made up of multiple independent but complementary elements: Argo CD, Argo Workflows, Argo Rollouts, Argo Events (and more smaller pieces). Because each of these can be used independently of the others, it is best (and easiest) to assess them one at a time.
14 |
15 | On the other hand, Privateer is composed of multiple distinct elements that rely on each other to work properly: the core, plugins, and an SDK that enables their development. Privateer and the Privateer SDK are tightly coupled, so we will want to include them both in our assessment. But because plugins each contain their own custom logic that is not always maintained by the Privateer project, and no plugin interacts with another, it is best to assess Privateer and each plugin _one at a time_, even though the two concepts are part of the same ecosystem.
16 |
17 | If any elements interact with each other by design (such as in the second example), there will be space within the assessment to discuss that relationship without going into a detailed assessment of the items it relates to.
18 |
19 | Once you’ve determined the scope of your assessment, you’ll be ready to get started documenting it!
20 |
21 | ## Format
22 |
23 | We will be creating our example self-assessment in Markdown because of its compatibility with open source repositories. This easy-to-learn format will be automatically parsed and rendered when uploaded to repository hosts such as GitHub.
24 |
25 | You may simplify the self-assessment process by starting (and potentially staying) in another format, such as a Google Doc or Word document. Use whatever format is best for you to craft and distribute your self-assessment to your project’s stakeholders.
26 |
27 | ## Publishing
28 |
29 | Where you publish is up to you, as your project may have its own processes for distributing information to stakeholders such as contributors, maintainers, owners, and users. If your assessment is part of a larger organization, all assessment documentation should be maintained in a centralized location for quick reference by the community, its technical advisors, and its leadership.
30 |
31 | **[> Next Up: Header & Metadata](./header.md)**
32 |
--------------------------------------------------------------------------------
/guidance/level-1/development-and-support.md:
--------------------------------------------------------------------------------
1 | # Development & Support
2 |
3 | **[< Previous: System Design](./system-design.md)**
4 |
5 | This section describes the development practices, communication channels, and security processes that support the project’s lifecycle. Providing this information helps reviewers understand how security is managed throughout development and maintenance.
6 |
7 | ## Development Pipeline
8 |
9 | Describe the testing and assessment processes that the software undergoes as it is developed and built.
10 |
11 | Here are some things you might consider including (a sketch of a filled-in section follows this list):
12 |
13 | - What security practices do you automate or enforce in your SDLC?
14 | - Do you have branch protection or repo security features in place?
15 | - Are committers required to sign their commits, or a contributor license agreement?
16 | - Do you have automated testing or fuzzing on every pull request?
17 | - Do you have software composition analysis or dependency management tooling?
18 | - How many reviewers are required for a pull request to be approved?
19 | - Do you have any measures around code owners?
20 | - Is your release process automated?
21 | - Does every release include an automatically generated Software Bill of Materials?
22 | - Do you sign releases?
23 | - Are container images immutable and signed?
24 |
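As a sketch of how a filled-in pipeline section might read (the practices and thresholds below are hypothetical, not recommendations):

```markdown
<!-- illustrative example, not a recommendation -->
## Development Pipeline

- All changes land via pull request; branch protection requires two approving
  reviews, including one code owner.
- Every pull request runs unit tests, fuzzing, and dependency scanning in CI.
- Commits must be signed, and contributors accept a CLA on first contribution.
- Releases are automated; each release publishes signed artifacts and a
  generated SBOM.
```
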
25 | ## Communication Channels
26 |
27 | Define how different audiences can reach your team and how communication is structured.
28 |
29 | - **Internal** – How do team members communicate with each other? (e.g., private Slack, internal mailing lists)
30 | - **Inbound** – How do users or prospective users communicate with the team? (e.g., public mailing list, GitHub issues)
31 | - **Outbound** – How do you communicate with your users? (e.g., security advisories, release announcements)
32 |
33 | ## Ecosystem
34 |
35 | Describe how your software fits into the cloud-native ecosystem.
36 |
37 | For example:
38 | > *Flibber integrates with both Flocker and Noodles, covering virtualization for 80% of cloud users. While Flibber has a small direct user base, every virtual instance uses Flibber encryption by default.*
39 |
40 | Understanding your project's ecosystem impact helps assess its security significance.
41 |
42 | ## Responsible Disclosure Process
43 |
44 | Your project should have a process through which a responsible user or researcher can disclose vulnerabilities or weaknesses they find, as well as a documented plan for what the project will do when a report is received or another security incident occurs.
45 |
46 | If you’re using GitHub, there is a built-in feature for this within the Security tab at the top of your repository page. If you’re using another strategy, detail your approach here.
47 |
48 | Include a reference to where your project documents how to make responsible disclosures; a sketch of a filled-in section follows the list below.
49 |
50 | - **Reporting** – How should suspected vulnerabilities be reported? (e.g., security email, private GitHub advisory)
51 | - **Response Team** – Who is responsible for triaging reports?
52 | - **Coordination** – Do you follow an embargo period before public disclosure?
53 | - **Patch Process** – How are fixes developed, tested, and released?
54 |
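As a sketch, a filled-in disclosure section might read like this (the contact address, team size, and embargo window are hypothetical):

```markdown
<!-- illustrative example with hypothetical details -->
## Responsible Disclosure Process

- **Reporting** – Use GitHub's private vulnerability reporting, or email
  security@flibber.example for non-GitHub reports.
- **Response Team** – The three core maintainers triage new reports within
  five business days.
- **Coordination** – We request a 90-day embargo before public disclosure.
- **Patch Process** – Fixes are developed privately, reviewed by a second
  maintainer, and released alongside a security advisory.
```
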
55 | ## Incident Response
56 |
57 | A major part of secure software development is simply a matter of planning ahead for when things go wrong. Vulnerabilities and weaknesses will eventually be found, and proper planning will enable your project to quickly and effectively respond.
58 |
59 | Use this section to document your project’s process for triaging and confirming reports, notifying users of a vulnerability or security incident, and making patches or updates available; a sketch of a filled-in section follows the list below.
60 |
61 | If your project lacks a comprehensive plan for incident response, then include as much detail as you can—and be sure to include this gap on your project’s roadmap!
62 |
63 | - **Triage & Confirmation** – How do you validate and assess security reports?
64 | - **Notification** – How do you inform users about vulnerabilities?
65 | - **Remediation** – How do you develop and distribute patches or updates?
66 |
67 | A clear incident response process ensures vulnerabilities are addressed efficiently while minimizing disruption.
68 |
69 | **[> Next Up: Appendix](./appendix.md)**
70 |
--------------------------------------------------------------------------------
/guidance/level-1/getting-started-self-assessment.md:
--------------------------------------------------------------------------------
1 | # Getting Started: Self Assessment (Level 1)
2 |
3 | This guide is derived from the course [Security Self-Assessments for Open Source Projects (LFEL 1005)](https://training.linuxfoundation.org/express-learning/security-self-assessments-for-open-source-projects-lfel1005/) offered by Linux Foundation Education.
4 |
5 | By the end of this guide, you should have a good understanding of what a security self-assessment is and you’ll be prepared to dive in to create one of your own.
6 |
7 | ## What is a Security Self-Assessment?
8 |
9 | There are as many different meanings behind the term security self-assessment as there are behind the word security. Whether you are securing a physical location, an event, software, or anything else, there is value in a self-assessment—but that assessment will be different based on the context.
10 |
11 | The benefits will be the same whether we’re securing elections or websites. The process of a self-assessment accomplishes several things:
12 |
13 | - Provides the responsible parties with a refined perspective on the security status quo
14 | - Streamlines security work by highlighting areas for improvement
15 | - Provides stakeholders with key information regarding security progress
16 | - Accelerates future assessments by clearly documenting answers to common security questions
17 |
18 | In this guide, we’re looking to secure an open source project repository. Similar principles will apply to private software repos, though some of the implementations may differ.
19 |
20 | > [!WARNING]
21 | >
22 | > While a self-assessment may help streamline a threat model, the two are very different exercises. Check out the [threat model](https://github.com/argoproj/argoproj/blob/main/docs/end_user_threat_model.pdf) created by ControlPlane for Argo CD to learn more.
23 |
24 | ## Self Assessment Format
25 |
26 | This guide will walk you through the process of creating your own security self-assessment documentation.
27 |
28 | We will be following the recommendations provided by the Cloud Native Computing Foundation to its many open source software projects. This style of self-assessment is encouraged by TAG Security for sandbox and incubation projects, and the Cloud Native Security Slam incentivizes all CNCF projects to revisit their self-assessments annually.
29 |
30 | Anyone familiar with your project can contribute to the creation of the self-assessment, but it is important that the effort includes a high level of engagement from the project’s leadership (at minimum, a full review and endorsement) to ensure that nothing is missed and that any findings are incorporated into the project roadmap.
31 |
32 | We will also be creating a self-assessment for our own little open source project: a test execution harness that has been on the back burner for too long and is getting ready for new contributions. Now is a great time to evaluate the progress and gaps related to security!
33 |
34 | Hopefully your project has better security considerations already in place… but if not, don’t feel bad! This process is designed to help us better understand our project and triage the necessary work.
35 |
36 | Here are the items we’ll be creating:
37 |
38 | - Metadata: security links
39 | - Overview: actors, actions, background, goals, and non-goals
40 | - Self-assessment use
41 | - Security functions and features
42 | - Project compliance
43 | - Secure development practices
44 | - Security issue resolution
45 | - Appendix
46 |
47 | ## Let's Dive In
48 |
49 | If you feel like you aren’t ready to begin with your self-assessment, consider why that might be the case. If you feel that your project isn’t ready, or you don’t have all the answers right now, don’t let that stop you from starting! Simply leave notes along the way so that you have a strong first iteration of the self-assessment.
50 |
51 | **[> Next Up: Creating Your Self Assessment Document](./creating-document.md)**
52 |
--------------------------------------------------------------------------------
/guidance/level-1/header.md:
--------------------------------------------------------------------------------
1 | # Header Content
2 |
3 | **[< Previous: Creating Your Self Assessment Document](./creating-document.md)**
4 |
5 | Let's kick things off by putting in some basic information about the project and the assessment itself.
6 |
7 | ## Header & Opening
8 |
9 | Let's open up the document with some very basic details.
10 |
11 | The document title should be your project’s name followed by “Self-Assessment”.
12 |
13 | Directly beneath the title, include plaintext information specifying who conducted this assessment and identifying the project maintainers. The provided template includes helper text with blanks you can fill in to streamline this step.
14 |
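For instance, the opening of a hypothetical assessment might look like this (project name, people, and date are placeholders):

```markdown
<!-- illustrative example with placeholder names and date -->
# Flibber Self-Assessment

This self-assessment was conducted on 2025-01-15 by A. Assessor, with review
and endorsement from the project maintainers, M. Maintainer and O. Owner.
```
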
15 | ## Table of Contents
16 |
17 | If you're using a word processor like Google Docs or Microsoft Word, take advantage of their automatic table of contents generation.
18 |
19 | For Markdown-based documentation, you can create self-referencing links within the document—these will become functional once the relevant sections exist.
20 |
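In GitHub-flavored Markdown, heading anchors are the lowercased, hyphen-separated heading text, so a minimal table of contents might look like:

```markdown
<!-- anchors are derived from the headings elsewhere in the document -->
## Table of Contents

- [Metadata](#metadata)
- [Project Overview](#project-overview)
- [Security Functions and Features](#security-functions-and-features)
```
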
21 | Expect to revisit and update the table of contents after adding content. If you introduce new subsections for complex topics, be sure to reflect those changes here. If your document significantly expands upon the template, AI tools may assist in generating or updating the table of contents efficiently.
22 |
23 | **[> Next Up: Metadata](./metadata.md)**
24 |
--------------------------------------------------------------------------------
/guidance/level-1/metadata.md:
--------------------------------------------------------------------------------
1 | # Metadata Content
2 |
3 | **[< Previous: Header Content](./header.md)**
4 |
5 | The first full entry in your self-assessment will be the metadata values—key quick-view information about your project. These metadata fields provide essential context for stakeholders reviewing your security posture and project status.
6 |
7 | ## Fields
8 |
9 | As seen in the template, the following fields are recommended.
10 |
11 | ### Assessment Stage
12 |
13 | Indicate the current status of your self-assessment. This helps stakeholders understand how up-to-date the information is.
14 |
15 | - **Incomplete:** The assessment is still in progress.
16 | - **Complete:** The assessment is finalized and reflects the project's current state.
17 | - **Obsolete:** The assessment is outdated and no longer maintained.
18 |
19 | ### Software Repository
20 |
21 | Provide a direct link to the project's repository. This should point to the primary source code repository (e.g., GitHub, GitLab, or another hosting platform).
22 |
23 | ### Security Provider
24 |
25 | Specify whether the project’s primary function is security-related.
26 |
27 | - **Yes:** The project is designed to enhance security in an integrating system.
28 | - **No:** The project is not primarily focused on security but may include security-related components.
29 |
30 | ### Programming Languages
31 |
32 | List the programming languages used in the project. This information helps security reviewers assess potential language-specific risks and dependencies.
33 |
34 | ### Software Bill of Materials (SBOM)
35 |
36 | Include a link to the project's SBOM, which details the versions and relationships of components used in the software, including libraries, packages, and other dependencies. This improves supply chain security and helps identify vulnerabilities in third-party components.
37 |
38 | This link may be templatized, such as `your/releases/{version}.sbom`.
39 |
40 | ### Compliance Certifications
41 |
42 | List any security standards or compliance frameworks the project adheres to (e.g., PCI-DSS, COBIT, ISO, GDPR). If applicable, provide links to compliance documentation or attestations.
43 |
44 | ### Security Documentation
45 |
46 | Provide links to the project's security-related documentation. At a minimum, include a link to your `security-insights.yml` or any other security policies, threat models, or vulnerability management resources.
47 |
48 | As usual, formatting is less important than clear communication with your stakeholders. If this is better broken into a table or sub-sections, feel free to make that decision for your use case.
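
For example, a compact metadata table might look like this (all values are illustrative):

```markdown
<!-- illustrative example; substitute your project's own values -->
| Field | Value |
| --- | --- |
| Assessment Stage | Incomplete |
| Software Repository | https://github.com/example/flibber |
| Security Provider | Yes |
| Programming Languages | Go, Python |
| SBOM | `https://github.com/example/flibber/releases/{version}.sbom` |
| Security Documentation | [security-insights.yml](./security-insights.yml) |
```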
49 |
50 | **[> Next Up: Project Overview](./project-overview.md)**
51 |
--------------------------------------------------------------------------------
/guidance/level-1/project-overview.md:
--------------------------------------------------------------------------------
1 | # Project Overview
2 |
3 | **[< Previous: Metadata](./metadata.md)**
4 |
5 | This section introduces your project and its purpose, helping reviewers quickly understand its significance and context.
6 |
7 | ## Background
8 |
9 | Provide essential context for reviewers who may not be familiar with your project's domain. Describe the problem your project addresses, the approach it takes to solve it, and the typical users who benefit from it.
10 |
11 | ## Security Goals
12 |
13 | Clearly outline the security guarantees your project aims to provide. These should be specific assurances that define the security scope, such as access control measures, data protection strategies, or authentication mechanisms.
14 |
15 | For example: "Flibble only allows parties with an authorization key to change data it stores."
16 |
17 | ## Security Non-goals
18 |
19 | Define what your project explicitly does not aim to achieve. This helps set realistic expectations for users and reviewers while preventing misunderstandings about security responsibilities.
20 |
21 | For example: "Flibble does not intend to stop a party with a key from storing an arbitrarily large amount of data, possibly incurring financial cost or overwhelming the servers."
22 |
23 | **[> Next Up: System Design](./system-design.md)**
24 |
--------------------------------------------------------------------------------
/guidance/level-1/system-design.md:
--------------------------------------------------------------------------------
1 | # System Design
2 |
3 | **[< Previous: Project Overview](./project-overview.md)**
4 |
5 | This section provides an overview of the system and its distinct parts, helping reviewers understand how different components interact and where security boundaries exist.
6 |
7 | ## System Actors
8 |
9 | [Actors](../background/threat-modeling/actors.md) are the individual components or entities within your system that interact to provide its functionality.
10 |
11 | In this context, actors are not equivalent to threat actors. Rather than the human element (people using different parts of the system), we are looking at functional elements that are able to act upon each other.
12 |
13 | Actors should only be considered distinct if they are isolated in some way. For example, if a service has both a database and a front-end API but a compromise in either would affect the other, then they should be treated as a single actor rather than separate entities.
14 |
15 | For each actor, describe:
16 |
17 | - **Its role in the system** (e.g., a client application, an authentication service).
18 | - **How it interacts with other components** (e.g., via API calls, message queues).
19 | - **The isolation mechanisms in place** (e.g., separate authentication domains, network segmentation).
20 |
21 | Capturing all of these mechanisms is crucial, as they can prevent an attacker from moving laterally after a compromise.
22 |
23 | [+ Read More About Actors](../background/threat-modeling/actors.md)
24 |
25 | ## System Actions
26 |
27 | Actions describe the processes and interactions that occur between actors in order to deliver functionality.
28 |
29 | These are the steps that a project performs in order to provide some service
30 | or functionality. These steps are performed by different actors in the system.
31 | Note that an action need not be described at the function-call level.
32 | It is sufficient to focus on the security checks performed, use of sensitive
33 | data, and interactions between actors to perform an action.
34 |
35 | For example, the access server receives the client request, checks the format,
36 | validates that the request corresponds to a file the client is authorized to
37 | access, and then returns a token to the client. The client then transmits that
38 | token to the file server, which, after confirming its validity, returns the file.
39 |
40 | If you have a more complex system, you may want to create a chart using a free tool such as
41 | [draw.io](https://draw.io), or, using GitHub-flavored Markdown, you can make a diagram with a
42 | [Mermaid chart](https://github.blog/developer-skills/github/include-diagrams-markdown-files-mermaid/).
43 |
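As an illustration, here is the token-based file access flow from the example above, sketched as a Mermaid sequence diagram (actor names are taken from that example):

```mermaid
%% Illustrative sketch of the access-token flow described above
sequenceDiagram
    participant Client
    participant Access as Access Server
    participant Files as File Server
    Client->>Access: Request access to a file
    Note over Access: Check request format and authorization
    Access-->>Client: Access token
    Client->>Files: Present token with file request
    Note over Files: Confirm token validity
    Files-->>Client: Return file
```
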
44 | [+ Read More About Actions](../background/threat-modeling/actions.md)
45 |
46 | ## Security Functions and Features
47 |
48 | ### Critical Security Components
49 |
50 | This is a listing of critical security components of the project with a brief
51 | description of their importance. It is recommended these be used for threat modeling.
52 | These are considered critical design elements that make the product itself secure and
53 | are not configurable. Projects are encouraged to track these as primary impact items
54 | for changes to the project.
55 |
56 | Each critical component should be listed with a brief description of its importance.
57 |
58 | Examples:
59 |
60 | - **Encryption module** – Encrypts stored and transmitted data.
61 | - **Access control service** – Manages authentication and authorization.
62 |
63 | ### Security-Relevant Features
64 |
65 | This is a listing of security-relevant components of the project, each with a
66 | brief description. These are considered important for enhancing the overall security of
67 | the project, such as deployment configurations, settings, etc. These should also be
68 | included in threat modeling.
69 |
70 | Each security-relevant component should be documented with a description of its role in security.
71 |
72 | Examples:
73 |
74 | - **Configurable logging settings** – Helps detect and respond to incidents.
75 | - **TLS enforcement** – Ensures secure communication.
76 |
77 | **[> Next Up: Development & Support](./development-and-support.md)**
78 |
--------------------------------------------------------------------------------
/guidance/level-2/getting-started-joint-assessment.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ossf/security-assessments/66a69b4b6475eb7e76ca6688e829b88894f20498/guidance/level-2/getting-started-joint-assessment.md
--------------------------------------------------------------------------------
/guidance/level-2/roles/lead.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ossf/security-assessments/66a69b4b6475eb7e76ca6688e829b88894f20498/guidance/level-2/roles/lead.md
--------------------------------------------------------------------------------
/guidance/level-2/roles/maintainer.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ossf/security-assessments/66a69b4b6475eb7e76ca6688e829b88894f20498/guidance/level-2/roles/maintainer.md
--------------------------------------------------------------------------------
/guidance/level-2/roles/reviewer.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ossf/security-assessments/66a69b4b6475eb7e76ca6688e829b88894f20498/guidance/level-2/roles/reviewer.md
--------------------------------------------------------------------------------
/guidance/level-3/getting-started-conformity-assessment.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ossf/security-assessments/66a69b4b6475eb7e76ca6688e829b88894f20498/guidance/level-3/getting-started-conformity-assessment.md
--------------------------------------------------------------------------------
/guidance/level-3/regulatory-considerations.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ossf/security-assessments/66a69b4b6475eb7e76ca6688e829b88894f20498/guidance/level-3/regulatory-considerations.md
--------------------------------------------------------------------------------
/templates/conformity-assessment.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ossf/security-assessments/66a69b4b6475eb7e76ca6688e829b88894f20498/templates/conformity-assessment.md
--------------------------------------------------------------------------------
/templates/joint-assessment.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ossf/security-assessments/66a69b4b6475eb7e76ca6688e829b88894f20498/templates/joint-assessment.md
--------------------------------------------------------------------------------
/templates/self-assessment.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ossf/security-assessments/66a69b4b6475eb7e76ca6688e829b88894f20498/templates/self-assessment.md
--------------------------------------------------------------------------------