├── CITATION.cff
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── README.md
└── TEMPLATE.md

/CITATION.cff:
--------------------------------------------------------------------------------
cff-version: 1.2.0
title: Issues
message: >-
  If you use this work and you want to cite it,
  then you can use the metadata from this file.
type: software
authors:
  - given-names: Joel Parker
    family-names: Henderson
    email: joel@joelparkerhenderson.com
    affiliation: joelparkerhenderson.com
    orcid: 'https://orcid.org/0009-0000-4681-282X'
identifiers:
  - type: url
    value: 'https://github.com/joelparkerhenderson/issues/'
    description: Issues
repository-code: 'https://github.com/joelparkerhenderson/issues/'
abstract: >-
  Issues
license: See license file
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------

# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
  community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of
  any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
  without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
[INSERT CONTACT METHOD].
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of
actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the
community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].

Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
# Contributing

Contributing is great. Thank you.

Style guide:

* Headlines
  * We prefer sentence case over word-cap (title) case.
  * Right: "Foo goo hoo".
  * Wrong: "Foo Goo Hoo".
  * Reason 1: sentence case is easier for more cultures, easier to excerpt, and easier to transition among styles.
  * Reason 2: it matches what big popular sites are doing for tech headlines; see Google blogs for examples.
* Tables
  * We prefer to implement tables using HTML over Markdown.
  * Reason: we aim for the [CommonMark specification](http://spec.commonmark.org/) and [cmark parser](https://github.com/commonmark/cmark), which are widespread industry standards, and these do not provide tables. We hope that CommonMark and cmark will add tables in the future.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Issues

Issues come in many flavors, for example feature requests, bug reports, customer complaints, security alerts, team retrospectives, etc. This page describes how our team uses issues, and how we communicate about them.

- [What is an issue?](#what-is-an-issue)
- [Public issue or private issue?](#public-issue-or-private-issue)
- [For a public issue](#for-a-public-issue)
- [For a private issue](#for-a-private-issue)
- [Score](#score)
- [Score by priority rank](#score-by-priority-rank)
- [Score by severity of impact](#score-by-severity-of-impact)
- [Score by magnitude of damage](#score-by-magnitude-of-damage)
- [Score by size name](#score-by-size-name)
- [Score by level of danger](#score-by-level-of-danger)
- [Score by MoSCoW requirement](#score-by-moscow-requirement)
- [Score by frequency rate](#score-by-frequency-rate)
- [Score by combination](#score-by-combination)
- [Score discussion](#score-discussion)
- [Issue template](#issue-template)
- [Postmortem triggers](#postmortem-triggers)
- [Blameless postmortems](#blameless-postmortems)


## What is an issue?

For our teams, the word "issue" is a generic term. Examples:

* A feature request
* A bug report
* A customer complaint
* A security alert
* A team retrospective


## Public issue or private issue?

For many of our projects we create a public issue and a private issue.

The public issue is external-facing, intended for our users, customers, promoters, etc.

The private issue is internal-facing, intended for our employees, contractors, partners, etc.


### For a public issue

Emphasize summarization.

Highlight actionable information.

Exclude confidential information.


### For a private issue

Emphasize thoroughness.

Highlight exploratory information, because this helps discover patterns across issues.

Include confidential information as appropriate.


## Score

We score each issue in ways that help us compare issues, so we know what we want to work on. There are a variety of ways to score, and here are some we've seen work well in practice.


### Score by priority rank

Example: Priority 1 (do first), Priority 2 (do second), Priority 3 (do third), etc.

Analogy: a to-do list, where Priority 1 is your first priority.

Benefits: easy to understand what the team will work on and in what order; compatible with many bug trackers, to-do list apps, and task management tools.

Misguided: some teams use Priority 0 (P0) to mean emergency alert or release blocker.


### Score by severity of impact

Example: Severity 1 (minimal impact) to 5 (catastrophic impact).

Analogy: the Saffir-Simpson hurricane scale of 1 (minimal), 2 (moderate), 3 (extensive), 4 (extreme), 5 (catastrophic).

Benefits: easy to understand in terms of business impact; can use real-world analogies; good for color coding from green to red; different evaluators can assess severity from their own perspectives, independent of what to work on first.

Misguided: some teams reverse the scale and use "Severity 0" (catastrophic) to 5 (minimal). We do not recommend this because it's backwards.


### Score by magnitude of damage

Example: Magnitude 1 (minor damage) to 10 (catastrophic damage).

Analogy: the Richter earthquake scale from 1 (minor damage) to 10 (permanent total destruction).

Benefits: easy to understand in terms of customer impact; can use real-world analogies; good for brightness coding from light to dark; different evaluators can assess magnitude from their own perspectives, independent of what to work on first.


### Score by size name

Example: increasing size names "Small", "Medium", "Large".

Analogy: clothing sizes.

Benefits: easy to understand approximately how much work needs to be done.


### Score by level of danger

Example: international aviation regulations define five levels of failure conditions, categorized by their effects on the aircraft, crew, and passengers.

Level A – Catastrophic: Failure may cause multiple fatalities, usually with loss of the airplane.

Level B – Hazardous: Failure has a large negative impact on safety or performance, reduces the ability of the crew to operate the aircraft due to physical distress or a higher workload, or causes serious or fatal injuries among the passengers.

Level C – Major: Failure significantly reduces the safety margin or significantly increases crew workload. It may result in passenger discomfort (or even minor injuries).

Level D – Minor: Failure slightly reduces the safety margin or slightly increases crew workload. Examples might include causing passenger inconvenience or a routine flight plan change.

Level E – No Effect: Failure has no impact on safety, aircraft operation, or crew workload.


### Score by MoSCoW requirement

Example: MoSCoW is a mnemonic for "must", "should", "could", "won't". A feature is "must have", "should have", "could have", or "won't have" (or "would have").

Analogy: any planning conversation when a person says "We must do this" or another person says "This is a nice to have".

Benefits: the plain-English wording of the categories is valuable in getting stakeholders to talk about issues; widespread use among user interaction experts.

Note: we prefer to use the word "would" (instead of "won't") because in our experience with stakeholders, "would" shows that an issue can still be included in the future if something changes; we say "would if X".


### Score by frequency rate

Example: "Frequency 1%" means 1% of use is affected; "Frequency 100%" means 100% of use is affected.

Analogy: the rate at which something occurs or is repeated over a particular period of time or in a given sample.

Benefits: measures how often the issue happens; can be a rate phrase such as "20 times per day"; can be a summary word such as "always", "often", "sometimes", "seldom", "never"; can be a percentage such as "80% of use is affected".


### Score by combination

Example: score an issue by a combination of priority, severity, magnitude, size, MoSCoW, and frequency.

Suppose an important customer is coming into the office in an hour to sign a contract, and the sales team finds a misspelling in the customer's company name on the website.

* Sales team says Priority 1, meaning work on it first.

* Product team says Severity 1 (minimal impact) because a typo is trivial and doesn't affect others.

* Marketing team says Magnitude 3 (some damage) because the typo ended up on presentation collateral.

* Project manager says size "Small" because the work estimate is tiny.

* Design team says MoSCoW "must" because it must be fixed.

* Quality team says Frequency 2% because inspection discovered typos in 2% of customer names.
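
To make the combination concrete, here is a minimal sketch of that issue as a single record scored along all six dimensions. The `IssueScore` class and its field names are hypothetical illustrations for this page, not a schema that any of our tools require.

```python
# A minimal sketch of a combined issue score. The class and field names
# are hypothetical illustrations, not a required schema.
from dataclasses import dataclass

@dataclass
class IssueScore:
    priority: int     # 1 = work on it first, 2 = second, 3 = third, ...
    severity: int     # 1 (minimal impact) to 5 (catastrophic impact)
    magnitude: int    # 1 (minor damage) to 10 (catastrophic damage)
    size: str         # "Small", "Medium", or "Large"
    moscow: str       # "must", "should", "could", or "would"
    frequency: float  # fraction of use affected, 0.0 to 1.0

# The misspelled-customer-name issue from the example above:
typo_issue = IssueScore(
    priority=1,       # Sales team: work on it first
    severity=1,       # Product team: minimal impact
    magnitude=3,      # Marketing team: some damage
    size="Small",     # Project manager: tiny work estimate
    moscow="must",    # Design team: must be fixed
    frequency=0.02,   # Quality team: 2% of customer names affected
)
```

Keeping the dimensions as separate fields preserves each team's assessment, so a later triage step can weigh them without losing information.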


## Score discussion

This section has score discussion notes. The quotes are excerpted, synthesized, and sometimes anonymized.

"Usually the other orthogonal assessment in addition to severity is frequency. If the bug is unlikely to be seen during regular use, then even if severity is high, the priority might be lowered. This is usually how risk is managed in my experience."

"A developer or tester might be good at specifying how severe a bug is, but doesn't know whether everyone hits the issue or just some users hit the issue. The frequency is a different dimension. The severity can then be multiplied by frequency to calculate the priority."

"I think the formula should be: severity * frequency - ease of workaround = priority. So if any of those measures change (e.g. an easy workaround is discovered, or it's determined that the web page that is crashing is also almost never viewed) then the priority should be adjusted. Having just severity without a measure of 'how many people does this impact?' and 'just how badly does this impact them?' seems like it's missing part of the picture."

"The QA engineer sets the severity during the initial investigation based on technical criteria. This is then one of the data points that the product manager uses during triage to set the priority, which is the controlling value from that point in the process onward."

"One user sometimes suffers a total crash, which then loses all their work, which makes them angry. The user would score the issue as highest severity. But if it's just one user experiencing the issue, and it's intermittent, and the user has a workaround such as saving more often, then the product manager would score the issue as low priority."

"Severity is how the reporter sees the problem: if it interferes with their particular use case, it's of the highest severity. Priority is how the project management team sees the bug: highest priority bugs are there because of the most valuable vocal complainers such as high-paying customers, an inconvenienced CEO, etc. Don't use the severity of the bug to rank the priority, because they're not strongly correlated."

"My experience with priority and severity is that, while the distinction may exist academically, the reality is that most people don't understand it. The result is that the words are so frequently misused that, in practice, they are indistinguishable dimensions."

"Google's internal bug tracker has both priority and severity. P0 S0 is most urgent. P2 S2 is standard. P4 S4 is least urgent. It's kind of a running joke that severity is meaningless (because it isn't meaningfully different from priority). On my team, for example, we leave it at its default value and ignore it completely."

"We use a single priority field. The tester uses a heuristic to assign an initial priority (e.g., crashes are P1, cosmetic issues are P5). The developer uses this to prioritize which bugs to triage first, and when they've determined a new priority based on customer experience combined with app behaviour, they replace the old priority score with the new priority score. If we really need to go back and check what the tester assigned, then we use the 'history' or 'revision' feature in our bug tracking app."
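
The quoted heuristic "severity * frequency - ease of workaround = priority" is easy to sketch in code. The function below is an illustrative reading of that quote, with scales we chose for the example rather than any standard formula; each team should calibrate its own.

```python
# A sketch of the quoted heuristic "severity * frequency - ease of
# workaround = priority". The scales below are illustrative choices,
# not a standard formula.

def priority_score(severity: int, frequency: float, workaround_ease: float) -> float:
    """Return an urgency score; a higher result means more urgent.

    severity: 1 (minimal impact) to 5 (catastrophic impact).
    frequency: fraction of use affected, 0.0 to 1.0.
    workaround_ease: 0.0 (no workaround) to 1.0 (trivial workaround).
    """
    return severity * frequency - workaround_ease

# A crash that hits 10% of use with no workaround outranks the same
# crash once an easy workaround is discovered:
print(priority_score(severity=5, frequency=0.10, workaround_ease=0.0))   # 0.5
print(priority_score(severity=5, frequency=0.10, workaround_ease=0.25))  # 0.25
```

As the quote suggests, the score should be recomputed whenever any input changes, such as when an easy workaround is discovered or the frequency estimate is revised.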


## Issue template

An issue template can help a team cover important areas efficiently and succinctly.

Our issue template uses:

* Chief Complaint (CC): summarize the problem as reported by the affected person.

* Participants (Pt): who is involved, such as users, employees, partners, specific people, etc.

* Symptoms (Sx): what is going wrong on the surface, such as the users' perspectives, or triggers, or alerts, etc.

* Fractures (Fx): what is broken, such as a failed part, or crashed application, or stuck process, etc.

* History (Hx): relevant background information, such as prior similar issues, or reports, or references, etc.

* Investigations (Ix): what we're doing to research the issue, such as the steps we're taking, or tests we're trying, etc.

* Diagnosis (Dx): what is going wrong under the surface, such as the root causes, or cascading causes, etc.

* Treatments (Tx): what we're doing to make it better, such as action items, to-do lists, mitigations, etc.

* Prognosis (Px): what is the prediction, such as a forecast, potential outcomes, changes in effects, etc.

Our issue template is this file: [TEMPLATE.md](TEMPLATE.md)


## Postmortem triggers

Postmortem triggers can make it easy and fast for a team to know when to do a postmortem writeup.

Postmortem triggers can include:

* Any user-visible issue, such as an unexpected outage or error.

* Any on-demand intervention, such as by engineers or executives.

* Any manual incident discovery, because this shows we need monitoring.

* Any request by a stakeholder for a postmortem, or review, or mitigation.


## Blameless postmortems

Blameless postmortems focus on the incident's symptoms, causes, and treatments, rather than on blaming a person or a group of people.

Blameless postmortems start by affirming that everyone has good intentions, and does the best they can at the time, with the information they have at the time.


## Posts about issues, incidents, postmortems, etc.

* [Post-Mortem Meeting Template and Tips by Brett Harned at TeamGantt on 2017-09-05](https://www.teamgantt.com/blog/post-mortem-meeting-template-and-tips)
--------------------------------------------------------------------------------
/TEMPLATE.md:
--------------------------------------------------------------------------------
# Issue Title

## Score

Any score information, such as priority, severity, magnitude, category.


## Chief Complaint (CC)

Summarize the problem as reported by the affected person.


## Participants (Pt)

Who is involved, such as the discoverer of the issue, affected users, employees, partners, specific people, who to inform about the progress, etc.


## Symptoms (Sx)

What is going wrong on the surface, such as the users' perspectives, or triggers, or alerts, etc.


## Fractures (Fx)

What's broken, such as a failed part, or crashed application, or stuck process, etc.


## History (Hx)

Relevant background information, such as prior similar issues, or reports, or references, etc.


## Investigations (Ix)

What we're doing to research the issue, such as the steps we're taking, or tests we're trying, etc.


## Diagnosis (Dx)

What is going wrong under the surface, such as any root causes, or cascading causes, etc.


## Treatments (Tx)

What we're doing to make it better, such as action items, to-do lists, mitigations, etc.


## Prognosis (Px)

What is the prediction, such as a forecast, potential outcomes, changes in effects, etc.
--------------------------------------------------------------------------------