├── .gitignore ├── LICENSE.txt ├── README.md ├── capabilities-maturity-assessment.md ├── capabilities ├── code-maintainability.md ├── continuous-delivery.md ├── continuous-integration.md ├── customer-feedback.md ├── database-change-management.md ├── deployment-automation.md ├── documentation-quality.md ├── empowering-teams-to-choose-tools.md ├── flexible-infrastructure.md ├── generative-organizational-culture.md ├── job-satisfaction.md ├── learning-culture.md ├── loosely-coupled-teams.md ├── monitoring-and-observability.md ├── monitoring-systems-to-inform-business-decisions.md ├── pervasive-security.md ├── proactive-failure-notification.md ├── streamline-change-approval.md ├── team-experimentation.md ├── test-automation.md ├── test-data-management.md ├── transformational-leadership.md ├── trunk-based-development.md ├── version-control.md ├── visibility-of-work-in-the-value-stream.md ├── visual-management.md ├── well-being.md ├── work-in-process-limits.md └── working-in-small-batches.md ├── contributions.md ├── practices ├── address-resource-constraints-incrementally.md ├── automate-coding-standards.md ├── automate-database-migrations.md ├── automate-deployment.md ├── automate-infrastructure-management.md ├── automate-test-coverage-checks.md ├── backup-data-daily.md ├── build-a-single-binary.md ├── build-consistent-testing-strategy.md ├── check-documentation-consistency.md ├── clean-git-history.md ├── clean-tests.md ├── conduct-code-reviews.md ├── conduct-incident-reviews.md ├── conduct-retrospective-meetings.md ├── create-and-manage-ephemeral-environments.md ├── decouple-from-third-parties.md ├── design-for-eventual-consistency.md ├── follow-functional-core-imperative-shell.md ├── hold-environment-information-separately.md ├── host-a-roundtable-discussion.md ├── host-crucial-conversation.md ├── implement-a-documentation-search-engine.md ├── implement-actor-based-model.md ├── implement-anti-entropy-patterns.md ├── implement-bulkheads.md ├── implement-cascading-failure-mitigation-strategies.md ├── implement-circuit-breaker-pattern.md ├── implement-composable-design.md ├── implement-distributed-tracing.md ├── implement-domain-driven-design.md ├── implement-elastic-systems.md ├── implement-event-driven-architecture.md ├── implement-feature-flags.md ├── implement-form-object-pattern.md ├── implement-graceful-degradation-and-fallbacks.md ├── implement-health-checks.md ├── implement-load-balancing.md ├── implement-logging.md ├── implement-message-driven-systems.md ├── implement-microservice-architecture.md ├── implement-monitoring-metrics.md ├── implement-plugin-architecture.md ├── implement-repository-pattern.md ├── implement-stability-patterns.md ├── implement-tdd.md ├── implement-timeouts-and-retries.md ├── incremental-development.md ├── lead-a-demonstration.md ├── optimize-data-structures.md ├── perform-static-code-analysis.md ├── plan-capacity.md ├── prioritize-design-separation.md ├── provide-dev-coaching.md ├── pursue-continuous-personal-development.md ├── reduce-coupling-between-abstractions.md ├── refactor.md ├── refactoring-to-clean-architecture.md ├── reuse-code-mindfully.md ├── run-automated-tests-in-ci-pipeline.md ├── run-daily-standups.md ├── run-pair-programming-sessions.md ├── scan-vulnerabilities.md ├── schedule-regular-documentation-audits.md ├── segregate-sensitive-and-insensitive-data.md ├── separate-config-from-code.md ├── separate-credentials-from-code.md ├── share-knowledge.md ├── test-for-fault-tolerance.md ├── understand-your-system-requirements.md ├── 
use-documentation-auto-generation-tooling.md ├── use-spin-to-unearth-problems-and-solutions.md ├── use-templates-for-new-projects.md ├── use-test-doubles.md ├── version-dependencies.md ├── write-characterization-testing-for-legacy-code.md ├── write-code-in-functional-programming-style.md ├── write-code-with-single-responsibility.md ├── write-ephemeral-model-based-tests.md ├── write-invest-back-log-items.md └── write-performance-tests.md ├── resources ├── apprenticeship-patterns.md ├── boundaries.md ├── clean-architecture.md ├── crucial-conversations.md ├── debugging-with-the-scientific-method.md ├── doubleloop-learning-review.md ├── fifty-quick-ideas-to-improve-your-user-stories.md ├── flow-state.md ├── hacking-challenge-at-defcon.md ├── how-to-speak.md ├── http1-vs-http2-vs-http3.md ├── is-domain-driven-design-overrated.md ├── learning-domain-driven-design.md ├── maker-time-vs-manager-time.md ├── owasp-risk-rating-methodology.md ├── radical-candor.md ├── stride-threat-modeling.md ├── talk-less-listen-more.md ├── the-clean-coder.md ├── the-five-dysfunctions-of-a-team.md ├── the-lean-startup.md ├── the-one-minute-manager.md ├── the-power-of-vulnerability.md ├── the-reasonable-expectations-of-your-new-cto.md ├── what-is-dns.md ├── what-is-your-working-genius.md ├── winnable-and-unwinnable-games.md └── zebras-all-the-way-down.md └── templates ├── new-practice.md └── new-resource.md /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | Creative Commons Legal Code 2 | 3 | CC0 1.0 Universal 4 | 5 | CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE 6 | LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN 7 | ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS 8 | INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES 9 | REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS 10 | PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM 11 | THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED 12 | HEREUNDER. 13 | 14 | Statement of Purpose 15 | 16 | The laws of most jurisdictions throughout the world automatically confer 17 | exclusive Copyright and Related Rights (defined below) upon the creator 18 | and subsequent owner(s) (each and all, an "owner") of an original work of 19 | authorship and/or a database (each, a "Work"). 20 | 21 | Certain owners wish to permanently relinquish those rights to a Work for 22 | the purpose of contributing to a commons of creative, cultural and 23 | scientific works ("Commons") that the public can reliably and without fear 24 | of later claims of infringement build upon, modify, incorporate in other 25 | works, reuse and redistribute as freely as possible in any form whatsoever 26 | and for any purposes, including without limitation commercial purposes. 27 | These owners may contribute to the Commons to promote the ideal of a free 28 | culture and the further production of creative, cultural and scientific 29 | works, or to gain reputation or greater distribution for their Work in 30 | part through the use and efforts of others. 
31 | 32 | For these and/or other purposes and motivations, and without any 33 | expectation of additional consideration or compensation, the person 34 | associating CC0 with a Work (the "Affirmer"), to the extent that he or she 35 | is an owner of Copyright and Related Rights in the Work, voluntarily 36 | elects to apply CC0 to the Work and publicly distribute the Work under its 37 | terms, with knowledge of his or her Copyright and Related Rights in the 38 | Work and the meaning and intended legal effect of CC0 on those rights. 39 | 40 | 1. Copyright and Related Rights. A Work made available under CC0 may be 41 | protected by copyright and related or neighboring rights ("Copyright and 42 | Related Rights"). Copyright and Related Rights include, but are not 43 | limited to, the following: 44 | 45 | i. the right to reproduce, adapt, distribute, perform, display, 46 | communicate, and translate a Work; 47 | ii. moral rights retained by the original author(s) and/or performer(s); 48 | iii. publicity and privacy rights pertaining to a person's image or 49 | likeness depicted in a Work; 50 | iv. rights protecting against unfair competition in regards to a Work, 51 | subject to the limitations in paragraph 4(a), below; 52 | v. rights protecting the extraction, dissemination, use and reuse of data 53 | in a Work; 54 | vi. database rights (such as those arising under Directive 96/9/EC of the 55 | European Parliament and of the Council of 11 March 1996 on the legal 56 | protection of databases, and under any national implementation 57 | thereof, including any amended or successor version of such 58 | directive); and 59 | vii. other similar, equivalent or corresponding rights throughout the 60 | world based on applicable law or treaty, and any national 61 | implementations thereof. 62 | 63 | 2. Waiver. To the greatest extent permitted by, but not in contravention 64 | of, applicable law, Affirmer hereby overtly, fully, permanently, 65 | irrevocably and unconditionally waives, abandons, and surrenders all of 66 | Affirmer's Copyright and Related Rights and associated claims and causes 67 | of action, whether now known or unknown (including existing as well as 68 | future claims and causes of action), in the Work (i) in all territories 69 | worldwide, (ii) for the maximum duration provided by applicable law or 70 | treaty (including future time extensions), (iii) in any current or future 71 | medium and for any number of copies, and (iv) for any purpose whatsoever, 72 | including without limitation commercial, advertising or promotional 73 | purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each 74 | member of the public at large and to the detriment of Affirmer's heirs and 75 | successors, fully intending that such Waiver shall not be subject to 76 | revocation, rescission, cancellation, termination, or any other legal or 77 | equitable action to disrupt the quiet enjoyment of the Work by the public 78 | as contemplated by Affirmer's express Statement of Purpose. 79 | 80 | 3. Public License Fallback. Should any part of the Waiver for any reason 81 | be judged legally invalid or ineffective under applicable law, then the 82 | Waiver shall be preserved to the maximum extent permitted taking into 83 | account Affirmer's express Statement of Purpose. 
In addition, to the 84 | extent the Waiver is so judged Affirmer hereby grants to each affected 85 | person a royalty-free, non transferable, non sublicensable, non exclusive, 86 | irrevocable and unconditional license to exercise Affirmer's Copyright and 87 | Related Rights in the Work (i) in all territories worldwide, (ii) for the 88 | maximum duration provided by applicable law or treaty (including future 89 | time extensions), (iii) in any current or future medium and for any number 90 | of copies, and (iv) for any purpose whatsoever, including without 91 | limitation commercial, advertising or promotional purposes (the 92 | "License"). The License shall be deemed effective as of the date CC0 was 93 | applied by Affirmer to the Work. Should any part of the License for any 94 | reason be judged legally invalid or ineffective under applicable law, such 95 | partial invalidity or ineffectiveness shall not invalidate the remainder 96 | of the License, and in such case Affirmer hereby affirms that he or she 97 | will not (i) exercise any of his or her remaining Copyright and Related 98 | Rights in the Work or (ii) assert any associated claims and causes of 99 | action with respect to the Work, in either case contrary to Affirmer's 100 | express Statement of Purpose. 101 | 102 | 4. Limitations and Disclaimers. 103 | 104 | a. No trademark or patent rights held by Affirmer are waived, abandoned, 105 | surrendered, licensed or otherwise affected by this document. 106 | b. Affirmer offers the Work as-is and makes no representations or 107 | warranties of any kind concerning the Work, express, implied, 108 | statutory or otherwise, including without limitation warranties of 109 | title, merchantability, fitness for a particular purpose, non 110 | infringement, or the absence of latent or other defects, accuracy, or 111 | the present or absence of errors, whether or not discoverable, all to 112 | the greatest extent permissible under applicable law. 113 | c. Affirmer disclaims responsibility for clearing rights of other persons 114 | that may apply to the Work or any use thereof, including without 115 | limitation any person's Copyright and Related Rights in the Work. 116 | Further, Affirmer disclaims responsibility for obtaining any necessary 117 | consents, permissions or other rights required for any use of the 118 | Work. 119 | d. Affirmer understands and acknowledges that Creative Commons is not a 120 | party to this document and has no duty or obligation with respect to 121 | this CC0 or use of the Work. 122 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Open Practices 2 | 3 | > Practice - An actionable pattern, technique, or process employed by software professionals. 4 | 5 | This repository is an opinionated list of practices that high-performing software development teams can follow. We've organized these [practices](/practices/) based on [DORA Capabilities](https://dora.dev/capabilities/) to piggyback off of their exhaustive research over the past decade. By focusing on practices that support these research-backed capabilities, we aim to give readers many actionable opportunities to grow their skills and fine-tune their processes in areas that are likely to positively impact the organizations they work with. 6 | 7 | Not every practice we include will be beneficial in every situation. Each unique situation is rife with nuance. 
Our curated list is designed to include practices that typically work and support a given DORA Capability. Our goal is to ensure that teams always have fresh, pragmatic, and actionable ideas on how they can improve. 8 | 9 | At [Pragmint](https://pragmint.com/), we rely on this resource to onboard new team members and focus our [Co-Dev Coaching](https://www.pragmint.com/insight/what-is-co-dev-coaching) efforts. 10 | 11 | ## Table Of Contents 12 | 13 | ### Capabilities that enable a Climate for Learning 14 | 15 | - [Code Maintainability](/capabilities/code-maintainability.md) 16 | - [Documentation Quality](/capabilities/documentation-quality.md) 17 | - [Empowering Teams To Choose Tools](/capabilities/empowering-teams-to-choose-tools.md) 18 | - [Generative Organizational Culture](/capabilities/generative-organizational-culture.md) 19 | - [Job Satisfaction](/capabilities/job-satisfaction.md) 20 | - [Learning Culture](/capabilities/learning-culture.md) 21 | - [Team Experimentation](/capabilities/team-experimentation.md) 22 | - [Transformational Leadership](/capabilities/transformational-leadership.md) 23 | - [Well-Being](/capabilities/well-being.md) 24 | 25 | ### Capabilities that enable Fast Flow 26 | 27 | - [Continuous Delivery](/capabilities/continuous-delivery.md) 28 | - [Database Change Management](/capabilities/database-change-management.md) 29 | - [Deployment Automation](/capabilities/deployment-automation.md) 30 | - [Flexible Infrastructure](/capabilities/flexible-infrastructure.md) 31 | - [Loosely Coupled Teams](/capabilities/loosely-coupled-teams.md) 32 | - [Streamline Change Approval](/capabilities/streamline-change-approval.md) 33 | - [Trunk-Based Development](/capabilities/trunk-based-development.md) 34 | - [Version Control](/capabilities/version-control.md) 35 | - [Visual Management](/capabilities/visual-management.md) 36 | - [Work in Process Limits](/capabilities/work-in-process-limits.md) 37 | - [Working in Small Batches](/capabilities/working-in-small-batches.md) 38 | 39 | ### Capabilities that enable Fast Feedback 40 | 41 | - [Continuous Integration](/capabilities/continuous-integration.md) 42 | - [Customer Feedback](/capabilities/customer-feedback.md) 43 | - [Monitoring and Observability](/capabilities/monitoring-and-observability.md) 44 | - [Monitoring Systems to Inform Business Decisions](/capabilities/monitoring-systems-to-inform-business-decisions.md) 45 | - [Pervasive Security](/capabilities/pervasive-security.md) 46 | - [Proactive Failure Notification](/capabilities/proactive-failure-notification.md) 47 | - [Test Automation](/capabilities/test-automation.md) 48 | - [Test Data Management](/capabilities/test-data-management.md) 49 | - [Visibility of Work in the Value Stream](/capabilities/visibility-of-work-in-the-value-stream.md) 50 | 51 | ## Capabilities Maturity Assessment 52 | 53 | Teams can take [this assessment](/capabilities-maturity-assessment.md) to identify areas where there are significant gaps in capability adoption. 54 | 55 | ## Important Note 56 | 57 | Reading and learning alone won't create lasting change. It's important to experiment with each practice. Each team has a different mix of skills, experiences, and constraints. So, a one-size-fits-all approach tends to feel heavyweight. It's helpful to set time aside to earnestly experiment with new practices, then keep what works and throw out what doesn't. Understand that just because a practice doesn't work for one team doesn't mean it has no value for other teams. 
Our goal with this repository is to list practices that tend to work for most teams. 58 | 59 | If you're a tech leader, exposing your teams to this resource may be a helpful first step for them. However, certain teams may be less experienced or less willing to experiment, leading to a lack of lasting change. In those cases, you may want to consider starting a [Co-Dev Coaching](https://www.pragmint.com/insight/what-is-co-dev-coaching) practice of your own. [Pragmint](https://pragmint.com/) can help. 60 | 61 | ## Contributing 62 | 63 | Our repository is always evolving. You can add to it by reviewing our [contributors guide](contributions.md) then [raising an issue](https://github.com/pragmint/open-practices/issues) or [submitting a pull request](https://github.com/pragmint/open-practices/pulls). Given this repository is meant to represent the opinions of [Pragmint](https://pragmint.com/), our maintainers reserve the right to approve or reject any and all suggestions. However, we welcome contributions as they represent opportunities to broaden our horizons and interact with the broader community. Any contributions to this repository are subject to the [Creative Commons License](/LICENSE.txt) so that anyone in the community can benefit from the ideas contained in this repository. 64 | -------------------------------------------------------------------------------- /capabilities/continuous-integration.md: -------------------------------------------------------------------------------- 1 | # [Continuous Integration](https://dora.dev/devops-capabilities/technical/continuous-integration/) 2 | 3 | Under Construction 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | -------------------------------------------------------------------------------- /capabilities/customer-feedback.md: -------------------------------------------------------------------------------- 1 | # [Customer Feedback](https://dora.dev/devops-capabilities/process/customer-feedback/) 2 | 3 | Under Construction 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | -------------------------------------------------------------------------------- /capabilities/database-change-management.md: -------------------------------------------------------------------------------- 1 | # [Database Change Management](https://dora.dev/capabilities/database-change-management/) 2 | 3 | The Database Change Management capability involves handling database updates with the same rigor as application code, using version control, automation, and collaboration. A team that is mature in this capability enjoys low-risk and zero-downtime deployments. 4 | 5 | DBAs or specialized teams often manage databases. So, for application development teams looking to practice database change management, an effective strategy involves creating an automated self-service method for the team to apply changes and pulling in specialists when delicate or complicated changes need to be applied. 6 | 7 | ## Nuances 8 | 9 | This section outlines common pitfalls, challenges, or limitations teams commonly encounter when applying this capability. The goal here is not to discourage you. Rather, the goal is to arm you with the appropriate context so that you can make an informed decision about when and how to implement the capability with your teams. 10 | 11 | ### Risks With Shared Databases 12 | 13 | When multiple systems rely on the same database, even small updates risk breaking functionality across applications. 
Such breakages cause disruptions and delays for every team that depends on the shared schema. Systems should be designed to minimize the number of applications that rely on a shared database. When that situation is unavoidable, teams should invest in techniques that reduce the cost of errors being introduced. These include automated testing, version control, anomaly detection, and auto-rollbacks. 14 | 15 | ### Shaping DBA Involvement for Maximum Impact 16 | 17 | DBAs are vital to database change management but often support multiple teams, limiting their support capacity. To reduce reliance on their time, implement automated self-service workflows with built-in guardrails for routine tasks. This allows DBAs to be pulled in for manual, complex, and performance-critical cases where their expertise is essential. In these cases, make sure to involve DBAs early in the process to catch smaller issues before they snowball into bigger ones. 18 | 19 | ### Integrated Validation 20 | 21 | Database changes affect both data integrity and application behavior, requiring system-wide validation. To ensure reliability, create production-like testing environments, automate regression tests, and validate functional and performance outcomes. This holistic approach minimizes risks and uncovers hidden issues before deployment. 22 | 23 | ## Assessment 24 | 25 | To assess how mature your team or organization is in this capability, complete this short exercise. 26 | 27 | Consider the descriptions below and score yourself on the Database Change Management capability. Generally, score a 1 if database changes are manual and error-prone, a 2 if they are partially automated and you feel there is a lot of room for improvement, a 3 if they are mostly automated and you feel there is some room for improvement, and a 4 if they are fully automated and your team is exemplary in the area of Database Change Management. 28 | 29 | Don't worry if the description doesn't exactly match your situation. These descriptions are meant to be examples of situations that would qualify for the associated score. 30 | 31 | 1. Manual and Error-Prone: Database changes are made manually, with a high risk of errors. Deployments are slow, sometimes taking hours to complete, and sometimes requiring downtime. 32 | 2. Partially Automated: Some database changes are automated, but many changes require manual intervention and/or testing to complete. 33 | 3. Mostly Automated: Most database changes are made using a fully automated process, with some manual review and/or testing. Changes are generally deployed quickly, taking minutes. Reliability is fairly good, with few failed changes. 34 | 4. Fully Automated and Zero-Downtime: All database changes are made using a fully automated process, with no manual intervention or approval required. Changes are deployed rapidly, taking seconds or minutes, and the process is highly reliable with zero downtime to dependent applications. When failures are introduced, they’re automatically and safely reverted. 35 | 36 | The number you selected represents your overall score for this capability. If you feel like your team or organization fits somewhere in between two scores, it's okay to use a decimal. For example, if you think database changes in your team or organization are somewhere between partially automated and mostly automated, then you would score a 2.5. 
37 | 38 | Generally, an overall score equal to or less than 3 means you'll likely gain a lot of value from experimenting with some of the supporting practices listed here. An overall score higher than 3 generally means you and your team are largely proficient, or well on your way to becoming proficient, in the area of Database Change Management; you would likely benefit from evaluating your scores in other capabilities. 39 | 40 | ## Supporting Practices 41 | 42 | The following is a curated list of supporting practices to consider when looking to improve your team's Database Change Management capability. While not every practice will be beneficial in every situation, this list is meant to provide teams with fresh, pragmatic, and actionable ideas to support this capability. 43 | 44 | ### [Automate Database Migrations](/practices/automate-database-migrations.md) 45 | 46 | Implementing automated database migrations ensures database schema changes are consistently applied. Typically, this practice works in concert with code changes, ensuring the whole system is integrated, tested, and deployed in a unified manner. 47 | 48 | ### Store Database Changes in Version Control 49 | 50 | Keep all database schema changes as scripts in version control alongside application code. This practice ensures that changes are tracked, auditable, and integrated into automated deployment pipelines. It also allows teams to create production-like databases in their local environments. 51 | 52 | ### [Create and Manage Ephemeral Environments](/practices/create-and-manage-ephemeral-environments.md) 53 | 54 | Creating and managing ephemeral environments provides flexible, production-like testing environments that can be spun up on-demand. These temporary environments reduce conflicts, promote early bug detection, and improve reproducibility. Integrated into CI/CD pipelines, they offer continuous and immediate feedback on code changes, whether those changes are made to the application, database, infrastructure, or some combination of the three. 55 | 56 | ### Follow the Parallel Change Pattern 57 | 58 | When you rename a database column, you're breaking any applications that depend on the column having the original name. By following the parallel change pattern, you make these otherwise breaking changes in a phased approach that ensures no applications depend on the outdated schema by the time the outdated bits are removed. Implementing this technique allows teams to deploy database schema changes without disrupting any services, improving availability. 59 | 60 | ### Draft an Annual DBA-Dev Team Working Agreement 61 | 62 | Every year, get each application development team together with the DBA team to cover each team's roadmap of changes. Then, help the teams draft a working agreement that outlines how and when the teams will collaborate with each other. Doing so will ensure DBAs are kept in the loop of big upcoming changes and application developers are getting the support they need from the DBA team. 63 | 64 | ## Adjacent Capabilities 65 | 66 | The following capabilities will be valuable for you and your team to explore, as they are either: 67 | 68 | - Related (they cover similar territory to Database Change Management) 69 | - Upstream (they are a pre-requisite for Database Change Management) 70 | - Downstream (Database Change Management is a pre-requisite for them) 71 | 72 | ### [Version Control](/capabilities/version-control.md) - Upstream 73 | 74 | Version Control is fundamental to Database Change Management. 
Storing database changes in version control enables teams to effectively track, review, and coordinate the changes. It ensures that database modifications are synchronized with application code, facilitating consistent and reliable deployments. 75 | 76 | ### [Continuous Delivery](/capabilities/continuous-delivery.md) - Downstream 77 | 78 | Continuous Delivery involves automating the release process to enable frequent and reliable deployments. With an effective Database Change Management capability in place, you can be assured that schema changes are versioned, tested, and deployed alongside application updates, reducing bottlenecks in the delivery pipeline. This directly supports the Continuous Delivery capability. 79 | 80 | ### [Deployment Automation](/capabilities/deployment-automation.md) - Related 81 | 82 | Deployment automation minimizes manual steps during releases, reducing errors and speeding up the deployment process. A key aspect of automating deployment is automating database migrations. By automating database changes, teams ensure consistent and repeatable deployments across environments. 83 | -------------------------------------------------------------------------------- /capabilities/documentation-quality.md: -------------------------------------------------------------------------------- 1 | # [Documentation Quality](https://dora.dev/capabilities/documentation-quality/) 2 | 3 | _Documentation_ describes a range of internal decisions, processes, or policies. Software developers typically create and maintain documentation on technical design details, code changes, product requirements, common pitfalls in the tech stack, service interactions, and testing plans. 4 | 5 | _Documentation quality_ refers to the accuracy, clarity, completeness, and accessibility of internal documentation. Excellent documentation enables teams to effectively collaborate, make informed decisions, and deliver high-quality software quickly and reliably. 6 | 7 | ## Nuances 8 | 9 | This section outlines common pitfalls, challenges, or limitations teams commonly encounter when applying this capability. The goal here is not to discourage you. Rather, the goal is to arm you with the appropriate context so that you can make an informed decision about when and how to implement the capability with your teams. 10 | 11 | ### Importance of Clear and Findable Documentation 12 | 13 | It's not enough for documentation to exist; it must also be clear and findable. Clear and findable documentation is essential for effective knowledge sharing and collaboration within teams. Without it, teams may struggle to understand existing systems, leading to a decrease in development speed and reliability. Misleading documentation can be worse than missing documentation. 14 | 15 | ### Reliability and Maintenance of Documentation 16 | 17 | Once documentation is created, the work is not done. Developers must make a deliberate effort to review and refine documentation regularly. The cadence for auditing documentation will look different for different teams. For some teams, reality becomes out of step with documentation every day. For other teams, reality becomes out of step with documentation every month. The goal is to ensure documentation remains reliable and accurate without spinning your wheels. With practice, teams can strike a balance between writing quick, low-level documentation that points people in the right direction and writing more abstract or complex documentation that's unlikely to change much over time. 
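One lightweight way to keep that review cadence honest is to script the first pass. The sketch below is illustrative rather than part of this repository's tooling: it assumes the docs are Markdown files tracked in git, that the script runs from the repository root, and that the 90-day threshold is a placeholder to tune to your team's cadence.

```python
"""Flag Markdown docs that haven't been touched recently.

A rough starting point for choosing what to look at in the next
documentation audit. Assumptions: the script runs at the root of a
git repository, docs are .md files, and "stale" means no commits
for STALE_AFTER_DAYS days.
"""
import subprocess
import time
from pathlib import Path

STALE_AFTER_DAYS = 90  # placeholder; tune to your team's audit cadence


def last_commit_epoch(path: Path) -> int | None:
    # Ask git when this file last changed; returns None for untracked files.
    out = subprocess.run(
        ["git", "log", "-1", "--format=%ct", "--", str(path)],
        capture_output=True, text=True, check=False,
    ).stdout.strip()
    return int(out) if out else None


def main() -> None:
    cutoff = time.time() - STALE_AFTER_DAYS * 24 * 60 * 60
    for doc in sorted(Path(".").rglob("*.md")):
        epoch = last_commit_epoch(doc)
        if epoch is not None and epoch < cutoff:
            age_days = int((time.time() - epoch) // 86400)
            print(f"{age_days:>4}d  {doc}")


if __name__ == "__main__":
    main()
```

A commit date is only a proxy for accuracy, so treat the output as a list of candidates to review rather than a verdict on the documentation itself.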
18 | 19 | ## Assessment 20 | 21 | To assess how mature your team or organization is in this capability, complete this short exercise. 22 | 23 | Consider the descriptions below and score yourself on the Documentation Quality capability. Generally, score a 1 if Documentation Quality is minimal or completely lacking from your team or organization, a 2 if it is basic and you feel there is a lot of room for improvement, a 3 if it is good and you feel there is some room for improvement, and a 4 if your team or organization is exemplary in the area of Documentation Quality. 24 | 25 | Don't worry if the description doesn't exactly match your situation. These descriptions are meant to be examples of situations that would qualify for the associated score. 26 | 27 | 1. **Minimal:** The technical documentation is often outdated, incomplete, or inaccurate, making it difficult to rely on when working with the services or applications. It's hard to find what is needed, and others are often asked for help. 28 | 2. **Basic:** The technical documentation is somewhat reliable, but it's not always easy to find what is needed. Updates are sporadic, and multiple sources must be dug through to get the required information. In times of crisis, the documentation might be glanced at, but it's not always trusted. 29 | 3. **Good:** The technical documentation is generally reliable, and what is needed can usually be found with some effort. Updates are made regularly, but not always immediately. The documentation is used to help troubleshoot issues, but clarification from others might still be needed. 30 | 4. **Excellent:** The technical documentation is comprehensive, accurate, and up-to-date. What is needed can easily be found, and the documentation is relied on heavily when working with the services or applications. When issues arise, the documentation is confidently reached for to help troubleshoot and resolve problems. 31 | 32 | The number you selected represents your overall score for this capability. If you feel like your team or organization fits somewhere in between two scores, it's okay to use a decimal. For example, if you think your team or organization has somewhere between basic and good Documentation Quality, you would score a 2.5. 33 | 34 | Generally, an overall score equal to or less than 3 means you'll likely gain a lot of value from experimenting with some of the supporting practices listed here. An overall score higher than 3 generally means you and your team are largely proficient, or well on your way to becoming proficient, in the area of Documentation Quality; you would likely benefit from evaluating your scores in other capabilities. 35 | 36 | ## Supporting Practices 37 | 38 | The following is a curated list of supporting practices to consider when looking to improve your team's Documentation Quality capability. While not every practice will be beneficial in every situation, this list is meant to provide teams with fresh, pragmatic, and actionable ideas to support this capability. 39 | 40 | ### [Use Documentation Auto-Generation Tooling](/practices/use-documentation-auto-generation-tooling.md) 41 | 42 | Automate the creation of documentation using tools that generate comprehensive and up-to-date documentation directly from the source code or configuration files. This practice ensures that documentation stays in sync with the codebase, reducing the manual effort required to maintain it, and minimizing the risk of outdated or incomplete information. 
Tools like [Swagger](https://github.com/swagger-api) create executable documentation, while tools like [RepoAgent](https://github.com/OpenBMB/RepoAgent) use LLMs to automatically keep the documentation in sync as the codebase changes. 43 | 44 | ### [Implement a Documentation Search Engine](/practices/implement-a-documentation-search-engine.md) 45 | 46 | Implementing a documentation search engine enables team members to quickly find relevant documentation, reducing the time spent searching for information and increasing productivity. For example, [Obsidian](https://obsidian.md/) links documents together, creating a map of related ideas. With [Notion](https://www.notion.so/), users can tag and filter documents for quick organizing and categorizing. And [Confluence](https://www.atlassian.com/software/confluence) gives teams secure, shared access to large volumes of organized documentation. 47 | 48 | ### [Schedule Regular Documentation Audits](/practices/schedule-regular-documentation-audits.md) 49 | 50 | As systems and processes evolve, it's important to spend some time auditing your existing documentation to ensure it's accurate. Scheduling regular documentation audits ensures that documentation remains up-to-date and consistent with the codebase it describes. This practice better aligns documentation and software development, enhancing the overall quality and reliability of both the codebase and the supporting documents. 51 | 52 | ### Create Runbooks 53 | 54 | Runbooks provide step-by-step guidance that helps teams resolve issues quickly and consistently, especially during high-stress or time-sensitive situations. They reduce the need to rely on tribal knowledge, make onboarding easier, and ensure continuity when team members are unavailable. A good runbook is concise, action-oriented, and regularly updated based on real-world usage. 55 | 56 | ## Adjacent Capabilities 57 | 58 | The following capabilities will be valuable for you and your team to explore, as they are either: 59 | 60 | - Related (they cover similar territory to Documentation Quality) 61 | - Upstream (they are a pre-requisite for Documentation Quality) 62 | - Downstream (Documentation Quality is a pre-requisite for them) 63 | 64 | ### [Continuous Integration](/capabilities/continuous-integration.md) - Downstream 65 | 66 | High-quality documentation is an essential pre-requisite for effective continuous integration, as it enables teams to understand the existing systems and make informed decisions about changes. 67 | 68 | ### [Learning Culture](/capabilities/learning-culture.md) - Upstream 69 | 70 | If a culture of learning and knowledge sharing hasn't been established, high-quality documentation will not be actively created or maintained. This can lead to large gaps of documentation or worse, out-of-date or incorrect information. 71 | 72 | ### [Code Maintainability](/capabilities/code-maintainability.md) - Related 73 | 74 | Good documentation is critical for maintaining code quality, as it helps developers understand the codebase and make informed decisions about changes. 75 | -------------------------------------------------------------------------------- /capabilities/empowering-teams-to-choose-tools.md: -------------------------------------------------------------------------------- 1 | # [Empowering Teams To Choose Tools](https://dora.dev/capabilities/teams-empowered-to-choose-tools/) 2 | 3 | This capability is about empowering teams to select the tools and technologies that best support their unique workflows and tasks. 
The DORA research has found that the most qualified decision-makers are the individual contributors themselves. When teams are empowered to choose their own tools, we tend to see higher software delivery performance and increases in job satisfaction. 4 | 5 | ## Nuances 6 | 7 | ### Balance Choice with Complexity 8 | 9 | It’s crucial to balance tool choice with the potential costs of acquisition, support, and added complexity that come with adding new tools to a tech stack. 10 | 11 | While a baseline set of tools should be established across an organization, teams should feel free to choose a new tool or technology if they feel strongly that it is best suited for their use case. Making this choice means they must also _support_ the new tool(s), which can be a tall order. 12 | 13 | Take, for example, a team's choice to use a different language. Once this decision is made, the team will need to build a new CI/CD pipeline and ensure its environments are suitable to run their code in the new language. They'll also need to introduce other new tooling that scans for security vulnerabilities and a host of other tasks, which the organization's platform team likely already handles. After all, the role of the platform team is to keep the technology stack compliant and efficient. 14 | 15 | As you can see, one team's decision to use another tool has added a layer of complexity in terms of support. 16 | 17 | ### Risk of Tool Overproliferation 18 | 19 | Allowing teams to choose their tools doesn't mean unrestricted freedom. 20 | 21 | Too much freedom in tool choice can lead to a fragmented tech stack, increasing technical debt and making the infrastructure more fragile over time. Each new tool requires maintenance and integration efforts, which can dilute the benefits of the new technology. For these reasons, it's essential to define a standardized process for evaluating and adopting new tools, and to ensure teams understand the implications of their choices. 22 | 23 | ## Assessment 24 | 25 | To assess how mature your team or organization is in this capability, complete this short exercise. 26 | 27 | Consider the descriptions below and score your team on this capability. Generally, score a 1 if your team's toolset is insufficient and there is no clear way for team members to adopt new technologies, a 2 if your team has an adequate but limited toolset and you feel there is a LOT of room for improvement, a 3 if your team's toolset is capable but there is some room for improvement, and a 4 if your team is using superior tools and is empowered to recommend new tools when necessary. 28 | 29 | Don't worry if the description doesn't exactly match your situation. These descriptions are meant to be examples of situations that would qualify for the associated score. 30 | 31 | 1. Insufficient Tools: The current tools are inadequate for getting the job done, and there is no clear way to evaluate or adopt new ones. 32 | 2. Adequate but Limited: The current tools are sufficient but limited, and new tools are occasionally adopted through an informal process. 33 | 3. Capable and Evolving: The current tools are capable of meeting needs, and a standardized process is in place for evaluating and adopting new tools should the need arise. 34 | 4. Best-in-Class Tools: The best tools available are used to get the job done, and new tools are proactively researched and teams are empowered to recommend their adoption via a standardized process. 35 | 36 | The number you selected represents your overall score for this capability. 
If you feel like your team or organization fits somewhere in between two scores, it's okay to use a decimal. For example, if you think your team's toolset represents something between adequate and capable, then you would score a 2.5. 37 | 38 | Generally, an overall score equal to or less than 3 means you'll likely gain a lot of value from experimenting with some of the supporting practices listed below. An overall score higher than 3 generally means you and your team are largely proficient, or well on your way to becoming proficient, in being empowered to choose tools; you would likely benefit from evaluating your scores in other capabilities. 39 | 40 | ## Supporting Practices 41 | 42 | The following is a curated list of supporting practices to consider when looking to empower your team to choose tools. While not every practice will be beneficial in every situation, this list is meant to provide teams with fresh, pragmatic, and actionable ideas to support this capability. 43 | 44 | ### [Establish Golden Paths](/practices/establish-golden-paths.md) 45 | 46 | Spotify coined the term [Golden Path](https://engineering.atspotify.com/2020/08/how-we-use-golden-paths-to-solve-fragmentation-in-our-software-ecosystem/), which means selecting a set of officially supported tools that engineers throughout the organization can use. This baseline tooling should be comprehensive enough to address most organizational needs, including programming languages, libraries, testing tools, data storage, and infrastructure. This practice promotes standardization while allowing flexibility for exceptional cases. The benefit is a balanced approach to tooling that minimizes the risk of technical debt while allowing teams to deliver value to their stakeholders using the best tools available. 47 | 48 | ### [Schedule Regular Tooling Audits](/practices/schedule-regular-tooling-audits.md) 49 | 50 | Conduct regular audits of the organization's toolset to evaluate its effectiveness and relevance. During these audits, discuss and document the benefits and drawbacks of current tools, and explore new technologies. Make sure to invite commentary from all levels of the organization. This practice helps keep the tools aligned with the organization’s evolving goals and fosters a culture of continuous improvement and adaptation. 51 | 52 | ### [Schedule Time for Experimentation](/practices/schedule-time-for-experimentation.md) 53 | 54 | Regularly allocate time for teams to experiment with new tools, such as through hackathons or innovation days. Encourage team members to test new technologies and assess their suitability for the organization's needs. This practice fosters a culture of experimentation and growth, which can lead to significant performance improvements and greater team engagement. 55 | 56 | ## Adjacent Capabilities 57 | 58 | The following capabilities will be valuable for you and your team to explore, as they are either: 59 | 60 | - Related (they cover similar territory to Empowering Teams to Choose Tools) 61 | - Upstream (they are a pre-requisite for Empowering Teams to Choose Tools) 62 | - Downstream (Empowering Teams to Choose Tools is a pre-requisite for them) 63 | 64 | ### [Generative Organizational Culture](/capabilities/generative-organizational-culture.md) - Related 65 | 66 | A generative organizational culture promotes learning, innovation, and trust, which are essential for empowering teams to make their own tool choices. 
By fostering a culture that values experimentation and feedback, teams are more likely to feel confident in recommending and selecting tools that best fit their evolving needs. 67 | 68 | ### [Team Experimentation](/capabilities/team-experimentation.md) - Related 69 | 70 | Team experimentation is closely related to empowering teams to choose tools, as it encourages a mindset of testing and evaluating new approaches. A culture that values experimentation allows teams to rapidly iterate and find the best solutions for their specific challenges. 71 | 72 | ### [Job Satisfaction](/capabilities/job-satisfaction.md) - Downstream 73 | 74 | The DORA research has found that job satisfaction tends to be higher in teams that feel empowered to choose their own tools. 75 | 76 | ### [Code Maintainability](/capabilities/code-maintainability.md) - Downstream 77 | 78 | The DORA research has found that code maintainability tends to be higher in teams that feel empowered to choose their own tools. 79 | -------------------------------------------------------------------------------- /capabilities/monitoring-systems-to-inform-business-decisions.md: -------------------------------------------------------------------------------- 1 | # [Monitoring Systems to Inform Business Decisions](https://dora.dev/capabilities/monitoring-systems/) 2 | 3 | Monitoring isn’t just about uptime -- it’s about insight. The real power of monitoring lies in connecting system signals to business outcomes. 4 | 5 | Every system emits data: response times, errors, user behavior. But these aren’t just technical metrics; they're proxies for customer experience, revenue, cost, and product impact. 6 | 7 | When you use metrics to inform business decisions, the question shifts from simply *What’s happening?* to *What does it mean?* Why aren't users adopting our latest feature? Where are we leaking revenue? Which investments are actually moving the needle? These questions can be answered when the data is available, accessible, and analyzed properly. 8 | 9 | When teams frame monitoring around business questions, data becomes a tool for learning, not just for observing daily operations. This shift encourages hypotheses, fuels iteration, and ties technical work to customer value. 10 | 11 | Monitoring also shortens the feedback loop. Insights from operations or support flow upstream to dev and product, enabling checks on earlier decisions and faster course-correction. Start with business goals, then instrument your systems to ask, and answer, *the right questions*. Make insights accessible, relevant, and timely for all stakeholders. 12 | 13 | Monitoring done well turns noise into narrative and day-to-day visibility into strategic clarity. 14 | 15 | ## Nuances 16 | 17 | This section outlines common pitfalls, challenges, or limitations teams commonly encounter when applying this capability. The goal here is not to discourage you. Rather, the goal is to arm you with the appropriate context so that you can make an informed decision about when and how to implement the capability with your teams. 18 | 19 | ### No Ownership or Accountability 20 | 21 | Monitoring data is only useful if someone acts on it. When no one is clearly responsible for interpreting or responding to signals, issues linger or repeat. Assigning ownership for specific metrics or alerts helps ensure follow-through and continuous improvement. 
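One way to make that ownership explicit and reviewable is to keep a small catalog that pairs every monitored signal with an accountable owner, a runbook, and the business question it answers. The sketch below is only an illustration; the signal names, team names, and runbook paths are invented, and the same information could just as easily live as labels in whatever alerting or dashboard tool you already use.

```python
"""A tiny ownership catalog for monitored signals.

Illustrative only: the signals, teams, and runbook paths are made up.
The goal is that nothing the team watches is left without a named
owner and a documented next step.
"""
from dataclasses import dataclass


@dataclass(frozen=True)
class Signal:
    name: str
    owner: str              # team accountable for interpreting and responding
    runbook: str            # where the response steps are documented
    business_question: str  # what decision this signal is meant to inform


CATALOG = [
    Signal("checkout_error_rate", "payments-team",
           "docs/runbooks/checkout-errors.md",
           "Are we losing revenue to failed checkouts?"),
    Signal("signup_funnel_dropoff", "growth-team",
           "docs/runbooks/signup-funnel.md",
           "Why aren't visitors becoming users?"),
]


def unowned(catalog: list[Signal]) -> list[str]:
    """Return the names of signals nobody is accountable for."""
    return [s.name for s in catalog if not s.owner or not s.runbook]


if __name__ == "__main__":
    missing = unowned(CATALOG)
    print("Every signal has an owner." if not missing else f"Unowned signals: {missing}")
```

Reviewing a catalog like this alongside the dashboards themselves makes it obvious when a signal has been added without anyone on the hook to act on it.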
22 | 23 | ### Tracking Everything and Understanding Nothing 24 | 25 | Monitoring everything can create noise and overwhelm teams with data, making it difficult to pinpoint critical insights. The goal isn’t more data, it’s to access and analyze the right data to make better decisions. When you contextualize data with historical comparisons or business relevance, you can more fully understand and use it to make informed decisions. 26 | 27 | ## Assessment 28 | 29 | To assess how mature your team or organization is in this capability, complete this short exercise. 30 | 31 | Consider the descriptions below and score yourself on this capability. Generally, score a 1 if monitoring is limited or completely lacking from your team or organization, a 2 if it is basic and you feel there is a LOT of room for improvement, a 3 if it is maturing and you feel there is some room for improvement, and a 4 if your team or organization is exemplary in the area of Monitoring Systems to Inform Business Decisions. 32 | 33 | Don't worry if the description doesn't exactly match your situation. These descriptions are meant to be examples of situations that would qualify for the associated score. 34 | 35 | 1. **Ad-hoc Monitoring:** Monitoring is done on an as-needed basis, with little formal process or visibility into system performance. Data is not used to inform business decisions. 36 | 2. **Basic Monitoring:** Some monitoring data is collected and reported, but it is not regularly used to inform business decisions. 37 | 3. **Mature Monitoring:** Monitoring data is regularly collected and used to inform business decisions, but there is room for improvement in terms of data quality and scope. 38 | 4. **Strategic Monitoring:** Monitoring is a key part of the organization's strategy, with high-quality data collected and used to drive business decisions and optimize system performance. 39 | 40 | The number you selected represents your overall score for this capability. If you feel like your team or organization fits somewhere in between two scores, it's okay to use a decimal. For example, if you think your team or organization has somewhere between basic and mature monitoring, then you would score a 2.5. 41 | 42 | Generally, an overall score equal to or less than 3 means you'll likely gain a lot of value from experimenting with some of the supporting practices listed here. An overall score higher than 3 generally means you and your team are largely proficient, or well on your way to becoming proficient, in the area of Monitoring Systems to Inform Business Decisions; you would likely benefit from evaluating your scores in other capabilities. 43 | 44 | ## Supporting Practices 45 | 46 | The following is a curated list of supporting practices to consider when looking to improve your team's Monitoring Systems to Inform Business Decisions capability. While not every practice will be beneficial in every situation, this list is meant to provide teams with fresh, pragmatic, and actionable ideas to support this capability. 47 | 48 | ### Adopt Double-loop Learning 49 | 50 | Double-loop learning goes beyond tracking outcomes; it connects your work to the assumptions behind your strategy. Instead of just asking *Are we hitting our numbers?*, this practice encourages you to ask *Are we working on the right things to drive those numbers, and are our assumptions still valid?* 51 | 52 | When you practice double-loop learning, you map the relationships between projects, input metrics, and business KPIs. 
This lets you: 53 | 54 | - See how current efforts are (or aren’t) moving the right metrics 55 | - Adjust course when assumptions prove false 56 | - Align teams by making strategy visible and testable 57 | 58 | Rather than treating dashboards as static reports, double-loop learning turns them into dynamic systems for continuous improvement. 59 | 60 | ### Train Teams on Data Interpretation 61 | 62 | Provide training sessions for teams to understand how to interpret monitoring data and apply it to their roles. Focus on teaching team members how to relate metrics to business objectives. This will enable more informed decision making and provide context for operational changes. 63 | 64 | ## Adjacent Capabilities 65 | 66 | The following capabilities will be valuable for you and your team to explore, as they are either: 67 | 68 | - Related (they cover similar territory to Monitoring Systems to Inform Business Decisions) 69 | - Upstream (they are a pre-requisite for Monitoring Systems to Inform Business Decisions) 70 | - Downstream (Monitoring Systems to Inform Business Decisions is a pre-requisite for them) 71 | 72 | ### [Team Experimentation](/capabilities/team-experimentation.md) - Downstream 73 | 74 | Monitoring systems provide the data necessary for teams to experiment with confidence, enabling them to measure the impact of changes and validate hypotheses. In turn, successful experimentation relies on robust monitoring to inform decisions and adjust strategies quickly. 75 | 76 | ### [Monitoring and Observability](/capabilities/monitoring-and-observability.md) - Upstream 77 | 78 | Monitoring and Observability provide the data and platforms that inform sound business decision making. Without reliable signals from production systems, it’s difficult to measure user behavior, system impact, or the success of new features. 79 | 80 | ### [Visual Management](/capabilities/visual-management.md) - Upstream 81 | 82 | Visual Management makes monitoring data easy to see and understand. It helps teams and stakeholders quickly spot what’s working, what’s broken, and what needs attention. Without clear visuals, important trends or problems can get missed. When monitoring data is shown in simple, useful ways -- like dashboards or kanban boards -- it has a greater impact on business decisions. 83 | -------------------------------------------------------------------------------- /capabilities/pervasive-security.md: -------------------------------------------------------------------------------- 1 | # [Pervasive Security](https://dora.dev/capabilities/pervasive-security/) 2 | 3 | Under Construction 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | -------------------------------------------------------------------------------- /capabilities/test-data-management.md: -------------------------------------------------------------------------------- 1 | # [Test Data Management](https://dora.dev/capabilities/test-data-management/) 2 | 3 | High-performing teams implement a Test Data Management (TDM) strategy to ensure tests reliably confirm expected system behaviors. They achieve this by making relevant and realistic data readily available. Effective TDM enables teams to confidently release high-quality software faster by ensuring test data reliability. 4 | 5 | There are many different types of tests: unit, integration, end-to-end,  regression, performance, exploratory, etc. 
Effective TDM strategies consider the kind of test, the specific context of each test environment, and the data requirements unique to the software being tested. 6 | 7 | Some examples of various types of tests and their data needs: 8 | 9 | - **Unit tests** typically require small, simple, and deterministic bits of data or state. These are often ad-hoc and work best when combined with common stub or dummy data helpers like factories. 10 | - **Integration tests** may need an in-memory or lightweight database. These tests benefit from a controlled data seeding script that reflects expected relationships between entities. 11 | - **End-to-end tests** often require a more complete environment. Using Docker volumes for test databases or allowing some data to be marked as test-specific data (e.g., with a special tag or prefix) can make setup and teardown of tests easier. 12 | - **Performance tests** usually need environments that very closely resemble production, including database sizes, infrastructure, and traffic patterns. Using anonymized production data is often a practical option, but teams should weigh the security risks before going that route. 13 | - **Exploratory tests** depend on flexible, manually curated data or scenarios. Having the ability to quickly generate or manipulate data on demand is helpful. 14 | - **Security tests** often need both typical and atypical input data to probe for vulnerabilities and weaknesses in data handling or access control. 15 | 16 | Understanding the needs of each test type helps shape a TDM strategy that enables faster, safer, and more reliable software delivery. 17 | 18 | ## Nuances 19 | 20 | This section highlights common pitfalls teams face with Test Data Management. Awareness of these challenges empowers your team to implement this capability effectively and avoid costly missteps. 21 | 22 | ### Risks of Copying Production Data 23 | 24 | Copying production data or replaying production traffic in test environments can make tests feel realistic, but it often introduces security risks. Even if you scrub sensitive info, mistakes still happen. You might miss a schema change or run into a bug in the scrubbing tool. Sometimes that's okay, but in other cases, it's a serious problem. To stay safe, use clear data policies, double-check your scrubbing process, and use fake or synthetic data when the cost of a leak is high. 25 | 26 | ### Outdated or Irrelevant Test Data 27 | 28 | Test data becomes stale for many reasons: leftover data from test runs, time-sensitive use cases, schema or business rule changes, and updates to external dependencies like APIs or third-party services. When mocks or test fixtures fall out of sync with real systems, tests may pass while production fails. Regular validation and cleanup help keep tests meaningful and trustworthy. 29 | 30 | ### Inadequate Test Data Isolation 31 | 32 | Sharing test data across tests or environments often leads to inconsistent results, especially during parallel runs. For example, if multiple tests rely on the same user account, simultaneous access can introduce conflicts that real users wouldn’t encounter. This lack of isolation makes debugging harder and increases test flakiness. To ensure reliability, each test should generate and manage its own isolated data as part of its setup.
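To make the isolation point concrete, here is a minimal sketch of a test that creates and owns all of its data. It assumes pytest and Python's built-in `sqlite3` module, and the schema and field names are invented for illustration; the same idea applies to whatever test framework and datastore your team already uses.

```python
"""Each test builds its own data -- nothing is shared between tests.

Illustrative sketch only: the `users` table and the deactivation
scenario are made up. The point is that every test gets a fresh
in-memory database and a unique record, so parallel runs cannot
interfere with one another.
"""
import sqlite3
import uuid

import pytest


@pytest.fixture
def db():
    # A brand-new in-memory database per test: fast, isolated, disposable.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id TEXT PRIMARY KEY, email TEXT, active INTEGER)")
    yield conn
    conn.close()


@pytest.fixture
def user_id(db):
    # Generate a unique user for *this* test instead of reusing a shared account.
    new_id = str(uuid.uuid4())
    db.execute("INSERT INTO users VALUES (?, ?, 1)", (new_id, f"{new_id}@example.test"))
    return new_id


def test_deactivating_a_user(db, user_id):
    db.execute("UPDATE users SET active = 0 WHERE id = ?", (user_id,))
    (active,) = db.execute("SELECT active FROM users WHERE id = ?", (user_id,)).fetchone()
    assert active == 0
```

Because the fixture owns both setup and teardown, the test never depends on leftover state, and it doubles as a preview of the in-memory database practice described below.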
While not every practice will be beneficial in every situation, this list is meant to provide teams with fresh, pragmatic, and actionable ideas to support this capability. 37 | 38 | ### Create In-Memory Databases For Local Testing 39 | 40 | Running tests against an in-memory database speeds up execution, avoids reliance on external systems, and ensures full test isolation. Developers gain complete control over their data, enabling consistent, reproducible test conditions without affecting teammates. This approach improves both reliability and performance. Ultimately, that makes local testing faster and more predictable. 41 | 42 | ### [Create and Manage Ephemeral Environments](/practices/create-and-manage-ephemeral-environments.md) 43 | 44 | Ephemeral environments make it easier to manage test data by providing isolated, production-like spaces that can be spun up on demand. Each environment starts with a clean state, enabling teams to generate, load, and reset data without affecting others. This reduces test flakiness, ensures consistent results, and supports more accurate debugging. When integrated into CI/CD, ephemeral environments give fast, reliable feedback across app, data, and infrastructure changes. 45 | 46 | ### Use Data Generation Tools 47 | 48 | Leverage tools that automate test data creation based on predefined schemas and rules. Data generation tools help teams create relevant and varied datasets, enabling them to cover a wider range of test scenarios. By automating this process, teams reduce the time and effort spent on data management and improve test coverage. 49 | 50 | ### Shadow Production with Traffic Replay 51 | 52 | Replay scrubbed production traffic in test environments to validate changes using real-world data patterns. This technique improves test data realism without compromising user privacy, helping teams uncover edge cases and regressions that synthetic data often misses. It’s especially useful for testing config changes, infrastructure updates, and complex interactions that rely on authentic request flows. Combined with observability tools, traffic replay provides a powerful feedback loop for detecting easy to miss behavioral and performance anomalies in code changes. 53 | 54 | ## Adjacent Capabilities 55 | 56 | The following capabilities will be valuable for you and your team to explore, as they are either: 57 | 58 | - Related (they cover similar territory to Test Data Management) 59 | - Upstream (they are a pre-requisite for Test Data Management) 60 | - Downstream (Test Data Management is a pre-requisite for them) 61 | 62 | ### [Test Automation](/capabilities/test-automation.md) - Downstream 63 | 64 | Test automation works best when it has the right data. If test data is missing, wrong, or hard to set up, automated tests can fail or miss important bugs. Good test data management makes sure tests have what they need, without requiring manual setup. This helps teams run more tests, faster, and with more confidence. 65 | 66 | ### [Continuous Integration](/capabilities/continuous-integration.md) - Downstream 67 | 68 | Continuous Integration works best when tests have access to the right data. Since code changes are merged often, tests need to run constantly, and they can’t do that if the test data is missing or unreliable. Good test data management keeps the process smooth by making sure every test run starts with the data it needs. This helps teams catch bugs early, avoid integration issues, and move faster with confidence. 
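As a small illustration of the points above (in-memory databases for local testing, and every test run starting with exactly the data it needs), here is a sketch assuming a Python and pytest stack; the `users` table and factory helper are hypothetical stand-ins for your own schema:

```python
import sqlite3
import pytest

@pytest.fixture
def db():
    """Every test gets its own throwaway in-memory database, so runs never share state."""
    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
    yield connection
    connection.close()

def make_user(connection, email="person@example.com"):
    """Tiny factory helper so each test creates exactly the data it needs."""
    cursor = connection.execute("INSERT INTO users (email) VALUES (?)", (email,))
    connection.commit()
    return cursor.lastrowid

def test_user_lookup(db):
    user_id = make_user(db)
    row = db.execute("SELECT email FROM users WHERE id = ?", (user_id,)).fetchone()
    assert row == ("person@example.com",)
```

Because the database lives only in memory and only for the duration of one test, setup is fast, teardown is automatic, and parallel runs cannot interfere with each other.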
69 | -------------------------------------------------------------------------------- /capabilities/visual-management.md: -------------------------------------------------------------------------------- 1 | # [Visual Management](https://dora.dev/capabilities/visual-management/) 2 | 3 | Visual Management, as a capability, means using displays to make work visible and actionable. In the same way factories use lights, color codes, and floor markings to highlight flow and flag issues, software teams can use kanban boards, deployment monitors, and system dashboards to surface progress, priorities, and problems in real time. The goal of visual management is not just to share information but to make obvious what’s normal, what’s not, and what needs to be done next. When visual cues are clear, teams can stay aligned, maintain focus on critical processes, and solve problems faster. 4 | 5 | ## Nuances 6 | 7 | This section outlines common pitfalls, challenges, or limitations teams commonly encounter when applying this capability. The goal here is not to discourage you. Rather, the goal is to arm you with the appropriate context so that you can make an informed decision about when and how to implement the capability with your teams. 8 | 9 | ### Systems-level Constraints Get Ignored 10 | 11 | When a board shows a failing build, high WIP, or missed throughput goals, it’s a signal, not a root cause. The mistake comes when teams treat that signal as the full story and rush to solve for the symptom without asking why the issue occurred in the first place. Ensure the deeper, systems-level root causes aren't overlooked. 12 | 13 | ### If Teams Don’t Choose It, They Won’t Use It 14 | 15 | Dashboards don’t drive behavior -- people do. If a metric isn’t meaningful to the team, it becomes background noise. When teams help choose what’s tracked, they’re more likely to question it, act on it, and improve it. Visual management is not just about measurement -- it’s about ownership. 16 | 17 | ### Ensure Tools Are Simple and Actionable 18 | 19 | The power of visual management lies in what you don’t need to explain. If a display takes more than a few seconds to understand, it’s not helping. It's common for dashboards to collect dust, partly because their utility is unclear or complicated. Good visualizations surface tension in the system and point to next steps without adding friction or noise. 20 | 21 | ### Evolve Visual Displays Regularly 22 | 23 | What was once insightful can become invisible through repetition. Visuals must grow with the team -- surfacing new risks, bottlenecks, or goals as they emerge. Treat dashboards like code: Refactor them often to reflect what matters now. 24 | 25 | ### Don’t Confuse Activity with Progress 26 | 27 | Highly active boards can give a false sense of momentum. Just because a lot is happening doesn't mean the right things are happening. Visual management should highlight key metrics and value flow, not just motion, so teams can spot when effort *isn’t* translating into outcomes. 28 | 29 | ## Assessment 30 | 31 | To assess how mature your team or organization is in this capability, complete this short exercise. 32 | 33 | Consider the descriptions below and score yourself on the Visual Management capability. 
Generally, score a 1 if visual management is limited or completely lacking from your team or organization, a 2 if it is basic and you feel there is a LOT of room for improvement, a 3 if it is informative and you feel there is some room for improvement, and a 4 if your team or organization is exemplary in the area of Visual Management. 34 | 35 | Don't worry if the description doesn't exactly match your situation. These descriptions are meant to be examples of situations that would qualify for the associated score. 36 | 37 | 1. **No Visibility:** No visual management displays or dashboards are used. Teams lack visibility into their processes and progress. 38 | 2. **Basic Dashboards:** Simple dashboards or visual displays are used, but they are not regularly updated. Teams do not actively use them to inform their work. 39 | 3. **Informative Displays:** Visual management displays are used to track key metrics and progress. Teams regularly review and update them to inform their work and identify areas for improvement. 40 | 4. **Real-time Feedback:** Advanced visual management displays provide real-time feedback and insights. Teams can quickly identify and address issues, and make data-driven decisions to adjust their priorities and drive continuous improvement. 41 | 42 | The number you selected represents your overall score for this capability. If you feel like your team or organization fits somewhere in between two scores, it's okay to use a decimal. For example, if you think visual management is somewhere between basic and informative, then you would score a 2.5. 43 | 44 | Generally, an overall score equal to or less than 3 means you'll likely gain a lot of value from experimenting with some of the supporting practices listed here. An overall score higher than 3 generally means you and your team are largely proficient, or well on your way to becoming proficient, in the area of Visual Management; you would likely benefit from evaluating your scores in other capabilities. 45 | 46 | ## Supporting Practices 47 | 48 | The following is a curated list of supporting practices to consider when looking to improve your team's Visual Management capability. While not every practice will be beneficial in every situation, this list is meant to provide teams with fresh, pragmatic, and actionable ideas to support this capability. 49 | 50 | ### Incorporate Visual Displays Into Team Planning Meetings 51 | 52 | Don’t let dashboards collect dust. Visual displays are most effective when they’re woven into the rhythm of team decision-making. Use them in planning meetings to ground conversations in reality. Look at what’s blocked, what’s flowing, and what’s at risk. When visuals reflect the team’s current situation, they become a shared language for prioritization and focus. 53 | 54 | ### Host Targeted Retrospectives Aimed At Refreshing Visual Displays 55 | 56 | Visual management isn’t a set-it-and-forget-it practice. Just like code or architecture, dashboards benefit from regular refactoring. Hosting retrospectives focused on what’s no longer useful, or what’s missing, helps keep displays sharp. Ask the team: What are we not seeing that we need to see? What’s become noise? The answers reveal what’s next for your display strategy. 57 | 58 | ### Shift Teams To Track Outcomes Instead of Outputs 59 | 60 | It’s easy to default to tracking what’s easy to count -- tickets closed, lines of code, story points. But these are outputs, not outcomes. 
To drive meaningful improvement, displays should connect work to its impact: customer behavior, system reliability, revenue generated, or time to resolve issues. When teams see the impact of their work, they can make smarter trade-offs and course-correct faster. 61 | 62 | ### Set Work-in-Process Limits 63 | 64 | While setting work-in-progress (WIP) limits is a DORA capability, it is also a technique that is actionable. So, we're including it here as a supporting practice. Visually tracking and enforcing WIP limits prevents bottlenecks and helps to maintain a steady flow. By limiting the number of tasks that are actively worked on, teams can achieve greater focus, reduce context switching, and enjoy enhanced flow efficiency. This leads to faster and smarter software delivery. 65 | 66 | ## Adjacent Capabilities 67 | 68 | The following capabilities will be valuable for you and your team to explore, as they are either: 69 | 70 | - Related (they cover similar territory to Visual Management) 71 | - Upstream (they are a pre-requisite for Visual Management) 72 | - Downstream (Visual Management is a pre-requisite for them) 73 | 74 | ### [Visibility of Work in the Value Stream](/capabilities/visibility-of-work-in-the-value-stream.md) - Related 75 | 76 | These two capabilities are closely related but serve different purposes in improving software delivery. Think of the value stream as your map and visual management as your GPS. Visibility of work in the value stream shows the entire route -- from idea to customer -- and helps spot systemic slowdowns like bottlenecks or rework loops. Visual management shows where you are right now, highlighting real-time progress, problems, and priorities. One gives strategic insight and the other supports day-to-day navigation. You need both to reach your destination efficiently. 77 | 78 | ### [Monitoring and Observability](/capabilities/monitoring-and-observability.md) - Upstream 79 | 80 | Metrics like user engagement, feature adoption, or system latency can be transformed into visual displays that give the team actionable feedback at a glance. Monitoring provides the raw signals. Visual Management then turns those signals into actionable cues that help the team spot issues, track improvements, and guide daily decisions. 81 | 82 | ### [Monitoring Systems to Inform Business Decisions](/capabilities/monitoring-systems-to-inform-business-decisions.md) - Downstream 83 | 84 | Visual Management makes monitoring data easy to see and understand. It helps teams and stakeholders quickly spot what’s working, what’s broken, and what needs attention. Without clear visuals, important trends or problems can get missed. When monitoring data is shown in simple, useful ways -- like dashboards or kanban boards -- it has a greater impact on business decisions. 85 | -------------------------------------------------------------------------------- /capabilities/work-in-process-limits.md: -------------------------------------------------------------------------------- 1 | # [Work in Process Limits](https://dora.dev/devops-capabilities/process/wip-limits/) 2 | 3 | Under Construction 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | -------------------------------------------------------------------------------- /contributions.md: -------------------------------------------------------------------------------- 1 | # Making A Contribution 2 | 3 | Thanks for considering making a contribution to our repository! 
We welcome new ideas and appreciate your willingness to share your experience with us. While this is a general-purpose repository, it is associated with Pragmint's brand. Therefore, we reserve the right to reject contributions if our experiences don't support the suggestions. 4 | 5 | We have three main types of pages contained in this repository: 6 | 7 | 1. **Capability** - A proficiency that elite-level teams possess, which helps them deliver reliable software systems quickly. Capability pages explain both what to do (at a high level) and why to do it. These capabilities are defined by the [DORA research team](https://dora.dev/). We summarize their findings and add context as deemed appropriate. 8 | 2. **Practice** - An actionable pattern, technique, or process employed by software professionals. Each practice page provides ideas for implementing a capability at a more granular level. For example, running pair programming sessions is a practice teams can use to help implement the Code Maintainability capability. Not every practice is a good idea for every team. Each practice supports one or more DORA capabilities. 9 | 3. **Resource** - Reference material that provides additional context, guidance, or support for understanding and implementing a practice. Resources may include articles, books, videos, workshops, code katas, roundtable discussion points, etc. Generally, resource pages expand on the underlying _how_ behind practices. 10 | 11 | We are looking for contributions in the following areas: 12 | 13 | 1. New practices. 14 | 2. New resources. 15 | 3. Feedback on our existing practices or any ideas presented in this repo. 16 | 4. Additional content to illustrate to our readers how a capability, practice, or resource can be beneficial. 17 | 5. Additional content describing common pitfalls, challenges, or limitations teams commonly encounter when applying certain practices or capabilities. 18 | 6. Typos or grammatical fixes. 19 | 20 | If you choose to contribute a new practice or resource, please try to follow our pre-established structure. We've created templates with guided instructions so you can more easily make a contribution. You can check them out in the [templates directory](/templates/). 21 | -------------------------------------------------------------------------------- /practices/address-resource-constraints-incrementally.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/address-resource-constraints-incrementally.md -------------------------------------------------------------------------------- /practices/automate-coding-standards.md: -------------------------------------------------------------------------------- 1 | # Automate Coding Standards 2 | 3 | Automate Coding Standards is a practice about maintaining a high level of code quality through automation. 4 | It involves using tools to enforce coding standards and conventions automatically, ensuring that code is consistently formatted and adheres to predefined quality metrics. 5 | This practice improves code readability, reduces errors, and facilitates the code review process by catching issues early without human intervention. 6 | It's a proactive approach to code quality, making it easier for teams to manage large code bases and collaborate effectively.
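To make this concrete, below is a minimal sketch of a Git pre-commit hook that enforces the agreed standard before code ever reaches review. It assumes a Python codebase with the `black` formatter and `ruff` linter installed; the tool choices are illustrative, and the same commands can run in the CI pipeline so the check still happens when a local hook is skipped.

```python
#!/usr/bin/env python3
"""Hypothetical .git/hooks/pre-commit script: block commits that violate team standards."""
import subprocess
import sys

CHECKS = [
    ["black", "--check", "."],  # formatting must already be applied
    ["ruff", "check", "."],     # lint rules the team has agreed on
]

def main() -> int:
    for command in CHECKS:
        if subprocess.run(command).returncode != 0:
            print(f"Commit blocked: `{' '.join(command)}` reported problems.")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```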
7 | 8 | ## Nuance 9 | 10 | ### Misconception: One-Size-Fits-All Approach 11 | Automated coding standards are highly configurable, allowing customization to fit specific project needs. 12 | It's a misconception that these tools enforce a rigid, universal standard across all projects. 13 | Team members should have the opportunity to suggest changes to the coding standards and those suggested changes should be discussed as a team. 14 | Strict adherence to automated standards without flexibility can stifle creativity. 15 | It's important to strike a balance between maintaining code quality and allowing developers the freedom to innovate. 16 | 17 | ### Tool Limitations 18 | Automated tools may not catch every type of issue, particularly those related to complex logic or architecture. 19 | Developers should be mindful of these limitations and not solely rely on these tools for ensuring code quality. 20 | 21 | ### Legacy Code Challenges 22 | Incorporating automated coding standards tools into existing projects, especially large or legacy code-bases, can be challenging. 23 | Lint rules and code fixes must be introduced in an incremental way. 24 | 25 | ### Development Workflow Integration 26 | Automatic Coding Standards tools should be incorporated as part of the development process. 27 | There are many options to do this: build scripts, IDEs/editors, pre-commit or pre-push hooks, code review/pull requests, or the Continuous Integration (CI) pipeline. 28 | The specific approach to incorporating automatic coding standards into the development workflow will vary depending on team preferences and the cost of running the process. 29 | 30 | ## How to Improve 31 | 32 | ### [Start A Book Club](/practices/start-a-book-club.md) 33 | 34 | - [Automate Your Coding Standard](https://github.com/97-things/97-things-every-programmer-should-know/tree/master/en/thing_04) 35 | 36 | This resource provides insights into the importance of automating coding standards to maintain code quality and consistency. 37 | It highlights how automated tools can help enforce coding conventions, making the codebase more manageable and the development process more efficient. 38 | 39 | - [One bite at a time](https://dev.to/christiankohler/one-bite-at-a-time-how-to-introduce-new-lint-rules-in-a-large-codebase-37ph) 40 | 41 | This resource is about introducing new lint rules in a large legacy codebase. 42 | It recommends a gradual approach, utilizing auto-fix capabilities wherever possible and manually addressing issues otherwise. 43 | It also proposes a secondary lint configuration for new rules, applied only to modified files via a pre-commit hook. 44 | This method is inspired by the Boy Scout Rule of leaving code better than you found it. 45 | 46 | ### [Do A Spike](/practices/do-a-spike.md) 47 | 48 | Implement what you learned in the article [Automate Your Coding Standard](https://github.com/97-things/97-things-every-programmer-should-know/tree/master/en/thing_04) with a project or module of your codebase. 49 | 50 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 51 | 52 | * Are our automated coding standards tools customized to reflect our specific coding practices and project needs, or are we using a one-size-fits-all approach? 53 | * Do team members understand the reasons behind certain coding rules?
54 | 55 | ## Supporting Capabilities 56 | 57 | ### [Continuous Integration](https://dora.dev/devops-capabilities/technical/continuous-integration/) 58 | The Automate Coding Standards practice improves Continuous Integration by ensuring that code committed to the repository adheres to predefined quality and style guidelines, facilitating smoother integration and fewer integration issues. 59 | 60 | ### [Code Maintainability](https://dora.dev/devops-capabilities/technical/code-maintainability/) 61 | This practice improves code maintainability by enforcing consistent coding standards across the codebase, making it easier to understand, modify, and extend the code over time. 62 | 63 | ### [Version Control](https://dora.dev/devops-capabilities/technical/version-control/) 64 | The practice of Automated Coding Standards requires robust version control systems to track and manage the enforcement of coding standards over time, ensuring that all code changes are compliant. 65 | 66 | ### [Streamlining change approval](https://dora.dev/devops-capabilities/process/streamlining-change-approval/) 67 | This automation reduces the need for extensive manual code reviews and oversight for style and basic issues, allowing teams to focus on more critical aspects of code quality and functionality during the review process. 68 | -------------------------------------------------------------------------------- /practices/automate-deployment.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/automate-deployment.md -------------------------------------------------------------------------------- /practices/automate-infrastructure-management.md: -------------------------------------------------------------------------------- 1 | # Automate Infrastructure Management 2 | 3 | Automate Infrastructure Management is a practice that automates the provisioning and management of IT infrastructure through code rather than manual processes. 4 | Utilizing tools like Terraform, IaC allows teams to efficiently deploy and manage servers, storage, and networking in a consistent, repeatable manner. 5 | This approach enhances agility, reduces human error, and ensures secure, compliant infrastructure setups. 6 | 7 | ## Nuance 8 | 9 | ### Understanding the Complexity of Setup 10 | Setting up Infrastructure as Code (IaC) can initially be complex, especially for organizations transitioning from manual infrastructure management. 11 | The initial investment in learning and setting up IaC tools and practices requires time and effort. 12 | 13 | ### Version Control is Crucial 14 | Treating infrastructure code with the same rigor as application code, including version control, is essential for maintaining consistency. 15 | 16 | ### Security and Compliance Challenges 17 | Ensuring security and compliance within IaC practices is not automatic. Teams must incorporate security practices into their IaC workflows, such as scanning for vulnerabilities and enforcing policy as code, to safeguard their infrastructure. 18 | 19 | ### Over-Automation Can Lead to Issues 20 | While automation is a key benefit of IaC, over-automation without proper checks can lead to issues. It's crucial to balance automation with oversight to prevent unintended changes that could disrupt services. 21 | 22 | ### The Learning Curve for New Tools 23 | Adopting IaC often means learning new tools and languages, such as Terraform or Ansible.
This learning curve can be a barrier for teams and requires dedicated time and resources to overcome. 24 | 25 | ### Environmental Parity Challenges 26 | Achieving parity across development, testing, and production environments is a goal of IaC. However, differences in these environments can lead to discrepancies, underscoring the need for comprehensive testing and validation strategies. 27 | 28 | ### Collaboration and Culture Shift 29 | Implementing IaC requires a shift in culture and collaboration within IT and development teams. Embracing IaC means moving away from siloed roles and towards more integrated DevOps practices. 30 | 31 | ### Dependence on External Providers 32 | Relying on external IaC tools and cloud providers introduces dependencies. It's important to understand the limitations and service agreements of these providers to avoid potential disruptions. 33 | 34 | ### The Importance of Documentation 35 | While IaC inherently documents infrastructure setups, additional documentation on the context, design decisions, and operational procedures is crucial for maintaining and scaling IaC practices effectively. 36 | 37 | ## How to Improve 38 | 39 | ### [Do A Spike](/practices/do-a-spike.md) 40 | 41 | #### IaC Tool Comparison 42 | 43 | Compare at least two IaC tools (e.g., Terraform vs. Ansible) by setting up a simple infrastructure (such as a web server) using both. Understand the strengths and weaknesses of each tool in terms of syntax, ecosystem, and community support. 44 | 45 | #### CI/CD Integration 46 | 47 | Integrate your IaC setup with a CI/CD pipeline (using Jenkins, GitLab CI, or GitHub Actions) to automate the deployment of infrastructure changes. 48 | Learn how automation in deployment processes reduces manual errors and speeds up delivery times. 49 | 50 | ### [Lead Workshops](/practices/lead-workshops.md) 51 | 52 | #### Immutable Infrastructure Deployment 53 | 54 | Deploy a set of infrastructure components, then simulate a "disaster" by destroying them. Re-deploy using only your IaC scripts. Gain confidence in the immutability and recoverability of your infrastructure through IaC practices. 55 | 56 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 57 | 58 | #### State of you Automation 59 | 60 | * Are you leveraging the latest tools and practices in IaC to ensure your infrastructure management is as efficient and secure as possible? 61 | 62 | * Consider whether your current approach to automation fully meets the needs of your organization's evolving infrastructure. 63 | 64 | #### Immutability 65 | 66 | * Reflect on the degree to which your infrastructure can be recreated from scratch with minimal manual intervention. 67 | * How does this impact your disaster recovery and scaling strategies? 68 | 69 | #### CI/CD Pipeline 70 | 71 | * Evaluate how seamlessly IaC is integrated into your continuous integration and continuous deployment (CI/CD) processes. 72 | * Are there areas where further automation or integration could reduce bottlenecks and improve deployment times? 73 | 74 | 75 | #### How Collaborative Is Your IaC Approach? 76 | 77 | * Think about the level of collaboration between your development, operations, and security teams in managing and evolving your IaC strategy. 78 | * Is there a culture of shared responsibility and knowledge sharing, or are silos hindering your progress? 
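To ground the spike and discussion ideas above in something runnable, here is a minimal sketch using Pulumi's Python SDK (Terraform or Ansible would serve the same purpose). The bucket name and tags are hypothetical, and the snippet assumes the `pulumi` and `pulumi-aws` packages are installed and cloud credentials are configured.

```python
"""__main__.py of a hypothetical Pulumi project: infrastructure declared as version-controlled code."""
import pulumi
import pulumi_aws as aws

# Declaring the bucket in code means its existence, naming, and tagging are
# reviewed like any other change and can be recreated identically on demand.
artifact_bucket = aws.s3.Bucket(
    "build-artifacts",
    tags={"team": "platform", "managed-by": "pulumi"},
)

# Exported outputs can feed CI/CD steps or other stacks.
pulumi.export("artifact_bucket_name", artifact_bucket.bucket)
```

Running `pulumi preview` in the CI pipeline shows exactly what a deployment would change, which is one practical answer to the automation and collaboration questions above.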
79 | 80 | ### [Start A Book Club](/practices/start-a-book-club.md) 81 | 82 | #### [Codify your infrastructure so it can also be version controlled](https://dzone.com/articles/secure-terraform-delivery-pipeline-best-practices) 83 | 84 | This resource provides a comprehensive guide on implementing a secure Terraform delivery pipeline, emphasizing the importance of codifying infrastructure to leverage version control. It outlines best practices for managing infrastructure as code (IaC) securely, including how to automate the deployment process, enforce policy as code, and integrate security checks. The article is valuable for understanding how to efficiently and securely manage infrastructure changes within a version-controlled environment. 85 | 86 | ## Supporting Capabilities 87 | 88 | ### [Continuous Integration](https://dora.dev/devops-capabilities/technical/continuous-integration/) #core 89 | Infrastructure as Code (IaC) can improve Continuous Integration by automating the provisioning of test environments. This ensures that code can be integrated and tested frequently, reducing integration issues and accelerating development cycles. 90 | 91 | ### [Continuous Delivery](https://dora.dev/devops-capabilities/technical/continuous-delivery/) #core 92 | IaC automates and documents the process for deploying applications, making Continuous Delivery (CD) achievable by ensuring that every change can be deployed to production safely and quickly. 93 | 94 | ### [Deployment Automation](https://dora.dev/devops-capabilities/technical/deployment-automation/) #core 95 | IaC ensures that the infrastructure deployment is repeatable, predictable, and scalable. 96 | 97 | ### [Version Control](https://dora.dev/devops-capabilities/technical/version-control/) #core 98 | Automate Infrastructure Management practice enhances version control by allowing infrastructure to be versioned and tracked along with application code. 99 | 100 | ### [Test Automation](https://dora.dev/devops-capabilities/technical/test-automation/) #core 101 | IaC supports test automation by ensuring consistent, reproducible environments for testing. 102 | Automated tests can be run in environments that closely mimic production, improving test accuracy. 103 | 104 | ### [Flexible Infrastructure](https://dora.dev/devops-capabilities/technical/flexible-infrastructure/) #core 105 | IaC provides the ability to quickly provision, configure, and decommission infrastructure resources on demand, leading to a more flexible and responsive IT infrastructure. 106 | 107 | ### [Monitoring and Observability](https://dora.dev/devops-capabilities/technical/monitoring-and-observability/) #core 108 | IaC can automate the setup of monitoring and logging tools across environments, ensuring comprehensive observability and the ability to react to issues based on real-time data. 109 | 110 | ### [Database Change Management](https://dora.dev/devops-capabilities/technical/database-change-management/) #core 111 | IaC facilitates database change management by automating database provisioning, updates, and rollbacks, ensuring consistency across environments. 112 | 113 | ### [Empowering Teams to Choose Tools](https://dora.dev/devops-capabilities/technical/teams-empowered-to-choose-tools/) #core 114 | IaC empowers teams by allowing them to define infrastructure through code using tools that best fit their project requirements and workflows. 
115 | -------------------------------------------------------------------------------- /practices/automate-test-coverage-checks.md: -------------------------------------------------------------------------------- 1 | # Automate Test Coverage Checks 2 | 3 | Automating test coverage ensures there is a baseline of test coverage for your software. 4 | Following this practice won't guarantee the quality or reliability of your tests. As such, it's not a sufficient check by itself. 5 | Nevertheless, it's usually a low-cost way to spot gaps in your codebase's test coverage. 6 | Integrating these checks into CI pipelines ensures continuous validation without slowing down development. 7 | 8 | ## Nuance 9 | 10 | ### Coverage Metrics vs. Test Quality 11 | 12 | It's important to prioritize the quality of tests over coverage percentages. 13 | Teams may focus solely on increasing coverage numbers without ensuring that tests are effective in catching bugs and edge cases. 14 | 15 | ### Balancing Speed and Coverage 16 | 17 | While automating test coverage checks speeds up validation processes, overemphasizing coverage goals can lead to diminishing returns. 18 | Setting overly ambitious coverage targets may slow down development or lead to superficial tests that don't add substantial value. 19 | It's important to strike a balance between achieving sufficient coverage and maintaining a productive development pace. 20 | 21 | ### Non-Functional Test Considerations 22 | 23 | Automated test coverage often focuses on functional aspects of software, such as correctness and behavior. 24 | However, neglecting non-functional tests—like performance, security, and usability—can leave important aspects of software quality unverified. 25 | Integrating non-functional tests into automated pipelines ensures comprehensive software validation. 26 | For instance, performance tests can identify bottlenecks, security tests can detect vulnerabilities, and usability tests can improve user experience. 27 | None of those types of tests fit neatly into a traditional "coverage" check. 28 | 29 | ### Continuous Improvement 30 | 31 | Automating test coverage checks should not be a one-time setup but an ongoing process of refinement and improvement. 32 | Teams should regularly review and adjust coverage thresholds based on evolving project requirements, feedback from testing outcomes, and changes in software functionality. 33 | 34 | ## How to Improve 35 | 36 | ### [Start A Book Club](/practices/start-a-book-club.md) 37 | 38 | #### [Test Coverage](https://martinfowler.com/bliki/TestCoverage.html) 39 | 40 | In his blog post on test coverage, Martin Fowler explores the concept of test coverage as a tool for identifying untested code rather than as a definitive measure of test quality. 41 | He argues that while high test coverage percentages can highlight which parts of the code are exercised by tests, they do not necessarily indicate the effectiveness of those tests. 42 | Fowler emphasizes that test coverage should be used alongside other techniques and metrics to assess the robustness of tests, and that focusing solely on coverage numbers can lead to superficial or inadequate testing. 43 | He advocates for a balanced approach that combines test coverage with thoughtful test design and evaluation to achieve meaningful software quality.
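To complement the reading, here is a hedged sketch of an automated coverage gate a CI pipeline could run. It uses the `coverage` and `pytest` packages; the `src` directory and the 80% floor are assumptions to replace with your own agreed values, and the `pytest-cov` plugin's `--cov-fail-under` flag achieves the same result with less code.

```python
"""check_coverage.py: run the test suite and fail the build when coverage drops below the agreed floor."""
import sys
import coverage
import pytest

MINIMUM_PERCENT = 80.0  # assumed team-agreed threshold, not a universal target

def main() -> int:
    cov = coverage.Coverage(source=["src"])  # measure only the application code
    cov.start()
    test_result = pytest.main(["tests", "-q"])
    cov.stop()
    cov.save()
    total = cov.report()  # prints a summary and returns the overall percentage
    if test_result != 0:
        return int(test_result)
    if total < MINIMUM_PERCENT:
        print(f"Coverage {total:.1f}% is below the {MINIMUM_PERCENT:.0f}% floor.")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

As the nuances above stress, treat the threshold as a tripwire for missing tests, not as evidence that the existing tests are good.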
44 | 45 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 46 | 47 | #### Tailoring and Adjusting Test Coverage 48 | 49 | * Are our current coverage thresholds realistic and tailored to the specific needs of different modules within our application? 50 | * How often do we review and adjust our coverage metrics to align with evolving project requirements? 51 | 52 | #### Effectiveness of Test Coverage 53 | 54 | * Do our tests catch bugs and edge cases, or are they merely boosting our coverage numbers? 55 | * Are we adequately addressing non-functional testing, such as performance, security, and usability, in our automated test coverage? 56 | 57 | #### Challenges and Lessons in Test Coverage Implementation 58 | 59 | * Are there any cultural or organizational barriers that prevent us from fully implementing this practice? 60 | * What lessons can we learn from past experiences to enhance our future approach to automated test coverage? 61 | 62 | ## Supporting Capabilities 63 | 64 | ### [Test Automation](/capabilities/test-automation.md) 65 | 66 | Automating test coverage checks supports the Test Automation capability by ensuring continuous and immediate feedback on code changes within the CI pipeline. 67 | This practice identifies untested code early, helping prevent bugs and regressions, and aligns with a consistent testing strategy. 68 | By maintaining realistic coverage thresholds for different modules, it optimizes testing efforts, enhances collaboration between testers and developers, and ultimately improves software quality and stability throughout the delivery lifecycle. 69 | -------------------------------------------------------------------------------- /practices/backup-data-daily.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/backup-data-daily.md -------------------------------------------------------------------------------- /practices/build-a-single-binary.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/build-a-single-binary.md -------------------------------------------------------------------------------- /practices/build-consistent-testing-strategy.md: -------------------------------------------------------------------------------- 1 | # Build a Consistent Testing Strategy 2 | 3 | Building a consistent testing strategy involves matching your approach to the specific needs of your project and considering the influence of your technology stack. 4 | Prioritize testing critical and complex code areas, and select appropriate types of tests, such as unit, integration, and end-to-end tests. 5 | Balance the proportion of each test type, favoring more unit tests and fewer high-cost tests like end-to-end tests. 6 | Ensure comprehensive test coverage for new and modified code by including tests in the definition of done and verifying them during code reviews. 7 | Manage expensive tests by regularly reviewing, deleting, and replacing them with more efficient alternatives. 8 | Integrate tests into the continuous integration pipeline and spread knowledge of the testing strategy and best practices among the development team. 
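One lightweight way to make the strategy visible in the codebase itself is to label tests by type so the pipeline can run the cheap ones on every commit and the expensive ones later. The sketch below uses pytest markers; the marker names, the `apply_discount` function, and the placeholder tests are hypothetical.

```python
import pytest

# Markers are normally registered once in pytest.ini or pyproject.toml, e.g.:
#   markers =
#       integration: touches a real database or network
#       e2e: exercises the whole deployed system

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical domain function, defined here so the example is self-contained."""
    return price * (1 - percent / 100)

def test_discount_is_applied():
    """Unit test: small, deterministic, runs on every commit."""
    assert apply_discount(price=100, percent=10) == 90

@pytest.mark.integration
def test_order_is_persisted():
    """Integration test: would use a seeded, lightweight database."""
    pytest.skip("placeholder: wire up a database fixture here")

@pytest.mark.e2e
def test_checkout_happy_path():
    """End-to-end test: keep only a handful and run them less often."""
    pytest.skip("placeholder: drive the deployed system here")
```

A CI pipeline can then run `pytest -m "not integration and not e2e"` for fast feedback and reserve the slower suites for later stages, keeping the test pyramid balanced in practice.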
9 | 10 | ## Nuance 11 | 12 | ### One-Size-Fits-All Testing Strategy 13 | 14 | A common misconception is that a single testing strategy can be applied to all projects. 15 | In reality, each project has unique requirements and constraints, which means that the testing strategy must be customized accordingly. 16 | Factors such as project size, complexity, technology stack, and team expertise influence the choice of testing tools, frameworks, and the balance of different test types. 17 | 18 | ### Balancing Test Coverage and Maintenance 19 | 20 | While comprehensive test coverage is important, it is equally important to balance it with the maintainability of tests. 21 | Over-ambitious coverage goals can lead to a large test suite that is difficult to maintain and slows down the development process. 22 | Focus on covering critical paths and high-risk areas thoroughly, and ensure that tests are easy to update as the codebase evolves. 23 | Regularly refactor tests to keep them relevant and maintainable. 24 | 25 | ### Excessive End-to-End Testing 26 | 27 | End-to-end tests are important for verifying the complete functionality of an application, but they are also resource-intensive and time-consuming. 28 | Relying too heavily on end-to-end tests can slow down the continuous integration pipeline and make the testing process less efficient. 29 | Additionally, end-to-end tests are prone to flakiness due to their reliance on multiple integrated components and external dependencies. 30 | When these tests fail intermittently, it can be challenging to pinpoint the exact cause, leading to wasted time investigating false negatives or non-reproducible issues. 31 | To mitigate these challenges, it's advisable to replace some of these tests with more targeted unit and integration tests that provide faster feedback on specific code functionalities, reducing the overall dependency on end-to-end testing. 32 | 33 | ### Manual Testing is Obsolete 34 | 35 | With the rise of automation, there is a misconception that manual testing is no longer necessary. However, manual exploratory testing plays a critical role in identifying unexpected issues and usability problems that automated tests might miss. Manual testers bring human intuition and creativity to the testing process, uncovering edge cases and user experience issues that automated scripts cannot replicate. 36 | 37 | ## How to Improve 38 | 39 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 40 | 41 | #### Evaluate Current Testing Strategy 42 | 43 | Schedule a team meeting to review your current testing strategy. 44 | Analyze your existing practices, tools, and processes. 45 | Document the strengths, weaknesses, and any gaps in your approach. 46 | Discuss potential improvements and prioritize them based on impact and feasibility. 47 | After implementing changes, reassess after a few sprints to measure improvements in bug detection and code quality. 48 | 49 | 50 | 51 | ### [Lead a Workshop](/practices/lead-a-workshop.md) 52 | 53 | #### Develop a Testing Strategy Document 54 | 55 | Create a detailed testing strategy document that outlines your project's specific testing needs. Include information on test types, coverage goals, and maintenance practices. 56 | Share this document with your team and solicit feedback. 57 | Implement the strategy and monitor its impact on your testing process and overall code quality. 58 | 59 | #### Balance Your Test Pyramid 60 | 61 | Analyze your current distribution of unit, integration, and end-to-end tests.
62 | Compare it to the ideal test pyramid, which suggests having more unit tests, fewer integration tests, and even fewer end-to-end tests. 63 | Adjust your test suite to better align with this model, and observe changes in test execution times and reliability. 64 | 65 | #### Integrate Tests into Continuous Integration (CI) Pipeline 66 | 67 | If not already done, integrate your tests into the CI pipeline. 68 | Ensure that all tests run automatically on each code commit. 69 | 70 | #### Conduct Manual Exploratory Testing Sessions 71 | 72 | Schedule regular manual exploratory testing sessions where team members test the application without predefined scripts. 73 | Focus on uncovering usability issues and edge cases. 74 | Compare the issues found through these sessions with those found through automated testing to understand the value added by manual exploration. 75 | 76 | ### [Run Pair Programming Sessions](/practices/run-pair-programming-sessions.md) 77 | 78 | ### Share and Educate Testing Best Practices 79 | 80 | Organize workshops or knowledge-sharing sessions where team members can learn about and discuss testing best practices. 81 | Encourage the team to share their experiences and tips. 82 | Track how this knowledge transfer impacts the quality and consistency of your testing efforts. 83 | 84 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 85 | 86 | #### Tailoring Your Testing Strategy 87 | 88 | * How well does your current testing strategy align with the specific needs and constraints of your project? 89 | * What unique aspects of your project (e.g., technology stack, team expertise) might require a different testing approach? 90 | 91 | #### Balancing Test Coverage and Maintenance 92 | 93 | * How often do you refactor your tests to ensure they remain relevant and maintainable as your codebase evolves? 94 | * How do you currently handle resource-intensive and time-consuming tests, such as end-to-end tests? 95 | * Are there opportunities to replace some of these expensive tests with more efficient alternatives? 96 | 97 | #### Integrating Manual Testing 98 | 99 | * Have you considered the role of manual exploratory testing in your overall strategy? 100 | * How can you use the creativity and intuition of manual testers to uncover issues automated tests might miss? 101 | 102 | #### Continuous Integration and Testing 103 | 104 | * How well integrated are your tests into your continuous integration pipeline? 105 | * Are your tests providing timely and reliable feedback during the development process? 106 | 107 | #### Spreading Testing Knowledge 108 | 109 | * How effectively is your testing strategy and best practices communicated and shared within your development team? 110 | * What steps can you take to ensure everyone on the team understands and follows the testing strategy? 111 | 112 | ## Supporting Capabilities 113 | 114 | ### [Test Automation](/capabilities/test-automation.md) 115 | 116 | Building a Consistent Testing Strategy supports the Test Automation capability by providing a structured approach to testing that aligns with project needs. 117 | It emphasizes balanced test coverage, integrates tests into CI pipelines for fast feedback, and combines automated and manual testing to ensure comprehensive quality and stability of software delivery. 
-------------------------------------------------------------------------------- /practices/check-documentation-consistency.md: -------------------------------------------------------------------------------- 1 | # Check Documentation Consistency 2 | 3 | This practice ensures that documentation remains aligned with the code and technical artifacts it describes. 4 | This involves regular updates and reviews to keep information current and accurate. 5 | By doing so, it reduces confusion, ease onboarding, and enhances project clarity and communication. 6 | 7 | ## Nuance 8 | 9 | ### Balancing Detail and Clarity 10 | Striking the right balance between providing enough detail to be useful and keeping the documentation clear and concise is crucial. 11 | Overly detailed documentation can be as confusing as documentation that's too vague. 12 | 13 | ### Version Control Challenges 14 | Managing documentation alongside different versions of code can introduce complexities. 15 | Ensuring that documentation reflects the correct version of the software it describes requires careful attention. 16 | 17 | ### Resource Allocation 18 | Documentation consistency checks require time and resources. 19 | Teams must balance the effort between writing code and updating documentation, which can be challenging in fast-paced development environments. 20 | 21 | ### Automated vs. Manual Updates 22 | While some aspects of documentation can be automated, such as API documentation generation, other parts require manual intervention. 23 | Deciding what to automate and what to manually update is a nuanced decision that impacts consistency and efficiency. 24 | 25 | ### Audience Consideration 26 | The intended audience for the documentation (developers, end-users, stakeholders) affects how consistency is maintained. 27 | Technical details necessary for developers might not be relevant for end-users, requiring different versions or layers of documentation. 28 | 29 | ### Cultural Shifts 30 | Emphasizing documentation consistency often requires a cultural shift within the organization. 31 | Teams accustomed to prioritizing development over documentation may need to adjust their approach and values. 32 | 33 | ### Tooling and Infrastructure 34 | The choice of tools and infrastructure for managing documentation (e.g., wikis, documentation generators) can significantly impact the ease and effectiveness of maintaining consistency. 35 | 36 | ### Feedback Loops 37 | Establishing feedback loops with documentation users can help identify inconsistencies and areas for improvement, but managing this feedback effectively without overwhelming the team is a nuanced challenge. 38 | 39 | ### Documentation Decay 40 | Over time, even well-maintained documentation can become outdated if not regularly reviewed and updated, especially in rapidly evolving projects. 41 | Recognizing and addressing documentation decay is a continuous effort. 42 | 43 | ### Knowledge Silos 44 | Avoiding knowledge silos where only certain team members know how to update documentation is crucial for consistency. 45 | Ensuring that knowledge and responsibility are shared across the team prevents bottlenecks. 46 | 47 | ### [Lead Workshops](/practices/lead-workshops.md) 48 | 49 | #### Documentation Audience Analysis 50 | 51 | Conduct an analysis of your documentation's audience. Identify the different groups that use your documentation (e.g., developers, end-users, stakeholders) and assess whether the current documentation meets their needs. 
Adjust your documentation strategy based on the findings. 52 | 53 | #### Cross-Functional Documentation Workshops 54 | 55 | Host a workshop with members from different teams (development, QA, support, etc.) to collaboratively review and update sections of the documentation. This will help identify inconsistencies and gaps from diverse perspectives and foster a shared responsibility for documentation. 56 | 57 | ### [Dogfood Your Systems](/practices/dogfood-your-systems.md) 58 | 59 | #### Documentation Usability Testing 60 | 61 | Organize a usability testing session for your documentation with participants from your intended audience groups. Collect feedback on clarity, usefulness, and accessibility, and use this feedback to make targeted improvements. 62 | 63 | ### [Run Pair Programming Sessions](/practices/run-pair-programming-sessions.md) 64 | 65 | #### Knowledge Sharing Sessions 66 | 67 | Organize regular knowledge-sharing sessions where team members can present on areas of the codebase or technical artifacts they are experts in. Use these sessions to fill gaps in the documentation and ensure knowledge is not siloed. 68 | 69 | ### [Start A Book Club](/practices/start-a-book-club.md) 70 | 71 | ### [Two wrongs can make a right (and are difficult to fix)](https://github.com/97-things/97-things-every-programmer-should-know/tree/master/en/thing_86) 72 | Underscores the complexity of software development where two mistakes might cancel each other out, making them harder to identify and fix. It highlights the importance of thorough testing and documentation to prevent and resolve such issues effectively. 73 | 74 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 75 | 76 | #### Alignment with Development Processes 77 | 78 | * Are we updating documentation as part of our routine development tasks? 79 | * Are documentation updates included in our definition of done? 80 | * Are documentation updates included in our checklists? 81 | 82 | #### Automation Strategies 83 | 84 | * To what extent have we explored and implemented automation in our documentation processes? 85 | * Are there areas where automation could significantly improve consistency and efficiency? 86 | 87 | #### Audience Awareness 88 | 89 | * Do we clearly understand the different audiences for our documentation? 90 | * How are we tailoring documentation to meet the diverse needs of developers, end-users, and stakeholders? 91 | 92 | #### Cultural Adoption 93 | 94 | * Is the importance of documentation consistency recognized and valued across our team? 95 | * What steps can we take to foster a culture that prioritizes documentation alongside development? 96 | 97 | #### Addressing Documentation Decay 98 | 99 | * How do we monitor and address documentation decay? 100 | * What processes do we have in place to ensure documentation remains accurate over time? 101 | 102 | #### Preventing Knowledge Silos 103 | 104 | * What measures are we taking to prevent knowledge silos related to documentation practices? 105 | * How can we ensure that all team members are equipped to contribute to and update documentation? 106 | 107 | ## Supporting Capabilities 108 | 109 | ### [Version Control](https://dora.dev/devops-capabilities/technical/version-control/) 110 | Documentation Consistency Check enhances the use of version control by ensuring that documentation revisions are tracked alongside code changes. 
This alignment facilitates clear understanding of changes over time, supporting collaborative development and historical review. 111 | 112 | ### [Monitoring and Observability](https://dora.dev/devops-capabilities/technical/monitoring-and-observability/) 113 | By ensuring that documentation accurately reflects the system's architecture and behavior, Check Documentation Consistency enables more effective monitoring and observability. Accurate documentation provides a critical reference for understanding observed behaviors and diagnosing issues. 114 | 115 | ### [Documentation Quality](https://dora.dev/devops-capabilities/process/documentation-quality/) 116 | Documentation Consistency Check directly supports the enhancement of documentation quality by ensuring that the information is current, accurate, and aligned with the software and its development practices, thereby contributing to overall process improvement. 117 | -------------------------------------------------------------------------------- /practices/clean-git-history.md: -------------------------------------------------------------------------------- 1 | # Clean Git History 2 | 3 | The Clean Git History practice favors small, focused commits to aid navigation, search, and efficient code reviews. Each commit should be releasable, maintaining the codebase in a deployable state to minimize instability. On shared branches, commits are treated as immutable to avoid nasty conflicts. Adding context information, such as ticket or story references, to commits helps developers understand the requirements and business reasons behind changes. 4 | 5 | Small commits are particularly beneficial for debugging. They make it easier to pinpoint the exact commit causing an issue and, due to their limited scope, help identify the specific line of code responsible. Large commits complicate the process of locating the precise error due to their size. Thus, maintaining small, well-documented commits ensures a transparent and manageable git history. 6 | 7 | ## Nuance 8 | 9 | ### Mindful Contextual Comments 10 | 11 | While adding context information like ticket or story references is valuable, developers should avoid verbose or irrelevant comments within commit messages. 12 | Clear and concise contextual information enhances understanding, but excessive or redundant comments can clutter the history and distract from the essential changes. 13 | 14 | ### Striking a Balance in Commit Frequency 15 | 16 | Developers should aim for a balance, ensuring commits are neither too sparse (to capture meaningful progress) nor too frequent (to avoid cluttering the commit history). 17 | If developers wait to commit until their entire feature is complete (assuming it's not a minor change), then they're likely not committing frequently enough. 18 | When following the red/green/refactor flow from the [test-driven design](/practices/implement-test-driven-design.md) practice, then it's prudent to aim to commit every couple of red/green/refactor cycles. 19 | 20 | ### Collaborative Commit Practices 21 | 22 | In collaborative environments, it's crucial to establish clear guidelines for committing changes to avoid conflicts and maintain consistency. 23 | This includes agreeing on commit message formats, branch naming conventions, and the use of rebasing or merging strategies. 24 | Overlooking these practices can lead to confusion and inefficiencies in the development process. 
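Agreed formats are easiest to keep when a hook checks them automatically. Below is a hedged sketch of a `commit-msg` hook that requires a ticket reference; the `ABC-123`-style pattern is an assumption to adapt to your own tracker.

```python
#!/usr/bin/env python3
"""Hypothetical .git/hooks/commit-msg script: reject messages without a ticket reference."""
import re
import sys

TICKET_PATTERN = re.compile(r"\b[A-Z]{2,}-\d+\b")  # e.g. ABC-123; adjust to your tracker

def main() -> int:
    message_path = sys.argv[1]  # Git passes the path of the proposed commit message
    with open(message_path, encoding="utf-8") as handle:
        message = handle.read()
    if TICKET_PATTERN.search(message):
        return 0
    print("Commit message must reference a ticket, e.g. 'ABC-123: explain the change'.")
    return 1

if __name__ == "__main__":
    sys.exit(main())
```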
25 | 26 | ### Educating Team Members 27 | 28 | Adopting clean Git history practices requires buy-in from all team members and ongoing education to ensure adherence. 29 | It's important to communicate the benefits of small, focused commits and provide training on effective commit strategies to maintain consistency across the development team. 30 | This education should include familiarizing developers with tools like `git log --grep="ticket-number"` to group and find commits related to specific tickets, and `git bisect` to identify the commit that introduced a bug. 31 | Training on these tools ensures developers can manage and navigate Git history efficiently, creating a culture of best practices and continual improvement. 32 | 33 | ### Use Automation 34 | 35 | While finding the right commit size should always be a judgement call, it may make sense to introduce some automation to ensure commit messages comply with agreed upon standards. [Git Hooks](https://git-scm.com/book/en/v2/Customizing-Git-An-Example-Git-Enforced-Policy#_an_example_git_enforced_policy) are one way to automate that enforcement. 36 | 37 | ## How to Improve 38 | 39 | ### [Lead A Demonstration](/practices/lead-a-demonstration.md) 40 | 41 | #### Git Bisect Debugging 42 | 43 | Introduce the git bisect tool and demonstrate its usage in identifying problematic commits. 44 | Set up a mock scenario where a bug is introduced in the codebase, and have team members use git bisect to pinpoint the exact commit causing the issue. 45 | 46 | ### [Lead Workshops](/practices/lead-workshops.md) 47 | 48 | #### Commit Frequency Audit 49 | 50 | Conduct an audit of recent commits to assess their frequency and relevance. 51 | Identify instances of both too frequent and too sparse commits. 52 | Based on this analysis, develop guidelines for when to commit changes, aiming for logical breakpoints or completion of significant functionality. 53 | Discuss as a team and adjust practices accordingly. 54 | 55 | ### [Start A Book Club](/practices/start-a-book-club.md) 56 | 57 | ### [Keep a Clean Git History](https://simplabs.com/blog/2021/05/26/keeping-a-clean-git-history/) 58 | Offers guidance on maintaining a clean Git commit history, emphasizing practices like squashing similar commits, crafting clear commit messages, and organizing changes logically to make the project's history navigable and understandable, crucial for effective code reviews and project oversight. 59 | 60 | ### [Staying Disciplined with Your Git History](https://8thlight.com/blog/makis-otman/2015/07/08/git-disciplined.html) 61 | Advocates for the disciplined management of Git history through methods like feature branching, minimizing the size of commits, and keeping branches updated via regular rebasing. Highlights the benefits of these practices for enhancing collaboration, facilitating project tracking, and simplifying code reversions. 62 | 63 | ### [Two Wrongs Can Make a Right (And Are Difficult to Fix)](https://github.com/97-things/97-things-every-programmer-should-know/tree/master/en/thing_86) 64 | Details strategies for properly amending Git history issues, such as errant commits or merge mistakes, without exacerbating problems. Includes practical advice and Git command examples for correcting history efficiently and effectively, focusing on common Git missteps and the complexities of rectifying them. 
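The Git Bisect Debugging demonstration described above pairs well with `git bisect run`, which automates the search by re-running a script at each step. The sketch below assumes a pytest-based project and a hypothetical `tests/test_checkout.py` that reproduces the regression.

```python
#!/usr/bin/env python3
"""bisect_check.py: used as `git bisect run python bisect_check.py` after
`git bisect start`, `git bisect bad HEAD`, and `git bisect good <known-good-sha>`."""
import subprocess
import sys

# Re-run only the test that reproduces the regression; a non-zero exit code
# tells git bisect to mark the commit it is currently testing as bad.
result = subprocess.run([sys.executable, "-m", "pytest", "tests/test_checkout.py", "-q"])
sys.exit(result.returncode)
```

Because each small commit changes only a little, the commit that bisect identifies usually points directly at the offending lines.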
65 | 66 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 67 | 68 | #### Commit Message Clarity and Relevance 69 | 70 | * Are our commit messages providing clear and relevant context, or do they risk becoming verbose or tangential? 71 | * How can we ensure that our commit messages strike the right balance between providing necessary context and avoiding unnecessary clutter? 72 | 73 | #### Collaborative Commit Practices 74 | 75 | * Do we have clear guidelines in place for committing changes in our collaborative environment? 76 | * Are we consistently following agreed-upon commit message formats, branch naming conventions, and merging strategies? 77 | 78 | #### Educating Team Members 79 | 80 | * Have we effectively communicated the benefits of clean Git history practices to all team members? 81 | * Are our team members equipped with the necessary training and tools to navigate Git history effectively? 82 | 83 | ## Supporting Capabilities 84 | 85 | ### [Version Control](/capabilities/version-control.md) 86 | 87 | A clean Git history is fundamental to effective version control, enabling precise tracking, easier code reviews, and better management of project codebases. 88 | -------------------------------------------------------------------------------- /practices/clean-tests.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/clean-tests.md -------------------------------------------------------------------------------- /practices/conduct-incident-reviews.md: -------------------------------------------------------------------------------- 1 | ## Resources 2 | 3 | [Incident Review and Postmortem Best Practices](https://newsletter.pragmaticengineer.com/p/incident-review-best-practices) -------------------------------------------------------------------------------- /practices/conduct-retrospective-meetings.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/conduct-retrospective-meetings.md -------------------------------------------------------------------------------- /practices/create-and-manage-ephemeral-environments.md: -------------------------------------------------------------------------------- 1 | # Create and Manage Ephemeral Environments 2 | 3 | Creating and managing ephemeral environments involves setting up temporary, production-like environments that mimic the live system. 4 | These environments are transient, existing only for the duration needed for testing, development, or other purposes. 5 | If done well, ephemeral environments can make manual and automated testing in an integrated environment simple and reliable. 6 | Ephemeral environments can be automated using Infrastructure as Code (IaC), updated via Continuous Integration/Continuous Deployment (CI/CD) pipelines, and can sometimes handle copies of production traffic to assess performance and reliability against real-world conditions. 7 | 8 | ## Nuance 9 | 10 | ### Achieving Full Production Parity 11 | 12 | One common misconception about ephemeral environments is the belief that they should perfectly replicate production conditions, but achieving this level of parity can be prohibitively expensive for many companies. 
13 | Instead, the primary goal of ephemeral environments is to make it easy and convenient for developers to spin up a simulated environment on-demand, allowing them to focus on getting their work done efficiently without worrying about the cost or complexity of maintaining a near replica of production. 14 | Each company will have its own unique balance between cost and value when it comes to using ephemeral environments. 15 | 16 | ### Security and Data Privacy 17 | 18 | Security and data privacy are important considerations when implementing ephemeral environments. 19 | Handling sensitive data in transient setups requires careful planning to ensure compliance with regulatory requirements and safeguard against unauthorized access. 20 | Only populate ephemeral environments with sensitive or valuable data if/when required. 21 | Robust access controls, monitoring, and logging mechanisms should be in place to detect and respond to security incidents promptly. 22 | This ensures that despite the temporary nature of ephemeral environments, data integrity and confidentiality are maintained at all times. 23 | 24 | ### Application Complexity and Compatibility 25 | 26 | Legacy systems or applications with intricate dependencies may face challenges in ephemeral deployment due to compatibility issues or operational constraints. 27 | In such cases, adopting a hybrid approach—integrating both ephemeral and persistent environments strategically—can help strike a balance between flexibility and compliance. 28 | 29 | ## How to Improve 30 | 31 | ### [Start A Book Club](/practices/start-a-book-club.md) 32 | 33 | #### [What is an ephemeral environment?](https://webapp.io/blog/what-is-an-ephemeral-environment/) 34 | 35 | This article goes through the basics of ephemeral environments. It's a great resource for those new to the concept. 36 | 37 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 38 | 39 | * In what ways do you anticipate ephemeral environments will simplify troubleshooting and debugging for your team? 40 | * Are most bugs easily reproducible? 41 | * Are there a lot of "it works on my machine" issues? 42 | * Might ephemeral environments save resources compared to long-lived testing environments? 43 | * How familiar is your team with using Infrastructure as Code (IaC) for automating environments? 44 | * How do you envision integrating ephemeral environments into your CI/CD pipelines? 45 | * How will you ensure the security and privacy of sensitive data within ephemeral environments? 46 | * Which applications or systems might face challenges with ephemeral deployment, and why? 47 | * Does adopting a hybrid approach that combines ephemeral and persistent environments make sense for our use case? 48 | * What initial steps will you take to start adopting ephemeral environments in your organization? 49 | * How prepared is your team to adopt and manage ephemeral environments, and what training or resources will be necessary? 50 | * Should we conduct pilot tests of ephemeral environments before full-scale implementation, and what criteria will you use to evaluate their success? 51 | 52 | ## Supporting Capabilities 53 | 54 | ### [Continuous Integration](/capabilities/continuous-integration.md) 55 | 56 | Ephemeral environments enable developers to easily integrate and verify their code changes into a running environment that nothing else depends on. This reduces the risk of integrating broken changes, promoting a higher quantity of integrations. 
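To make the pipeline integration described above concrete, here is a minimal sketch of a per-branch ephemeral environment driven from a CI step. It assumes Docker Compose is installed, that the stack is described in a `docker-compose.yml` at the repository root, and that the CI system exposes the branch name in a `BRANCH_NAME` environment variable; all three are illustrative assumptions rather than requirements of this practice.

```python
"""Sketch: spin up and tear down a per-branch ephemeral environment."""
import os
import re
import subprocess
import sys


def project_name() -> str:
    # Derive an isolated Compose project name from the branch so several
    # branches can run side by side without clashing.
    branch = os.environ.get("BRANCH_NAME", "local")
    return "ephemeral-" + re.sub(r"[^a-z0-9]+", "-", branch.lower()).strip("-")


def compose(*args: str) -> None:
    subprocess.run(["docker", "compose", "-p", project_name(), *args], check=True)


def up() -> None:
    # Build images and start the stack in the background.
    compose("up", "--build", "--detach")


def down() -> None:
    # Remove containers, networks, and volumes so nothing lingers after the run.
    compose("down", "--volumes")


if __name__ == "__main__":
    {"up": up, "down": down}[sys.argv[1]]()
```

A pipeline would typically call `up` before its integration checks and `down` in an always-run cleanup step, which is what keeps the environment genuinely ephemeral.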
57 | 58 | ### [Test Automation](/capabilities/test-automation.md) 59 | 60 | Ephemeral environments make it easier to support automated tests that need to run in realistic, integrated environments without affecting any shared environments that other developers or end users depend on. For example, load tests are useful, but they can sometimes affect the performance of the system under test. If multiple teams or upstream systems depend on that degraded system, unexpected issues may result. 61 | 62 | ### [Test Data Management](/capabilities/test-data-management.md) 63 | 64 | For ephemeral environments to be effective, it's important for a good test data management system to be in place. 65 | 66 | ### [Version Control](/capabilities/version-control.md) 67 | 68 | If an ephemeral environment's infrastructure, code, configuration, and data are under version control, then it will increase the usefulness of the environment. When developers can pull different versions off the shelf to suit their needs, then it becomes easier to reproduce issues that only occur when a specific combination of infrastructure, code, configuration, and data exists. 69 | -------------------------------------------------------------------------------- /practices/decouple-from-third-parties.md: -------------------------------------------------------------------------------- 1 | # Decouple from Third Parties 2 | 3 | Minimize reliance on third-party software like platforms, frameworks, libraries, and external APIs by using patterns such as interfaces or shims. This shields the core system from changes in third-party tools, allowing seamless switching between implementations or versions. 4 | 5 | This practice also enables mock or stub implementations for testing without external dependencies. Decoupling code improves portability across platforms and provides flexibility to migrate to alternative solutions when needed. 6 | 7 | ## Nuance 8 | 9 | ### Balancing Decoupling with Pragmatism 10 | 11 | While decoupling is beneficial, overdoing it can lead to unnecessary complexity and abstraction. Be on the lookout for third-party software that has a high surface area with your codebase. For example, if you are using an ORM to fetch data, you likely don't want to pass those ORM objects all around your codebase. Doing so would make upgrading to future versions or moving to a new ORM extremely painful. If instead you're using a JSON serializer in a couple of places, it's probably overkill to "hide" that dependency as upgrading or replacing it would be fairly straightforward. Find the right balance between decoupling and practicality based on the specific requirements of your project. 12 | 13 | Some organizations make long-term agreements with third-party systems that significantly reduce the cost of operating their system. For example, Google Cloud signed some very client-favorable deals when they were trying to take market share away from AWS. For those organizations that signed long-term and favorable deals, it likely wasn't as important to build their systems in ways that avoided vendor lock-in. 14 | 15 | ### Testing Strategies 16 | 17 | Decoupling facilitates easier testing by using test doubles, such as mocks and stubs. However, it's crucial to keep these test doubles straightforward to prevent divergence from the real system's behavior. Overly complex test doubles can lead to false confidence in test results, as they may not accurately represent actual system interactions.
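As a minimal sketch of the shim idea, the snippet below hides an email-sending dependency behind a small interface and pairs it with a deliberately simple in-memory fake for tests. The vendor client passed to `VendorEmailSender` and its `send` call are hypothetical, invented purely for illustration; the point is that only the adapter touches the vendor, so tests never need elaborate mocks of it.

```python
from dataclasses import dataclass, field
from typing import Protocol


class EmailSender(Protocol):
    """The abstraction the rest of the codebase depends on."""

    def send(self, to: str, subject: str, body: str) -> None: ...


class VendorEmailSender:
    """Thin adapter around a hypothetical third-party SDK client."""

    def __init__(self, client) -> None:
        self._client = client  # vendor SDK object (illustrative)

    def send(self, to: str, subject: str, body: str) -> None:
        self._client.send(recipient=to, subject=subject, html_body=body)


@dataclass
class FakeEmailSender:
    """Test double: records calls instead of talking to the vendor."""

    sent: list = field(default_factory=list)

    def send(self, to: str, subject: str, body: str) -> None:
        self.sent.append((to, subject, body))


def notify_signup(sender: EmailSender, user_email: str) -> None:
    sender.send(user_email, "Welcome!", "Thanks for signing up.")


if __name__ == "__main__":
    fake = FakeEmailSender()
    notify_signup(fake, "dev@example.com")
    assert fake.sent == [("dev@example.com", "Welcome!", "Thanks for signing up.")]
```

Because the fake is nothing more than a list of recorded calls, it stays simple enough not to drift from the real sender's behavior, which is exactly the concern raised above.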
18 | 19 | Rather than creating complex scenarios with test doubles, consider enhancing test reliability by testing actual interactions. Move towards higher levels of the testing pyramid where integration and end-to-end tests validate real system behaviors, providing more confidence in the robustness of your software. 20 | 21 | Every situation is unique, so there's no one-size-fits-all guidance here. Be aware of the trade-offs you're making and use your best judgment. 22 | 23 | ## How to Improve 24 | 25 | ### [Do A Spike](/practices/do-a-spike.md) 26 | 27 | Choose an important dependency and refactor your code to introduce abstractions such as interfaces or abstract classes to encapsulate interactions with that dependency. 28 | Rewrite the implementations to depend on these abstractions rather than the concrete third-party tools. 29 | 30 | ### [Lead Workshops](/practices/lead-workshops.md) 31 | 32 | Start by identifying the dependencies your project currently has on third-party software, frameworks, or libraries. Make a list of these dependencies and assess how tightly coupled they are to your codebase. 33 | 34 | ### [Start A Book Club](/practices/start-a-book-club.md) 35 | 36 | - [Clean Architecture Article](https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html) 37 | 38 | The article delves into various architectural methodologies such as Hexagonal Architecture, Onion Architecture, Screaming Architecture, DCI, and BCE, with a focus on principles like framework independence, testability, and concern separation. 39 | It introduces "The Clean Architecture," centered on the Dependency Rule, depicted by concentric circles signifying different software domains and their corresponding responsibilities. 40 | Adhering to the Dependency Rule promotes high cohesion and low coupling. When managing dependencies with third parties, Clean Architecture provides a structured approach by encapsulating external dependencies within outer layers, effectively isolating them from core business logic. 41 | 42 | - [DIP in the Wild](https://martinfowler.com/articles/dipInTheWild.html) 43 | 44 | This article discusses the Dependency Inversion Principle (DIP) in software design and its application in managing dependencies with third parties. It illustrates various scenarios where the DIP can be useful, such as simplifying complex APIs, aligning library abstractions with domain concepts, rejecting external constraints, and controlling time-related dependencies. 45 | 46 | - [That's Not Yours](https://8thlight.com/insights/thats-not-yours) 47 | 48 | The article explores the pitfalls and benefits of using mock objects in test-driven development (TDD), emphasizing the principle of "Don't Mock What You Don't Own." 49 | The author discusses how improper use of mocks can lead to unreliable tests and proposes alternatives, such as wrapping third-party libraries in domain-specific objects. 50 | 51 | ### [Host A Viewing Party](/practices/host-a-viewing-party.md) 52 | 53 | - [Boundaries](https://www.destroyallsoftware.com/talks/boundaries) 54 | 55 | This presentation delves into the concept of using simple values rather than complex objects as the boundaries between components and subsystems in software development. It covers various topics such as functional programming, the relationship between mutability and object-oriented programming (OO), isolated unit testing with and without test doubles, and concurrency.
Understanding and implementing these concepts can be immensely beneficial in managing dependencies with third parties. 56 | 57 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 58 | 59 | * What are the key third-party dependencies we rely on in our projects? 60 | * Have we identified any single points of failure or critical dependencies on specific third-party tools? 61 | * Have there been instances where changes or updates to third-party tools have caused unexpected issues or disruptions in our projects? 62 | * What are the potential risks and drawbacks of maintaining high levels of dependency on third-party tools in the long term? 63 | * What steps can we take to future-proof our projects and mitigate risks associated with changes or discontinuation of third-party tools? 64 | 65 | ## Supporting Capabilities 66 | 67 | ### [Code Maintainability](/capabilities/code-maintainability.md) 68 | 69 | The Decouple from Third Parties practice significantly supports the Code Maintainability capability by advocating for the minimization of dependencies on third-party software, thereby ensuring that code remains adaptable and easy to maintain over time. By hiding dependencies behind interfaces rather than coupling to specific third-party tools, teams can enhance the portability of their code, facilitate comprehensive testing through the creation of mock or stub implementations, and enable flexibility in migration to alternative solutions if necessary. 70 | 71 | ### [Test Automation](https://dora.dev/devops-capabilities/technical/test-automation/) 72 | 73 | Decouple from Third Parties supports the Test Automation capability by advocating minimal dependency on third-party software, 74 | enabling teams to create and maintain fast, deterministic automated tests. 75 | By hiding dependencies behind interfaces, teams can enhance the portability of their code and facilitate testing. 76 | 77 | ### [Loosely Coupled Architecture](https://dora.dev/devops-capabilities/process/loosely-coupled-architecture/) 78 | 79 | This practice is linked to the Loosely Coupled DORA Capability by emphasizing the reduction of dependencies on external systems or services within the architectural design. 80 | By decoupling from third-party dependencies, teams can achieve greater autonomy and flexibility in their software development processes. 81 | This practice enables teams to make large-scale changes to their systems without external permissions or coordination, complete work without extensive communication with external entities, and deploy their products or services independently of external dependencies.
82 | -------------------------------------------------------------------------------- /practices/design-for-eventual-consistency.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/design-for-eventual-consistency.md -------------------------------------------------------------------------------- /practices/hold-environment-information-separately.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/hold-environment-information-separately.md -------------------------------------------------------------------------------- /practices/host-a-roundtable-discussion.md: -------------------------------------------------------------------------------- 1 | # Host A Roundtable Discussion 2 | 3 | Under Construction 4 | 5 | 14 | -------------------------------------------------------------------------------- /practices/host-crucial-conversation.md: -------------------------------------------------------------------------------- 1 | # Host a Crucial Conversation 2 | 3 | By hosting a crucial conversation, you're helping team members with differing (sometimes strong) opinions navigate high-stakes and sensitive discussions. You're also helping to strengthen relationships and build trust within your team. 4 | The key to hosting a crucial conversation is creating a space of psychological safety, where participants can speak openly and without fear of judgment or conflict. This opens the door to constructive feedback, civil conflict resolution, and collaborative decision-making where team members are working toward a common goal. 5 | For a team of software developers, a crucial conversation might center around deciding whether to refactor a legacy system or invest in building a new one. This discussion could involve balancing the technical debt, budget constraints, and the potential impact on delivery timelines. Another example might involve debating the adoption of test-driven development (TDD) as a standard practice, weighing its potential to improve code quality against concerns about increased development time. 6 | With the help of a host, these difficult discussions can be turned into opportunities for growth. 7 | 8 | ## Nuances 9 | 10 | This section outlines common pitfalls, challenges, or limitations teams commonly encounter when applying this practice. The goal here is not to discourage you. Rather, the goal is to arm you with the appropriate context so that you can make an informed decision about when and how to implement the practice with your teams. 11 | 12 | ### Fool's Choice 13 | 14 | The fool's choice arises when, during a difficult conversation, people think they must choose between being honest and preserving a relationship. They fear that speaking openly will cause harm or conflict. But the fool's choice is a false dilemma. It's also counterproductive, typically leading to silence or aggression and damaging trust. A _third option_ exists: addressing issues respectfully while maintaining the relationship. As host, remember this third option and focus on articulating shared goals and fostering a safe environment. 15 | 16 | ### You have to keep an open mind 17 | 18 | It's important to enter a crucial conversation with a genuine curiosity about the other person's perspective. 
Without this, participants limit their ability to truly understand the underlying issues and unique viewpoints. An open mind allows them to consider different angles, question assumptions, and explore the conversation more deeply. By encouraging participants to remain curious, you create a more constructive environment for dialogue, fostering empathy and collaboration. You're also helping your team find common ground and work toward a resolution that benefits all parties to some degree. 19 | 20 | ### Stay Focused 21 | 22 | During a crucial conversation, it can be challenging to maintain a single point of focus, especially if the discussion becomes uncomfortable. In these moments, individuals may bring up unrelated issues to divert attention from the primary topic, which can derail the conversation and lead to confusion. As host, employ _bookmarking_: consciously note unrelated issues for later discussion so they don’t distract from the conversation at hand. 23 | 24 | ### Full Consensus May Not Be Feasible 25 | 26 | A common misconception is that all decisions — especially those made during crucial conversations — should be made by consensus. But the belief that full agreement is always necessary can lead to compromises and delays. In urgent situations, or when decisions affect a larger group, full consensus may not be feasible. Here are some alternatives to consensus that offer quicker and more flexible decision-making: 27 | 28 | * Command: Decisions are made without involving others. 29 | * Consultation: Input is gathered from stakeholders and a smaller group makes the final decision. 30 | * Voting: Decisions are made based on a majority agreement. 31 | 32 | ## Gaining Traction 33 | 34 | The following actions will help your team implement this practice. 35 | 36 | ### [Lead Workshops](/practices/lead-workshops.md) 37 | 38 | #### Host a mock crucial conversation 39 | 40 | Hosting a mock crucial conversation involves your team role-playing a challenging conversation to practice managing emotions and communicating effectively. Begin by identifying a shared purpose. What is the focus of the conversation and what is the common goal that all participants are working toward? One group then simulates scenarios like giving feedback or resolving conflicts, while another group observes and critiques. Afterward, the entire group reflects on the experience to improve future real-life conversations. As host, it's helpful to use the Crucial Conversations [worksheet](https://irp-cdn.multiscreensite.com/25ad169b/files/uploaded/Crucial-Conversations-Worksheet.pdf) to guide the mock conversation, ensuring that key strategies and goals are addressed throughout the exercise. 41 | 42 | Hosting a mock crucial conversation will help your team build skills such as active listening, staying calm under pressure, and navigating sensitive or high-stakes issues. 43 | 44 | ### [Start a Book Club](/practices/start-a-book-club.md) 45 | 46 | #### [Crucial Conversations: Tools for Talking When Stakes are High](https://www.goodreads.com/book/show/15014.Crucial_Conversations) 47 | 48 | The authors of _Crucial Conversations_ teach you how to navigate high-stakes situations where emotions run high and opinions differ. They offer practical tools to handle tough conversations, communicate clearly, and achieve positive outcomes.
The techniques discussed in this book will help you quickly prepare for tough discussions, create a safe environment for open dialogue, be persuasive without being aggressive, and stay engaged even when others become defensive or silent. 49 | 50 | ### [Host A Viewing Party](/practices/host-a-viewing-party.md) 51 | 52 | #### [Mastering Crucial Conversations by Joseph Grenny](https://www.youtube.com/watch?v=uc3ARpccRwQ) 53 | 54 | In his talk "Mastering the Art of Crucial Conversations," Joseph Grenny (co-author of _Crucial Conversations_) outlines key strategies for navigating high-stakes discussions with confidence. He emphasizes the importance of creating a safe environment and offers techniques for staying calm under pressure, managing emotions, and keeping the conversation productive. He also stresses the need to focus on shared goals and mutual respect. 55 | 56 | ## Adjacent Capabilities 57 | This practice supports enhanced performance in the following capabilities. 58 | 59 | ### [Generative organizational culture](/capabilities/generative-organizational-culture.md) 60 | 61 | Hosting a crucial conversation supports the Generative Organizational Culture capability by promoting open communication, trust, and collaboration. It encourages team members to share ideas, challenge assumptions, and provide constructive feedback in a safe environment. This openness helps align teams, promotes continuous learning, and drives innovation — key elements of a generative culture. 62 | 63 | ### [Well-being](/capabilities/well-being.md) 64 | 65 | Hosting a crucial conversation supports the Well-being capability by providing a safe space to address issues and resolve conflicts constructively. This practice helps reduce stress and burnout, promotes a healthier work environment, and ensures team members feel heard and valued. The result is enhanced overall mental and emotional well-being. 
-------------------------------------------------------------------------------- /practices/implement-a-documentation-search-engine.md: -------------------------------------------------------------------------------- 1 | # Implement A Documentation Search Engine 2 | 3 | Under Construction 4 | 5 | -------------------------------------------------------------------------------- /practices/implement-actor-based-model.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-actor-based-model.md -------------------------------------------------------------------------------- /practices/implement-anti-entropy-patterns.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-anti-entropy-patterns.md -------------------------------------------------------------------------------- /practices/implement-bulkheads.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-bulkheads.md -------------------------------------------------------------------------------- /practices/implement-cascading-failure-mitigation-strategies.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-cascading-failure-mitigation-strategies.md -------------------------------------------------------------------------------- /practices/implement-circuit-breaker-pattern.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-circuit-breaker-pattern.md -------------------------------------------------------------------------------- /practices/implement-composable-design.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-composable-design.md -------------------------------------------------------------------------------- /practices/implement-distributed-tracing.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-distributed-tracing.md -------------------------------------------------------------------------------- /practices/implement-domain-driven-design.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-domain-driven-design.md -------------------------------------------------------------------------------- /practices/implement-elastic-systems.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-elastic-systems.md -------------------------------------------------------------------------------- 
/practices/implement-event-driven-architecture.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-event-driven-architecture.md -------------------------------------------------------------------------------- /practices/implement-feature-flags.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-feature-flags.md -------------------------------------------------------------------------------- /practices/implement-form-object-pattern.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-form-object-pattern.md -------------------------------------------------------------------------------- /practices/implement-graceful-degradation-and-fallbacks.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-graceful-degradation-and-fallbacks.md -------------------------------------------------------------------------------- /practices/implement-health-checks.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-health-checks.md -------------------------------------------------------------------------------- /practices/implement-load-balancing.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-load-balancing.md -------------------------------------------------------------------------------- /practices/implement-logging.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-logging.md -------------------------------------------------------------------------------- /practices/implement-message-driven-systems.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-message-driven-systems.md -------------------------------------------------------------------------------- /practices/implement-microservice-architecture.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-microservice-architecture.md -------------------------------------------------------------------------------- /practices/implement-monitoring-metrics.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-monitoring-metrics.md -------------------------------------------------------------------------------- /practices/implement-plugin-architecture.md: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-plugin-architecture.md -------------------------------------------------------------------------------- /practices/implement-repository-pattern.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-repository-pattern.md -------------------------------------------------------------------------------- /practices/implement-stability-patterns.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-stability-patterns.md -------------------------------------------------------------------------------- /practices/implement-timeouts-and-retries.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/implement-timeouts-and-retries.md -------------------------------------------------------------------------------- /practices/incremental-development.md: -------------------------------------------------------------------------------- 1 | # Incremental Development 2 | 3 | 4 | 5 | ## Nuance 6 | 7 | 8 | 9 | ## Introspective Questions 10 | 11 | 12 | 13 | ## How to Improve 14 | 15 | ### [Lead A Demonstration](/practices/lead-a-demonstration.md) 16 | 17 | ### [Run Pair Programming Sessions](/practices/run-pair-programming-sessions.md) 18 | 19 | ### [Lead Workshops](/practices/lead-workshops.md) 20 | 21 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 22 | 23 | ### [Start A Book Club](/practices/start-a-book-club.md) 24 | 25 | ### [Host A Viewing Party](/practices/host-a-viewing-party.md) 26 | 27 | ### [Do A Spike](/practices/do-a-spike.md) 28 | 29 | ### [Host A Retrospective](/practices/host-a-retrospective.md) 30 | 31 | ### [Talk Directly With Users](/practices/talk-directly-with-users.md) 32 | 33 | ### [Dogfood Your Systems](/practices/dogfood-your-systems.md) 34 | 35 | ### [Start A Community Of Practice](/practices/start-a-community-of-practice.md) 36 | 37 | ## Resources 38 | 39 | 40 | 41 | ## Related Practices 42 | 43 | 44 | 45 | ## Supporting Capabilities 46 | 47 | 48 | -------------------------------------------------------------------------------- /practices/lead-a-demonstration.md: -------------------------------------------------------------------------------- 1 | # Lead A Demonstration 2 | 3 | Leading a demonstration is an excellent way to engage an audience by live coding, showcasing customer feedback, or presenting on a specific topic. Demonstrations can vary in length, from brief lightning talks to more extended sessions, and can be conducted as one-off events or as part of a regular series. This format is particularly effective for introducing new topics or ideas, as it allows the presenter to provide a clear and focused explanation. Including a Q&A session at the end or during the demonstration can enhance understanding and provide immediate clarification. However, this approach is less effective for spurring in-depth discussions, as the primary focus is on delivering content rather than fostering interactive dialogue. 
4 | 5 | ## Nuance 6 | 7 | ### Don't Cover Too Much Ground 8 | 9 | Stay focused on one or two main points to avoid overwhelming your audience. Providing a deep dive into specific topics makes your content more digestible and allows for meaningful audience interaction. 10 | 11 | ### Anyone Can Do It 12 | 13 | Anyone, regardless of experience level, can lead a demonstration. Preparation and passion for the subject are key. Sharing your unique perspective can create a meaningful and impactful presentation. Encourage all developers to lead presentations. 14 | 15 | ### Not Great For Tacit Knowledge Transfer 16 | 17 | Demonstrations are excellent for explicit knowledge but less effective for tacit knowledge, which requires interactive and collaborative approaches like workshops or mentoring for comprehensive understanding and skill development. 18 | 19 | ### Great For Stakeholders And Collaborators Alike 20 | 21 | An often overlooked aspect of building great software is building trust with your stakeholders and collaborators. Leading demonstrations can do wonders towards that end. 22 | 23 | ## Introspective Questions 24 | 25 | - Do I have any context or points of view that others don't have? 26 | - How can I craft a demonstration to be thought-provoking and engaging? 27 | - Are there little touches I can add to the experience for the attendees? 28 | 29 | ## Resources 30 | 31 | ### [A Demo Is A Performance](https://blog.squirrelington.ninja/blog/a-demo-is-a-performance/) 32 | 33 | Treat software demos like theatrical performances, focusing on presentation and engagement. 34 | Preparation is crucial; rehearse, test equipment, and create a welcoming environment. 35 | Tailor the demo to the stakeholders’ preferences and comfort. 36 | Be intentional with every aspect, including appearance, setting, and first impressions. 37 | Gather feedback and refine future demos for better effectiveness and stakeholder satisfaction. 38 | 39 | ## Related Practices 40 | 41 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 42 | 43 | If your Q&A session runs long, you may want to schedule a roundtable discussion on the topics your demonstration covered. That way attendees get more chances to play around with the ideas in their heads. 44 | 45 | ### [Lead Workshops](/practices/lead-workshops.md) 46 | 47 | It may be a good idea to cover your topic with a workshop following a presentation or demonstration. Doing so will increase the chances that your ideas will be applied by the people you're trying to impart knowledge to. 48 | 49 | ### [Host A Viewing Party](/practices/host-a-viewing-party.md) 50 | 51 | If you or someone else has already recorded a demonstration that adequately covers the topics you wish to share, you can host a viewing party instead of giving the presentation. 52 | 53 | ### [Start A Community Of Practice](/practices/start-a-community-of-practice.md) 54 | 55 | A relevant Community of Practice, whether you find an existing one or start your own, can be a great venue for leading a demonstration. The audience has already self-selected to join with the hope of learning. 56 | 57 | ## Supporting Capabilities 58 | 59 | ### [Learning Culture](/capabilities/learning-culture.md) 60 | 61 | Organizations that host demonstrations and presentations tend to value learning as a part of their culture.
62 | -------------------------------------------------------------------------------- /practices/optimize-data-structures.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/optimize-data-structures.md -------------------------------------------------------------------------------- /practices/perform-static-code-analysis.md: -------------------------------------------------------------------------------- 1 | # Perform Static Code Analysis 2 | 3 | Performing static code analysis involves using automated tools to review and scan the codebase for potential issues, ensuring adherence to quality standards and best practices. 4 | These tools help detect issues early in development, integrating with version control systems, IDEs, and CI/CD pipelines to enhance productivity. 5 | Static code analysis is valuable for spotting code smells, basic security vulnerabilities, performance bottlenecks, and analyzing dependencies for better modularity. 6 | 7 | ## Nuance 8 | 9 | ### Common Misconceptions about Static Code Analysis 10 | 11 | A common misconception is that static code analysis can catch all possible issues in a codebase. 12 | While these tools are powerful for identifying code smells, basic security vulnerabilities, and performance bottlenecks, they are not foolproof. 13 | They may miss more nuanced or context-specific problems, and sometimes flag good code as problematic. 14 | Developers should not solely rely on these tools but use them as part of a broader quality assurance strategy. 15 | 16 | ### Importance of Developer Judgment 17 | 18 | While static code analysis tools are helpful, they should not replace developer judgment. 19 | These tools can highlight potential issues, but it is up to the developers to make the final call on whether a flagged issue is truly problematic. 20 | Blindly following the tool's recommendations can lead to unnecessary code changes and reduce overall productivity. 21 | The ability to override automated checks ensures that the development process remains flexible and pragmatic. 22 | 23 | ### Impact on Code Reviews 24 | 25 | Relying too heavily on static code analysis might lead to a reduction in code reviews. 26 | Automated tools should complement, not replace, human reviews, which are essential for catching context-specific issues and providing valuable feedback on code design and architecture. 27 | Ensuring that manual code reviews remain a part of the development process is vital for maintaining high code quality. 28 | 29 | ## How to Improve 30 | 31 | ### [Do A Spike](/practices/do-a-spike.md) 32 | 33 | #### Tool Selection and Initial Setup 34 | 35 | Identify and set up a static code analysis tool that fits your team's needs. 36 | Research various static code analysis tools, such as SonarQube or CodeClimate, and compare their features. 37 | Select one or two tools that seem promising and run them on a small project or segment of your codebase. 38 | Integrate the chosen tool with your version control system and IDE. 39 | Review the initial set of issues identified to understand the tool's strengths and weaknesses, and determine which tool aligns best with your workflow. 40 | 41 | ### [Lead Workshops](/practices/lead-workshops.md) 42 | 43 | #### Dependency and Modularity Analysis 44 | 45 | Use static code analysis tools to evaluate and improve module dependencies. 
46 | Run a dependency analysis on your current codebase and document areas with high coupling and poor cohesion. 47 | Based on the analysis, refactor parts of the codebase to improve modularity. 48 | Run the dependency analysis again to measure improvements. 49 | 50 | ### [Start A Book Club](/practices/start-a-book-club.md) 51 | 52 | #### [Automate Your Coding Standard](https://github.com/97-things/97-things-every-programmer-should-know/tree/master/en/thing_04) 53 | 54 | This resource provides insights into the importance of automating coding standards to maintain code quality and consistency. 55 | It highlights how automated tools can help enforce coding conventions, making the codebase more manageable and the development process more efficient. 56 | 57 | #### [Design structure matrix](https://en.wikipedia.org/wiki/Design_structure_matrix) 58 | 59 | The Design Structure Matrix (DSM) is a visual tool used in systems engineering and project management to represent the interactions and dependencies within complex systems or processes in a compact, square matrix format. 60 | Originating in the 1960s, DSMs gained popularity in the 1990s across various industries and government agencies. 61 | They can model both static systems, where elements coexist simultaneously, and time-based systems, which reflect processes over time. 62 | DSMs are advantageous for highlighting patterns, managing changes, and optimizing system structures. 63 | They utilize algorithms for reordering elements to minimize feedback loops and can be extended to multiple domain matrices to visualize interactions across different domains, enhancing information flow and office work optimization. 64 | 65 | #### [Two Wrongs Can Make a Right (and Are Difficult to Fix)](https://github.com/97-things/97-things-every-programmer-should-know/tree/master/en/thing_86) 66 | 67 | The article "Two Wrongs Can Make a Right (and Are Difficult to Fix)" by Allan Kelly highlights the complex nature of software bugs, particularly when two defects interact to create a single visible fault. This interplay can lead developers to repeatedly attempt fixes that fail because they only address part of the problem. Such scenarios demonstrate the importance of comprehensive error detection and resolution strategies. This concept supports the Perform Static Code Analysis Practice by underscoring the limitations of relying solely on automated tools to catch all issues. While static code analysis can identify many potential problems, it may miss nuanced or context-specific defects, especially those involving multiple interacting errors. 68 | 69 | #### [The power of feedback loops](https://lucamezzalira.medium.com/the-power-of-feedback-loops-f8e27e8ac25f) 70 | 71 | Luca Mezzalira's article 'The Power of Feedback Loops' underscores how iterative feedback enhances processes, resonating with the practice of Perform Static Code Analysis. 72 | Like feedback loops in development cycles, static code analysis tools automate early detection of issues such as code smells and security vulnerabilities, aligning with Mezzalira's advocacy for leveraging feedback to maintain high standards while emphasizing the need for developer judgment and human oversight in software quality assurance. 
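To ground these resources in something runnable, the short script below implements one of the simplest static checks imaginable, flagging overly long functions, using only Python's standard `ast` module. It is a teaching sketch rather than a substitute for tools like SonarQube or CodeClimate, and the 30-line threshold is an arbitrary assumption chosen only to show how such rules get encoded.

```python
"""Toy static analysis check: report functions longer than a threshold."""
import ast
import sys

MAX_FUNCTION_LENGTH = 30  # arbitrary threshold, for illustration only


def find_long_functions(source: str, max_len: int = MAX_FUNCTION_LENGTH):
    """Yield (name, line, length) for each function longer than max_len lines."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            if length > max_len:
                yield node.name, node.lineno, length


if __name__ == "__main__":
    exit_code = 0
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as handle:
            for name, line, length in find_long_functions(handle.read()):
                print(f"{path}:{line}: function '{name}' is {length} lines long")
                exit_code = 1
    sys.exit(exit_code)
```

Wired into a CI pipeline, a check like this fails the build whenever the rule is violated, which is the same feedback loop the analyzers discussed above provide at far greater depth.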
73 | 74 | ### [Host A Viewing Party](/practices/host-a-viewing-party.md) 75 | 76 | #### [System architecture as network data](https://vimeo.com/241241654) 77 | 78 | The speaker emphasizes the importance of loose coupling and high cohesion in software architecture to reduce dependencies between modules, thereby minimizing meetings and coordination overhead. 79 | They demonstrate how to use tools like Line Topology, Cytoscape, and Jupyter Notebooks to analyze and visualize code dependencies, enabling automated detection of modularity and cohesion in the system. 80 | By using network science and computational techniques, the speaker argues for the value of objective metrics in assessing and improving code modularity, drawing parallels to social networks and using examples like Game of Thrones character interactions to illustrate their points. 81 | 82 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 83 | 84 | #### Understanding and Usage 85 | 86 | * How well do we understand the capabilities and limitations of our static code analysis tools? 87 | * Are we using static code analysis tools to their full potential within our development process? 88 | 89 | #### Integration and Workflow 90 | 91 | * How are our static code analysis tools integrated with our version control systems, IDEs, and CI/CD pipelines? 92 | * Are there any bottlenecks or disruptions caused by static code analysis tools in our current workflow? 93 | 94 | #### Developer Judgment 95 | 96 | * Do our developers feel empowered to override automated checks when necessary? 97 | * How often do we find that flagged issues are false positives, and how do we handle them? 98 | 99 | #### Issue Detection and Resolution 100 | 101 | * Are we addressing the issues identified by static code analysis tools promptly and effectively? 102 | * How frequently do we encounter issues that static code analysis tools miss, and how can we improve our detection methods? 103 | 104 | #### Dependency Analysis 105 | 106 | * How effectively are we using static code analysis tools to assess and improve module cohesion and dependency management? 107 | * Are there areas in our codebase with poor modularity that these tools have helped us identify and improve? 108 | 109 | ## Supporting Capabilities 110 | 111 | ### [Code Maintainability](/capabilities/code-maintainability.md) 112 | 113 | The Perform Static Code Analysis practice robustly supports the Code Maintainability Dora Capability by providing automated tools that enhance code quality, consistency, and readability. 114 | These tools meticulously scan the codebase to identify potential issues such as code smells, security vulnerabilities, and performance bottlenecks early in the development process. 115 | By integrating static code analysis into version control systems, IDEs, and CI/CD pipelines, teams can receive immediate feedback on code changes, ensuring adherence to coding standards and best practices. This proactive approach reduces the cognitive load on developers, allowing them to focus on more complex tasks while maintaining a clean, modular, and easily comprehensible codebase. 
-------------------------------------------------------------------------------- /practices/plan-capacity.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/plan-capacity.md -------------------------------------------------------------------------------- /practices/prioritize-design-separation.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/prioritize-design-separation.md -------------------------------------------------------------------------------- /practices/provide-dev-coaching.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/provide-dev-coaching.md -------------------------------------------------------------------------------- /practices/pursue-continuous-personal-development.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/pursue-continuous-personal-development.md -------------------------------------------------------------------------------- /practices/reduce-coupling-between-abstractions.md: -------------------------------------------------------------------------------- 1 | # Reduce Coupling Between Abstractions 2 | 3 | Reducing coupling between abstractions means designing software in a way that its parts work independently and don't rely too much on each other. 4 | This involves hiding the complex details inside each part, so changes in one part don't affect others. It encourages creating small, focused modules that talk to each other through simple and clear interfaces. 5 | Reducing coupling ultimately makes the system easier to understand, fix, and expand, resulting in software that's reliable and flexible. 6 | 7 | ## Nuances 8 | 9 | This section outlines common pitfalls, challenges, or limitations teams commonly encounter when applying this practice. The goal here is not to discourage you. Rather, the goal is to arm you with the appropriate context so that you can make an informed decision about when and how to implement the practice with your teams. 10 | 11 | ### Over-Engineering 12 | 13 | While reducing coupling is beneficial, overdoing it can lead to over-engineering. 14 | Creating too many tiny, isolated components can make the system overly complex and difficult to manage. 15 | Developers should strive to simplify designs and only apply abstraction where it adds clear value, avoiding unnecessary layers of indirection. 16 | 17 | ### Misinterpreting Interface Usage 18 | 19 | Interfaces are a powerful tool for reducing coupling, but they can be misused. 20 | A common misconception is that every class needs an interface, leading to interface proliferation without any real benefit. 21 | Interfaces should be used judiciously, primarily where they provide flexibility for future changes or multiple implementations. 22 | 23 | ### Impact on Collaboration and Onboarding 24 | 25 | Reducing coupling can make the system more modular and understandable, but if the abstractions are not well-documented, new team members might struggle to understand the design. 
26 | Effective communication and comprehensive documentation are essential. 27 | 28 | ### Radical Implementation vs. Incremental 29 | 30 | Attempting to reduce coupling throughout an existing codebase *all at once* can be overwhelming and risky. 31 | Instead, it's often more practical to implement these changes incrementally. 32 | Start with the most problematic areas, gradually refactoring and decoupling components. This helps manage risk and maintain system stability. 33 | 34 | ### Recognizing Natural Coupling 35 | 36 | Not all coupling is bad; some level of dependency is natural and necessary. 37 | Recognizing and accepting necessary coupling helps avoid futile efforts to decouple what should inherently be connected. 38 | 39 | ### Design for Needed, Not Speculative, Features 40 | 41 | Reducing coupling prepares the codebase for future changes, but it’s important to avoid premature optimization. 42 | The YAGNI (You Aren't Gonna Need It) principle warns against adding complexity for features that might never be needed. 43 | Focus on the current requirements and only introduce abstractions when there's a clear, present need. This will help you avoid speculative design. 44 | 45 | ## Gaining Traction 46 | 47 | The following actions will help your team implement this practice. 48 | 49 | ### [Host a Viewing Party](/practices/host-a-viewing-party.md) 50 | 51 | #### [Boundaries by Gary Bernhardt](https://www.destroyallsoftware.com/talks/boundaries) 52 | 53 | This talk explores the intricate dynamics between code boundaries and system architecture, illustrating how to create clean and maintainable code through effective separation of concerns. In particular, Gary introduces a way to use values as the boundaries between abstractions. 54 | 55 | ### [Start a Book Club](/practices/start-a-book-club.md) 56 | 57 | #### [Clean Architecture by Robert C. Martin](https://www.goodreads.com/book/show/18043011-clean-architecture) 58 | 59 | This book delves into principles and practices that ensure code remains clean, emphasizing the importance of separation of concerns and the decoupling of systems for better manageability. 60 | 61 | #### [Working Effectively with Legacy Code by Michael C. Feathers](https://www.goodreads.com/book/show/44919.Working_Effectively_with_Legacy_Code) 62 | 63 | This book discusses how to find seams, add automated test coverage, and refactor the system to be more simple. 64 | 65 | #### [Refactoring by Martin Fowler](https://www.goodreads.com/en/book/show/44936.Refactoring) 66 | 67 | This is similar to Feathers's book above, but it covers the content from a first-principles standpoint. 68 | 69 | ### [Facilitate a Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 70 | 71 | Below are suggestions for topics and prompts you could explore with your team during a roundtable discussion. 72 | 73 | #### Understanding Dependencies 74 | 75 | * How tightly coupled are our current modules and components? 76 | * What are the most common pain points we encounter due to high coupling in our codebase? 77 | * How often do changes in one part of the system require changes in other parts? 78 | 79 | #### Evaluating Interfaces 80 | 81 | * Are we using interfaces effectively to reduce coupling, or are they adding unnecessary complexity? 82 | * How many interfaces in our codebase have only one implementation? 83 | * Do we have a clear understanding of when and why we should introduce new interfaces? 
84 | 85 | #### Knowledge Sharing 86 | 87 | * How effectively do we share knowledge about our design decisions and abstractions within the team? 88 | * Are there ways to improve collaboration and onboarding through better documentation and communication? 89 | 90 | #### Gradual Refactoring 91 | 92 | * Have we identified the most problematic areas of coupling in our codebase? 93 | * What small, incremental changes can we make to start reducing coupling in these areas? 94 | * How do we ensure system stability while refactoring to reduce coupling? 95 | 96 | ### [Do a Spike, or Timeboxed Experiment](/practices/do-a-spike.md) 97 | 98 | * **Refactor**: Set some time aside to refactor a key component or set of components to reduce coupling. Present your findings to the team to see whether committing those changes or making additional changes has a good potential return on investment. 99 | * **Audit Your Dependencies**: Use a [dependency analysis tool](https://markgacoka.medium.com/how-to-visualize-your-codebase-7c4c4d948141) to visualize the relationships between modules and components, and to identify highly coupled areas. Discuss why these dependencies exist. 100 | 101 | ## Adjacent Capability 102 | 103 | This practice supports enhanced performance in the following capability. 104 | 105 | ### [Code Maintainability](/capabilities/code-maintainability.md) 106 | 107 | Reducing coupling between abstractions enhances the Code Maintainability capability by creating a modular and flexible codebase. 108 | Independent, well-defined components minimize unintended side effects, making the code easier to understand, modify, and test. 109 | This modularity ensures that changes in one part of the system do not disrupt others, preserving stability and reducing cognitive load on developers. 110 | Clear abstractions and minimal dependencies support better documentation and collaboration, which in turn facilitate efficient onboarding and continuous improvement. 111 | -------------------------------------------------------------------------------- /practices/refactor.md: -------------------------------------------------------------------------------- 1 | # Refactor 2 | 3 | Refactoring is a disciplined technique in software development that involves restructuring existing code without changing its external behavior. 4 | The goal is to improve code readability, reduce complexity, and enhance maintainability. 5 | Refactoring should be done continuously and in small increments to minimize errors and avoid being overwhelmed by large-scale changes or causing conflicts with team members who might be working on the same parts of the code. 6 | Changes should be made through a series of small transformations, applied from the least to the most disruptive. This practice maintains the code's capacity to change and adapt effectively over time. 7 | 8 | ## Nuance 9 | 10 | ### Common Misconceptions 11 | 12 | A common misconception is that refactoring involves adding new features or functionality, or improving the user experience (UX). 13 | In reality, refactoring is solely about improving the structure of existing code without altering its external behavior. 14 | Its focus is on enhancing the code’s readability, maintainability, and performance, not changing its output. 15 | 16 | ### Automated Testing is Crucial 17 | 18 | Refactoring should be supported by a robust suite of automated tests. 19 | These tests ensure that the changes made during refactoring do not introduce new bugs.
20 | Without adequate testing, it becomes difficult to guarantee that the code’s behavior remains consistent after modifications. 21 | 22 | ### The Risks of Too Much in One Go 23 | 24 | Attempting too much in a single refactor can increase complexity, introduce errors, and slow progress. 25 | It's important to break tasks into smaller, manageable units to maintain clarity, reduce errors, and enhance the codebase incrementally. 26 | Doing too much at once can be overwhelming and create conflicts with other team members. 27 | 28 | ### Prolonged Gaps Between Refactors 29 | 30 | Allowing too much time to elapse between refactoring sessions can lead to numerous challenges. 31 | Codebases left unattended for extended periods often accumulate technical debt, making subsequent refactors more daunting and time-consuming. 32 | To mitigate this, it's important to adopt a strategy of frequent, incremental refactoring rather than waiting for issues to pile up. 33 | By addressing small improvements continuously, developers can maintain code quality, reduce the accumulation of technical debt, and ensure the codebase remains easy to work with over time. 34 | 35 | ### Refactoring Legacy Code Requires Careful Planning 36 | 37 | Refactoring legacy code requires careful planning due to its complexity and potential lack of tests. 38 | An incremental approach is essential, making small, manageable changes to avoid introducing new errors. 39 | Focus on identifying and isolating specific areas that need improvement, and implement testing around these areas to ensure behavior remains consistent. 40 | Prioritize high-impact areas to maximize the benefits of refactoring efforts. 41 | Gradually integrate changes to avoid overwhelming the system and facilitate smooth transitions, ensuring the codebase improves and remains stable over time. 42 | 43 | ### Monitoring and Roll-back 44 | 45 | Monitoring and roll-back infrastructure is important in managing regressions during refactoring. 46 | Monitoring tools provide real-time feedback on application performance and behavior, enabling quick detection of issues introduced by refactoring. 47 | When monitoring is combined with a robust roll-back system, teams can revert changes if regressions occur, minimizing disruption to users. 48 | This dual approach ensures the application remains stable and consistent, maintaining the integrity of the codebase while allowing continuous improvement. 49 | 50 | ### Trunk-based Development 51 | 52 | Trunk-based development, with its focus on continuous integration, is highly conducive to continuous refactoring. 53 | By committing changes directly to the mainline, developers can refactor incrementally, ensuring ongoing improvement. 54 | This approach promotes a culture of collaboration, as changes are immediately visible to the entire team. 55 | The fast feedback loop allows for quick identification and resolution of any issues. 56 | 57 | ## How to Improve 58 | 59 | ### [Lead A Demonstration](/practices/lead-a-demonstration.md) 60 | 61 | ### [Lead Workshops](/practices/lead-workshops.md) 62 | 63 | #### Incremental and Frequent Refactoring 64 | 65 | Dedicate part of a sprint to refactoring, where the team focuses on incremental improvements. 66 | Break down the codebase into small, manageable units and assign each unit to team members. 67 | Use automated tests to validate each change. 68 | 69 | #### Refactoring Legacy Code Workshop 70 | 71 | Organize a workshop focused on refactoring a specific section of legacy code.
72 | Start by identifying high-impact areas that require improvement. 73 | Develop a plan to implement small, incremental changes and establish a set of tests to ensure behavior remains consistent. 74 | Execute the refactoring plan, closely monitoring for any issues. 75 | This exercise will highlight the challenges and strategies involved in refactoring legacy code and emphasize the importance of careful planning and incremental progress. 76 | 77 | ## Supporting Capabilities 78 | 79 | ### [Code Maintainability](/capabilities/code-maintainability.md) 80 | 81 | Refactoring, as a practice, significantly supports the Code Maintainability Capability by systematically improving code structure, readability, and quality. Through incremental changes and adherence to coding standards, it addresses complexity and technical debt, ensuring the code remains clean, modular, and comprehensible. By integrating refactoring into regular development cycles, teams establish a foundation of maintainable code, enabling efficient delivery, stability, and ongoing innovation. 82 | 83 | ### [Version Control](/capabilities/version-control.md) 84 | 85 | Version control and refactoring in software development go hand in hand. 86 | Version control tracks code changes over time, facilitating collaboration and reversibility. 87 | Refactoring improves code structure without altering behavior. 88 | Together, they enable teams to systematically enhance code quality, with version control tracking and integrating improvements with confidence. 89 | 90 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 91 | 92 | #### Understanding Refactoring 93 | 94 | * How frequently does your team engage in refactoring activities? 95 | * Are team members clear on the distinction between refactoring and adding new features/functionality? 96 | * How do team members understand the importance of refactoring in maintaining code quality? 97 | 98 | #### Testing and Refactoring 99 | 100 | * Have there been instances where inadequate testing led to unexpected behavior post-refactoring? 101 | * Are there any gaps in testing coverage that could hinder refactoring initiatives? 102 | 103 | #### Managing Refactoring Tasks 104 | 105 | * How does your team manage the size and scope of refactoring tasks? 106 | * Have there been instances of attempting too much in a single refactor, resulting in complications? 107 | * How do you prioritize refactoring tasks to ensure the most critical areas are addressed first? 108 | 109 | #### Monitoring and Roll-back 110 | 111 | * How quickly can your team detect issues introduced by refactoring through your monitoring tools? 112 | * What improvements can be made to your monitoring and roll-back systems to better support refactoring efforts? 113 | * How do you track and analyze data from monitoring tools to prevent similar issues in future refactoring efforts? 114 | 115 | #### Frequency and Timing 116 | 117 | * What is the typical interval between refactoring sessions in your organization? 118 | * Have you observed any challenges stemming from prolonged gaps between refactors? 119 | * How can you encourage a culture of frequent, incremental refactoring to mitigate technical debt accumulation? 120 | 121 | #### Legacy Code Refactoring 122 | 123 | * How do you approach refactoring efforts in legacy codebases? 124 | * What strategies do you employ to ensure careful planning and incremental changes in legacy systems? 
125 | * Are there specific techniques or tools your team utilizes to identify high-impact areas for refactoring in legacy code? 126 | -------------------------------------------------------------------------------- /practices/reuse-code-mindfully.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/reuse-code-mindfully.md -------------------------------------------------------------------------------- /practices/run-automated-tests-in-ci-pipeline.md: -------------------------------------------------------------------------------- 1 | # Run Automated Tests In An Integration/Deployment Pipeline 2 | 3 | ## Key Points 4 | 5 | * Benefits of Running Tests in CI Pipeline 6 | * Early detection of defects 7 | * Reduced integration problems 8 | * Improved code quality and reliability 9 | * Faster feedback loops 10 | * Enhanced collaboration among team members 11 | * Types of Tests (*note: instead of going into detail about each one, link to the appropriate practice and explain when in the pipeline each type of test should run; for example, you may not want to run all of these tests for every single type of build*) 12 | * [Unit tests](/practices/implement-unit-tests.md) 13 | * [Integration tests](/practices/implement-integration-tests.md) 14 | * [End-to-end tests](/practices/implement-end-to-end-tests.md) 15 | * [Performance tests](/practices/implement-performance-tests.md) 16 | * Best Practices for Running Tests in CI Pipeline 17 | * Prioritize fast-running tests 18 | * Parallelize test execution 19 | * Maintain a clean test environment 20 | * Containerization to ensure correct dependencies 21 | * Database Management 22 | * Mock external dependencies 23 | * Ensure test data consistency 24 | * Regularly review and update tests 25 | * Challenges and Solutions 26 | * Flaky tests and how to handle them 27 | * Managing long-running tests 28 | * Ensuring test coverage and avoiding test duplication 29 | * Scaling tests with project growth 30 | 31 | 32 | -------------------------------------------------------------------------------- /practices/run-daily-standups.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/run-daily-standups.md -------------------------------------------------------------------------------- /practices/scan-vulnerabilities.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/scan-vulnerabilities.md -------------------------------------------------------------------------------- /practices/schedule-regular-documentation-audits.md: -------------------------------------------------------------------------------- 1 | # Schedule Regular Documentation Audits 2 | 3 | Under Construction 4 | 5 | -------------------------------------------------------------------------------- /practices/segregate-sensitive-and-insensitive-data.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/segregate-sensitive-and-insensitive-data.md -------------------------------------------------------------------------------- /practices/separate-config-from-code.md:
-------------------------------------------------------------------------------- 1 | # Separate Config from Code 2 | 3 | Separating configuration from code is crucial for maintaining secure and flexible systems. Extracting configurable values makes systems adaptable, enabling easy adjustments without modifying the codebase. Sensitive information like passwords or API keys should be isolated to limit access to highly trusted team members. 4 | 5 | ## Nuance 6 | 7 | ### Configuration Storage 8 | Store sensitive configurations in secure, encrypted repositories or vaults, enforce strict access controls, and conduct regular security audits. When evaluating storage options, favor ones that support versioning configuration changes so you retain the ability to restore previous "known to work" configuration values. 9 | 10 | ### Deployment Complexity 11 | Adopting external configuration management introduces complexity in selecting and implementing the right tools and processes. Teams must navigate through options, considering factors such as integration, security, and scalability, to find a balance between the benefits of externalized configurations and the added complexity of managing them effectively. 12 | 13 | ### Environment Parity 14 | Use the same environment variables, configuration files, and services to ensure uniformity from development through production, thereby reducing deployment errors and operational discrepancies. Obviously, the configured values will differ from environment to environment. The key consideration here is to maintain an identical "configuration schema," so to speak. 15 | 16 | ### Allow Local Overrides 17 | Allow local overrides of configuration values and provide developers with a blueprint to create their own local configuration files. For instance, a `.env.example` file might include placeholders for environment variables that need to be set but without providing any real keys or passwords. This keeps sensitive data out of application version control, without constraining developer productivity. 18 | 19 | ## How to Improve 20 | 21 | ### [Lead Workshops](/practices/lead-workshops.md) 22 | 23 | #### Review and Identify Configuration in Version Control 24 | 25 | Audit your current repositories to identify instances of configuration or sensitive data stored within version control. Document the types of data found and evaluate the potential risks associated with their exposure. 26 | 27 | #### Configuration Change Management Simulation 28 | 29 | Simulate a process for managing changes to configuration data that involves multiple environments. Include steps for reviewing, approving, and applying configuration changes. Assess the impact on deployment times, security, and team collaboration. 30 | 31 | ### [Do A Spike](/practices/do-a-spike.md) 32 | 33 | #### Implement Environment-Specific Configuration Files 34 | 35 | Create separate configuration files for different environments (development, staging, production). Ensure the schema of each file is the same. Experiment with mechanisms to securely inject these configurations during deployment or runtime. 36 | 37 | #### Secure Configuration Storage Evaluation 38 | 39 | Explore and integrate a secure configuration management solution, such as HashiCorp Vault or AWS Secrets Manager. Evaluate the effectiveness of this solution in improving security and flexibility compared to storing sensitive data in version control.
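To make the spikes above more concrete, here is a minimal Python sketch of what a shared configuration schema might look like. The variable names and the `.env.example` contents are illustrative assumptions, not prescribed values; the point is that every environment reads the same keys and only the values differ.

```python
import os
from dataclasses import dataclass

# Hypothetical .env.example committed to the repository (placeholders only, no real secrets):
#
#   DATABASE_URL=postgres://user:password@localhost:5432/app_dev
#   API_KEY=replace-me
#   LOG_LEVEL=debug


@dataclass(frozen=True)
class Settings:
    database_url: str
    api_key: str
    log_level: str


def load_settings() -> Settings:
    """Build settings from environment variables, failing fast if a required one is missing."""
    return Settings(
        database_url=os.environ["DATABASE_URL"],        # required in every environment
        api_key=os.environ["API_KEY"],                  # injected by deployment tooling, never committed
        log_level=os.environ.get("LOG_LEVEL", "info"),  # optional, with a sensible default
    )
```

Because the schema is identical everywhere, a missing key fails loudly in development long before it can fail quietly in production.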
40 | 41 | ### [Start A Book Club](/practices/start-a-book-club.md) 42 | 43 | #### [The Twelve-Factor App - Config](https://12factor.net/config) 44 | This section of the Twelve-Factor App methodology emphasizes the importance of separating configuration from code. It advocates for storing config in the environment to improve security and adaptability across various deployment environments, offering foundational insights for efficient configuration management. 45 | 46 | #### [97 Things Every Programmer Should Know - Store Configurations in the Environment](https://github.com/97-things/97-things-every-programmer-should-know/tree/master/en/thing_61) 47 | A concise guide that underscores the significance of externalizing configuration, highlighting how this practice enhances application security, simplifies deployment, and supports scalability. It provides actionable advice for developers to implement this best practice effectively. 48 | 49 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 50 | 51 | #### Protecting Sensitive Configuration 52 | 53 | Have we implemented robust security measures for our configuration data? 54 | Are encryption and access controls in place to prevent unauthorized access and ensure compliance with security regulations? 55 | 56 | #### Managing Configuration Across Environments 57 | 58 | Do we have a consistent approach for managing configurations across different environments? 59 | How do we ensure that our deployment processes are seamless and that configurations do not lead to errors or discrepancies in various environments? 60 | 61 | #### Handling Configuration Changes and Versioning 62 | 63 | What processes do we have in place for managing changes to configuration? 64 | How do we track and version configuration changes to ensure that our application remains stable with each update? 65 | 66 | #### Balancing Flexibility with Complexity 67 | 68 | In our efforts to externalize configuration, have we introduced unnecessary complexity into our deployment and operational processes? 69 | How do we strike a balance between the flexibility of externalized configurations and the simplicity of our overall system architecture? 70 | 71 | ## Supporting Capabilities 72 | 73 | ### [Version Control](/capabilities/version-control.md) 74 | By advocating for the exclusion of configuration and sensitive data from version control, this practice improves the Version Control Capability by defining the exceptions where storing information in application source control is not desirable. 75 | 76 | ### [Continuous Integration](https://dora.dev/devops-capabilities/technical/continuous-integration) 77 | Separate Config from Code facilitates more efficient and secure continuous integration (CI) processes. It allows for seamless integration of code changes by ensuring that environment-specific configurations do not interfere with the build process, thereby enhancing the reliability and speed of CI cycles. 78 | 79 | ### [Deployment Automation](https://dora.dev/devops-capabilities/technical/deployment-automation) 80 | This practice necessitates sophisticated deployment automation that can manage and inject external configurations at deployment time. By separating configuration from the codebase, deployment automation becomes a critical capability for applying different configurations across environments automatically, thus supporting scalable and repeatable deployments.
81 | 82 | ### [Monitoring and Observability](https://dora.dev/devops-capabilities/technical/monitoring-and-observability) 83 | While not directly related to monitoring and observability, this practice indirectly supports these capabilities by promoting cleaner and more manageable codebases. By keeping configuration data separate, it simplifies the application's operational landscape, making it easier to monitor and observe its behavior across different environments. 84 | -------------------------------------------------------------------------------- /practices/separate-credentials-from-code.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/separate-credentials-from-code.md -------------------------------------------------------------------------------- /practices/share-knowledge.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/share-knowledge.md -------------------------------------------------------------------------------- /practices/test-for-fault-tolerance.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/test-for-fault-tolerance.md -------------------------------------------------------------------------------- /practices/understand-your-system-requirements.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/understand-your-system-requirements.md -------------------------------------------------------------------------------- /practices/use-documentation-auto-generation-tooling.md: -------------------------------------------------------------------------------- 1 | # Use Documentation Auto-Generation Tooling 2 | 3 | Under Construction 4 | 5 | -------------------------------------------------------------------------------- /practices/use-spin-to-unearth-problems-and-solutions.md: -------------------------------------------------------------------------------- 1 | # Use SPIN To Unearth Problems and Solutions 2 | 3 | SPIN Selling is a practice that focuses on understanding a person's needs through four types of questions: Situation, Problem, Implication, and Need-Payoff. By identifying and addressing someone's pain points and the consequences of inaction, the question asker guides the person to recognize the value of the solution being offered. Below is a more detailed breakdown of each category of question: 4 | 5 | * Situation: questions that collect facts, information, and background data about the existing situation. 6 | * Problem: questions that probe for problems, difficulties, or dissatisfaction. 7 | * Implication: questions that develop the seriousness of an implied need and increase the size of the problem. 8 | * Need-Payoff: questions that build up the value or usefulness of the solution. 9 | 10 | Software professionals can use SPIN to generate buy-in on a new idea. Buy-in tends to follow when there is a high level of agreement that the problems, and their implications, really exist. 11 | 12 | Let's say there is a team struggling to get timely code reviews from a centralized reviewer.
Someone could use the SPIN framework with the centralized reviewer to establish the importance of solving this problem. See the example conversation below: 13 | 14 | ```text 15 | Situation Q: How many pull requests do you review each day? 16 | A: It depends, but usually somewhere between 6 and 10. 17 | 18 | Problem Q: Is it hard to juggle that many pull requests every day? How much of your feedback is repetitive? 19 | A: Yeah, the context switching is difficult and a large percentage of the feedback I provide winds up covering a similar selection of topics. 20 | 21 | Implication Q: What are you unable to do now because of the effort required to stay on top of these reviews? 22 | A: I'm so behind on doing actual programming. Right now, it's taking me twice as long to finish my work. 23 | 24 | Need Payoff Q: If there was a way to automatically post relevant feedback to a pull request from a knowledge base you maintain using AI, how much time would that save you? 25 | A: If it worked well, that would probably cut my review time in half. 26 | ``` 27 | 28 | The above example shows how following SPIN can set the stage for the need before offering a solution. It's common for people to dive straight into the solution, which can lessen the chances of generating buy-in. 29 | 30 | ## How to Improve 31 | 32 | ### [Start A Book Club](/practices/start-a-book-club.md) 33 | 34 | - [SPIN Selling](https://www.amazon.com/SPIN-Selling-Neil-Rackham/dp/0070511136) 35 | 36 | This book emphasizes understanding the implications of problems and demonstrating the value of solutions, leading to more buy-in. Its research-based approach provides practical strategies to adapt to complex environments, making it a valuable resource for those looking to enhance their techniques and achieve better results. Although the book was originally intended for selling products and services, we can learn a lot from SPIN about how to sell ideas. 37 | 38 | ## Supporting Capabilities 39 | 40 | ### [Learning Culture](/capabilities/learning-culture.md) 41 | 42 | Use SPIN To Unearth Problems and Solutions promotes a learning culture because it provides a means to discover areas of individual, team, and organizational improvement. 43 | -------------------------------------------------------------------------------- /practices/use-templates-for-new-projects.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/use-templates-for-new-projects.md -------------------------------------------------------------------------------- /practices/use-test-doubles.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/use-test-doubles.md -------------------------------------------------------------------------------- /practices/version-dependencies.md: -------------------------------------------------------------------------------- 1 | # Version Dependencies 2 | 3 | The practice of Version Dependencies involves managing application dependencies by referencing them through specific, pinned versions. This approach ensures consistency, reliability, and traceability in software development projects by maintaining a clear record of all dependency versions used within an application.
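As a rough illustration (the package names and version numbers below are hypothetical examples, not recommendations), pinning dependencies to exact versions and checking the running environment against those pins might look like this in Python:

```python
# Hypothetical requirements.txt, with every dependency pinned to an exact version:
#
#   requests==2.31.0
#   sqlalchemy==2.0.25

from importlib.metadata import PackageNotFoundError, version

PINNED = {"requests": "2.31.0", "sqlalchemy": "2.0.25"}


def check_pins(pins: dict[str, str]) -> list[str]:
    """Return human-readable mismatches between the pinned versions and what is installed."""
    problems = []
    for name, expected in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            problems.append(f"{name} is pinned to {expected} but is not installed")
            continue
        if installed != expected:
            problems.append(f"{name} is pinned to {expected} but {installed} is installed")
    return problems


if __name__ == "__main__":
    for problem in check_pins(PINNED):
        print(problem)
```

A check like this can run early in a CI pipeline so that drift between the pinned versions and the environment is caught before the test suite runs.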
4 | 5 | ## Nuance 6 | 7 | ### Regular Updates Required 8 | Even with strict version control, dependencies must be regularly updated to address security vulnerabilities, bugs, and performance issues. This requires a balance between maintaining stability and incorporating necessary changes. 9 | 10 | ### Avoiding Dependency Hell 11 | Managing a complex web of dependencies can lead to "dependency hell," where updating one dependency necessitates cascading updates, potentially causing compatibility issues across the project. 12 | 13 | ### Locking Dependencies and Automated Tools 14 | To ensure stability and predictability in software projects, we recommend locking dependencies to specific versions. 15 | However, this approach requires a strategy for staying current with the latest fixes, improvements, and security patches. 16 | Automated tools like Dependabot help keep dependencies updated across projects. 17 | They monitor dependencies for new versions and can automatically create pull requests to update to these newer versions. 18 | 19 | ## How to Improve 20 | 21 | ### [Lead Workshops](/practices/lead-workshops.md) 22 | 23 | #### Audit Your Current Dependency Management 24 | 25 | Assess the current state of dependency management within your team or organization. The output should be a comprehensive report detailing which dependencies are managed effectively and which are not, identifying potential areas for improvement. 26 | 27 | #### Dependency Update Policy Review 28 | 29 | Review and potentially revise your current policies on updating dependencies. The goal is a refined update policy that balances the need for updates with the desire for stability, possibly leading to more efficient and secure project development. 30 | 31 | #### Simulate a Dependency Hell Scenario 32 | 33 | Simulate a "dependency hell" scenario to understand its impact and identify strategies for mitigation. The exercise provides practical experience in managing complex dependency chains, leading to improved strategies for avoiding or dealing with dependency hell in real projects. 34 | 35 | ### [Do A Spike](/practices/do-a-spike.md) 36 | 37 | #### Implement Semantic Versioning on a Small Scale 38 | 39 | Experiment with semantic versioning by applying it to a small, manageable portion of your project. The spike should yield insights into how semantic versioning affects project stability and the process of updating dependencies, helping you decide if a broader implementation is beneficial. 40 | 41 | #### Implement an Automatic Dependency Update Tool 42 | 43 | Lock major dependencies in your project and configure Dependabot or a similar tool to generate PRs when new versions of dependencies are published. Understand how automatic dependency update tools impact your workflow and the overall stability of the project. 44 | 45 | ### [Start A Book Club](/practices/start-a-book-club.md) 46 | 47 | #### [Dependencies Belong in Version Control](https://www.forrestthewoods.com/blog/dependencies-belong-in-version-control/) 48 | 49 | This article explores the importance of including dependencies within version control systems to ensure consistency, reliability, and traceability in software development projects. It discusses the benefits and methodologies of version controlling dependencies, offering insights into best practices for managing software dependencies effectively. 50 | 51 | ### [Host A Roundtable Discussion](/practices/host-a-roundtable-discussion.md) 52 | 53 | #### How Effective Is Your Dependency Management?
54 | 55 | * How effectively is your team currently managing dependency versions, and could a more systematic approach to version control improve project consistency and reliability? 56 | * Have you encountered issues with "dependency hell," and what strategies could you implement to mitigate these challenges while maintaining strict version control? 57 | * Is your current policy for updating dependencies proactive or reactive? 58 | How often do you review dependency versions for potential updates, and could this process be optimized? 59 | 60 | #### Are You Using Tools to Automate Dependency Updates? 61 | 62 | * Are automatic dependency update tools suitable for your project? 63 | * Could you benefit from using tools like Dependabot, Renovate, or Snyk to have dependency update pull requests generated automatically? 64 | 65 | ## Supporting Capabilities 66 | 67 | ### [Continuous Integration](https://dora.dev/devops-capabilities/technical/continuous-integration/) 68 | **Relationship:** Enables 69 | Version Controlled Dependencies ensure that all team members work with the same versions of dependencies, reducing integration conflicts and enabling more efficient continuous integration processes. 70 | 71 | ### [Database Change Management](https://dora.dev/devops-capabilities/technical/database-change-management/) 72 | **Relationship:** Enables 73 | By versioning database schema changes alongside code dependencies, teams can apply version control practices to database changes as well, facilitating smoother migrations and deployments. 74 | 75 | ### [Deployment Automation](https://dora.dev/devops-capabilities/technical/deployment-automation/) 76 | **Relationship:** Enables 77 | Having dependencies version-controlled allows for more predictable deployments, as the exact versions used in development are carried through to production environments, supporting automated deployment pipelines. 78 | 79 | ### [Version Control](/capabilities/version-control.md) 80 | **Relationship:** Requires 81 | The practice of Version Dependencies inherently requires a robust version control system to manage the dependencies' versions alongside the application's source code. 82 | 83 | ### [Documentation Quality](https://dora.dev/devops-capabilities/process/documentation-quality/) 84 | **Relationship:** Improves 85 | Proper versioning of dependencies can improve documentation quality by providing clear references to the specific versions of external libraries or frameworks used, making the documentation more accurate and useful. 86 | 87 | ### [Working in Small Batches](https://dora.dev/devops-capabilities/process/working-in-small-batches/) 88 | **Relationship:** Improves 89 | Version Controlled Dependencies support working in small batches by making it easier to manage and integrate small, incremental changes to dependencies, aligning with best practices for agile and DevOps methodologies.
90 | -------------------------------------------------------------------------------- /practices/write-characterization-testing-for-legacy-code.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/write-characterization-testing-for-legacy-code.md -------------------------------------------------------------------------------- /practices/write-code-in-functional-programming-style.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/write-code-in-functional-programming-style.md -------------------------------------------------------------------------------- /practices/write-code-with-single-responsibility.md: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /practices/write-ephemeral-model-based-tests.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/write-ephemeral-model-based-tests.md -------------------------------------------------------------------------------- /practices/write-invest-back-log-items.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/write-invest-back-log-items.md -------------------------------------------------------------------------------- /practices/write-performance-tests.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pragmint/open-practices/c4b865e555bdb8b0cb3bbfa167d192324c4d169c/practices/write-performance-tests.md -------------------------------------------------------------------------------- /resources/apprenticeship-patterns.md: -------------------------------------------------------------------------------- 1 | # *Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman* by Dave Hoover and Adewale Oshineye 2 | 3 | Resource type: Book 4 | 5 | https://www.amazon.com/Apprenticeship-Patterns-Guidance-Aspiring-Craftsman/dp/0596518382 6 | 7 | This book offers practical advice and proven techniques to help software developers grow and excel in their careers. The authors introduce a collection of patterns that guide readers in acquiring new skills, overcoming challenges, and building a strong foundation in software craftsmanship. This is a valuable resource for aspiring and experienced developers who want to continuously improve their craft, stay motivated, and navigate their career paths effectively. By following the actionable insights in this book, readers can cultivate a mindset of lifelong learning and become better software professionals. 
-------------------------------------------------------------------------------- /resources/boundaries.md: -------------------------------------------------------------------------------- 1 | # Boundaries by Gary Bernhardt 2 | 3 | Resource type: Video 4 | 5 | https://www.destroyallsoftware.com/talks/boundaries 6 | 7 | This presentation delves into the concept of using simple values rather than complex objects as the boundaries between components and subsystems in software development. It covers various topics such as functional programming, the relationship between mutability and object-oriented programming (OO), isolated unit testing with and without test doubles, and concurrency. Understanding and implementing these concepts can be immensely beneficial in managing dependencies with third parties. 8 | 9 | If you're watching the video with your team(s), you may want to pause and ponder at the following points: 10 | 11 | * 4:04 12 | * Do we do this? 13 | * Benefits / Downsides? 14 | * Do we want those benefits? 15 | * How valuable are they? 16 | * 8:54 17 | * No Dependencies; Defined by Inputs & Output 18 | * "Natural Isolation" 19 | * Is this cheating? Is this just a mock by another name? 20 | * "Good" Abstraction? 21 | * 9:09 22 | * Email is separated (hidden abstraction?) 23 | * Does the code need to know it's working with a DB or with an ORM? 24 | * With a BE or with Memory? 25 | * Can we find the dependencies in this code that its behavior doesn't really need to care about? 26 | * 9:20 27 | * Value is the Boundary (value = data; even OO Code is separate from it) 28 | * Where are potential value-boundaries in this file? 29 | * 10:20 30 | * Interesting but not essential 31 | * Also subjective, not everyone agrees 32 | * 13:58 33 | * Core & Shell 34 | * Shell has State 35 | * Shell knows about the ugly outside world 36 | * Where is the shell in our file? 37 | * Where can we extract cores from the shell? 38 | * Core makes Decisions; Shell knows Dependencies 39 | * 20:00 40 | * How does this apply to microservices? 41 | * Kafka? 42 | * gRPC & protobuf? 43 | * How should we abstract our microservices? 44 | -------------------------------------------------------------------------------- /resources/clean-architecture.md: -------------------------------------------------------------------------------- 1 | # *Clean Architecture* by Robert C. Martin 2 | 3 | Resource type: Book 4 | 5 | https://www.goodreads.com/book/show/18043011-clean-architecture 6 | 7 | This book focuses on foundational principles of software architecture, emphasizing the importance of separation of concerns, maintainability, and testability. The author covers key concepts such as independence of frameworks, boundary definitions, and the dependency rule, while providing practical examples for structuring scalable software systems. For developers and architects looking to design systems that are both adaptable and easy to maintain, this resource offers valuable insights and a solid framework for creating clean, robust architectures that align with business goals. 8 | -------------------------------------------------------------------------------- /resources/crucial-conversations.md: -------------------------------------------------------------------------------- 1 | # *Crucial Conversations: Tools for Talking When Stakes are High* by Joseph Grenny 2 | 3 | Resource type: Book 4 | 5 | https://www.amazon.com/Crucial-Conversations-Tools-Talking-Stakes/dp/1260474186 6 | 7 | This book focuses on mastering high-stakes conversations.
It offers strategies for effectively handling disagreements, difficult discussions, and emotional situations. Key concepts include dialogue skills, how to stay focused under pressure, and how to achieve positive outcomes during tough conversations. The tools it provides for improving communication, building trust, and resolving conflicts make it an essential resource for anyone seeking to enhance their interpersonal skills and navigate critical conversations with confidence. 8 | -------------------------------------------------------------------------------- /resources/debugging-with-the-scientific-method.md: -------------------------------------------------------------------------------- 1 | # Debugging with the Scientific Method by Stuart Halloway 2 | 3 | Resource type: Video 4 | 5 | https://www.youtube.com/watch?v=FihU5JxmnBg 6 | 7 | This video provides insights and techniques for using the scientific method to understand and debug software issues. -------------------------------------------------------------------------------- /resources/doubleloop-learning-review.md: -------------------------------------------------------------------------------- 1 | # DoubleLoop Learning Review (Episode 1) with John Cutler and Dan Schmidt 2 | 3 | Resource type: Video 4 | 5 | https://www.youtube.com/watch?v=9BBTwxLFfCc 6 | 7 | In this video, Dan Schmidt recounts his experience building DoubleLoop, a strategy development platform that uses visual strategy maps to create internal alignment and maximize impact. This is a valuable resource for teams that want to improve problem-solving and decision-making by challenging underlying assumptions. Viewers will gain practical insights into fostering a culture of continuous improvement where teams approach challenges with a reflective and adaptive mindset. 8 | -------------------------------------------------------------------------------- /resources/fifty-quick-ideas-to-improve-your-user-stories.md: -------------------------------------------------------------------------------- 1 | # *Fifty Quick Ideas to Improve Your User Stories* by Gojko Adzic, David Evans, and Nikola Korac 2 | 3 | Resource type: Book 4 | 5 | https://www.amazon.com/Fifty-Quick-Ideas-Improve-Stories/dp/0993088104 6 | 7 | This concise book offers practical tips for enhancing user stories, such as writing clear acceptance criteria, creating shared understanding, and avoiding common pitfalls in user story creation. The authors cover techniques that focus on clarity, collaboration, and effective communication. This is a valuable resource for product managers, agile teams, and developers looking to improve their user stories for better collaboration and more successful project outcomes. -------------------------------------------------------------------------------- /resources/flow-state.md: -------------------------------------------------------------------------------- 1 | # What is a flow state and what are its benefits? 2 | 3 | Resource type: Article 4 | 5 | https://www.headspace.com/articles/flow-state 6 | 7 | From the team behind the Headspace mindfulness and meditation app, this article explores the psychological concept of *flow*, a state of deep focus and immersion in an activity. It explains how flow enhances productivity, creativity, and overall well-being by reducing distractions and increasing enjoyment in tasks.
By helping readers understand how to enter and sustain a flow state, this resource is valuable for individuals looking to optimize their focus, boost efficiency, and experience greater satisfaction in daily activities. 8 | -------------------------------------------------------------------------------- /resources/hacking-challenge-at-defcon.md: -------------------------------------------------------------------------------- 1 | # Hacking challenge at DEFCON 2 | 3 | Resource type: Video 4 | 5 | https://www.youtube.com/watch?v=fHhNWAKw0bY 6 | 7 | This video provides insights into real-world social engineering scenarios, helping security professionals and enthusiasts, as well as ethical hackers, refine their techniques. -------------------------------------------------------------------------------- /resources/how-to-speak.md: -------------------------------------------------------------------------------- 1 | # How to Speak 2 | 3 | Resource type: Video 4 | 5 | https://www.youtube.com/watch?v=Unzc731iCUY 6 | 7 | The heuristic rules presented in this video by the late MIT Professor Patrick Henry Winston aim to improve your speaking ability in critical situations. -------------------------------------------------------------------------------- /resources/http1-vs-http2-vs-http3.md: -------------------------------------------------------------------------------- 1 | # HTTP 1 Vs HTTP 2 Vs HTTP 3! 2 | 3 | Resource type: Video 4 | 5 | http://youtube.com/watch?v=UMwQjFzTQXw 6 | 7 | This video from ByteByteGo eloquently explains the key differences between the three versions of the Hypertext Transfer Protocol (HTTP), focusing on their performance, efficiency, and security improvements. This resource is valuable for developers, network engineers, and tech enthusiasts who want to grasp how these protocols impact web performance and user experience. -------------------------------------------------------------------------------- /resources/is-domain-driven-design-overrated.md: -------------------------------------------------------------------------------- 1 | # Is Domain-Driven Design Overrated? by Stefan Tilkov 2 | 3 | Resource type: Video 4 | 5 | https://www.youtube.com/watch?v=ZZp9RQEGeqQ 6 | 7 | This talk by Stefan Tilkov explores the contributions and misconceptions of domain-driven design (DDD). It offers practical guidelines for using DDD effectively and approaching software design trends with a balanced perspective. -------------------------------------------------------------------------------- /resources/learning-domain-driven-design.md: -------------------------------------------------------------------------------- 1 | # *Learning Domain-Driven Design* by Vlad Khononov 2 | 3 | Resource type: Book 4 | 5 | https://www.amazon.com/Learning-Domain-Driven-Design-Aligning-Architecture/dp/1098100131 6 | 7 | This book introduces key concepts of domain-driven design (DDD), such as bounded contexts, aggregates, and entities, while providing practical examples of ways to apply these concepts in real-world software projects. Designed for developers and architects looking to implement DDD principles effectively, this resource gives readers a clear, structured approach to building complex systems that align software design with business needs. This is an invaluable tool for those seeking to improve collaboration between technical and business teams when creating more maintainable and scalable systems. 
-------------------------------------------------------------------------------- /resources/maker-time-vs-manager-time.md: -------------------------------------------------------------------------------- 1 | # Maker Time vs Manager Time 2 | 3 | Resource type: Video 4 | 5 | https://www.youtube.com/watch?v=GIRkQQHzsxI 6 | 7 | This video by Alex Hormozi provides insights into how to better invest your time to maximize productivity. 8 | -------------------------------------------------------------------------------- /resources/owasp-risk-rating-methodology.md: -------------------------------------------------------------------------------- 1 | # OWASP Risk Rating Methodology by Jeff Williams 2 | 3 | Resource type: Article 4 | 5 | https://owasp.org/www-community/OWASP_Risk_Rating_Methodology 6 | 7 | This article provides a structured approach to assessing security risks in applications. It outlines a step-by-step process to evaluate threats based on likelihood and impact, helping organizations effectively prioritize vulnerabilities. This resource is valuable for security professionals and developers seeking a standardized method for assessing and mitigating risks. By using this methodology, readers can make informed decisions on risk management and improve the security posture of their applications. -------------------------------------------------------------------------------- /resources/radical-candor.md: -------------------------------------------------------------------------------- 1 | # *Radical Candor* by Kim Scott 2 | 3 | Resource type: Book 4 | 5 | https://www.amazon.com/Radical-Candor-Revised-Kick-Ass-Humanity/dp/1250235375 6 | 7 | This best-selling book emphasizes the importance of caring personally while challenging directly in order to build strong relationships and improve team performance. Packed with practical advice for leaders, this book discusses key concepts such as providing honest feedback, fostering a culture of trust, and developing a leadership style that balances compassion and directness. This resource will help readers create open, effective communication in their teams, resulting in better collaboration and growth. It is a valuable tool for anyone looking to build trust and deliver feedback in a way that motivates and inspires. -------------------------------------------------------------------------------- /resources/stride-threat-modeling.md: -------------------------------------------------------------------------------- 1 | # STRIDE Threat Modeling by Nick Kirtley 2 | 3 | Resource type: Article 4 | 5 | https://threat-modeling.com/stride-threat-modeling/ 6 | 7 | This article offers a comprehensive guide to identifying and mitigating security threats in software systems using the STRIDE methodology: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. The author aims to help readers better understand two key things: how to do a proactive threat analysis and how to design more secure systems. This resource is invaluable for security professionals, developers, and teams looking to improve their threat modeling and security posture. 
-------------------------------------------------------------------------------- /resources/talk-less-listen-more.md: -------------------------------------------------------------------------------- 1 | # Talk less, listen more: 6 reasons it pays to learn the art by Maggie Wooll 2 | 3 | Resource type: Article 4 | 5 | https://www.betterup.com/blog/talk-less-listen-more 6 | 7 | This article by researcher and author Maggie Wooll highlights the importance of active listening in personal and professional relationships. It discusses six key benefits of active listening, such as building trust, improving understanding, and fostering better communication. This resource is valuable for individuals looking to enhance their communication skills and develop stronger connections with others. By applying the insights from this article, readers can become more empathetic, collaborative, and effective in their interactions. -------------------------------------------------------------------------------- /resources/the-clean-coder.md: -------------------------------------------------------------------------------- 1 | # *The Clean Coder: A Code of Conduct for Professional Programmers* by Robert C. Martin 2 | 3 | Resource type: Book 4 | 5 | https://www.amazon.com/Clean-Coder-Conduct-Professional-Programmers/dp/0137081073 6 | 7 | This book provides practical advice on the ethics, discipline, and mindset required to excel as a professional software developer. It covers essential topics such as time management, handling pressure, and taking responsibility for delivering high-quality code. This resource is valuable for both aspiring and experienced developers who want to improve their professionalism, communication, and decision-making skills in the workplace. By following the principles outlined in this book, readers can cultivate a strong work ethic and become more reliable and effective software professionals. -------------------------------------------------------------------------------- /resources/the-five-dysfunctions-of-a-team.md: -------------------------------------------------------------------------------- 1 | # *The Five Dysfunctions of a Team: A Leadership Fable* by Patrick M. Lencioni 2 | 3 | Resource type: Book 4 | 5 | https://www.amazon.com/Five-Dysfunctions-Team-Leadership-Fable/dp/8126522747 6 | 7 | This book explores the common pitfalls that hinder team effectiveness, including lack of trust, fear of conflict, lack of commitment, avoidance of accountability, and inattention to results. The author provides readers with a clear framework for addressing and overcoming these dysfunctions, ultimately fostering stronger, more cohesive teams. The material is structured around a leadership fable, making its lessons engaging and practical. This resource is ideal for leaders looking to improve team dynamics, enhance collaboration, and achieve better results. 8 | -------------------------------------------------------------------------------- /resources/the-lean-startup.md: -------------------------------------------------------------------------------- 1 | # *The Lean Startup* by Eric Ries 2 | 3 | Resource type: Book 4 | 5 | https://www.amazon.com/Lean-Startup-Entrepreneurs-Continuous-Innovation/dp/0307887898 6 | 7 | This book presents key principles in lean manufacturing and agile development to help startups innovate, scale, and succeed more efficiently. 
The author introduces core concepts in startup development like validated learning, minimum viable products (MVP), and the Build-Measure-Learn feedback loop, which focus on rapid iteration and early testing to best meet customer needs and reduce uncertainty. This valuable guide will help readers navigate the challenges of launching and growing a business. 8 | -------------------------------------------------------------------------------- /resources/the-one-minute-manager.md: -------------------------------------------------------------------------------- 1 | # *The One Minute Manager* by Ken Blanchard and Spencer Johnson 2 | 3 | Resource type: Book 4 | 5 | https://www.amazon.com/The-One-Minute-Manager/dp/0688014291 6 | 7 | This concise book outlines three core management techniques: One Minute Goals, One Minute Praisings, and One Minute Reprimands. The authors offer valuable insights for leaders seeking to improve communication, motivation, and performance within their teams. This resource is ideal for busy managers looking for simple yet effective strategies to boost productivity and foster positive workplace relationships. 8 | -------------------------------------------------------------------------------- /resources/the-power-of-vulnerability.md: -------------------------------------------------------------------------------- 1 | # *The Power of Vulnerability* by Brené Brown 2 | 3 | Resource type: Video 4 | 5 | https://www.youtube.com/watch?v=iCvmsMzlF7o 6 | 7 | In this popular TED Talk, Brené Brown explores the importance of vulnerability in building connection, courage, and empathy. Brown discusses how embracing vulnerability can lead to stronger relationships and personal growth. This resource is valuable for viewers seeking to improve their emotional intelligence and create more authentic connections in both their personal and professional lives. This talk also offers insights into how to overcome shame and fear, foster resilience, and lead with authenticity. 8 | -------------------------------------------------------------------------------- /resources/the-reasonable-expectations-of-your-new-cto.md: -------------------------------------------------------------------------------- 1 | # The Reasonable Expectations of Your New CTO 2 | 3 | Resource type: Video 4 | 5 | https://vimeo.com/channels/889130/84676528 6 | 7 | In this talk, Robert C. Martin (aka "Uncle Bob") asks viewers to imagine that he is their new CTO. He describes his expectations of his development team, underscoring the importance of professionalism in the face of change. -------------------------------------------------------------------------------- /resources/what-is-dns.md: -------------------------------------------------------------------------------- 1 | # What is DNS? 2 | 3 | Resource type: Video 4 | 5 | https://www.youtube.com/watch?v=NiQTs9DbtW4 6 | 7 | This video explains the Domain Name System (DNS), which acts as the internet’s phonebook by translating domain names into IP addresses. It covers how DNS works, its key components, and why it's essential for web browsing and online communication. This resource is valuable for beginners, IT professionals, and anyone interested in understanding how websites are accessed and how internet performance and security may be impacted. Viewers can gain a clear understanding of DNS functionality, troubleshooting techniques, and best practices for maintaining a reliable and secure internet connection.
8 | -------------------------------------------------------------------------------- /resources/what-is-your-working-genius.md: -------------------------------------------------------------------------------- 1 | # What is Your Working Genius? 2 | 3 | Resource type: Video 4 | 5 | https://www.youtube.com/watch?v=L90qkyBfYFI 6 | 7 | In this video, Patrick Lencioni explores the concept of “Working Genius,” a framework that identifies six types of work-related strengths and how they contribute to team success. Lencioni explains how understanding these strengths can help individuals and teams operate more efficiently and reduce frustration. This resource is valuable for leaders, managers, and team members who want to improve collaboration, delegate tasks effectively, and enhance overall productivity. Viewers can learn how to leverage their unique talents and create a more cohesive, high-performing team environment. -------------------------------------------------------------------------------- /resources/winnable-and-unwinnable-games.md: -------------------------------------------------------------------------------- 1 | # Winnable and Unwinnable Games (Part 1) by John Cutler 2 | 3 | Resource type: Article 4 | 5 | https://cutlefish.substack.com/p/tbm-229-winnable-and-unwinnable-games 6 | 7 | This article explores how to recognize whether a situation or challenge in work or life is “winnable” or “unwinnable.” It discusses how understanding this distinction can help individuals and teams focus their efforts on achievable goals, while avoiding frustration in impossible situations. This resource is valuable for professionals, leaders, and teams seeking to make smarter decisions and allocate their energy more effectively. By applying the insights from this article, readers can improve their problem-solving and time-management strategies, and focus on meaningful, attainable objectives. -------------------------------------------------------------------------------- /resources/zebras-all-the-way-down.md: -------------------------------------------------------------------------------- 1 | # Zebras All the Way Down by Bryan Cantrill 2 | 3 | Resource type: Video 4 | 5 | https://www.youtube.com/watch?v=fE2KDzZaxvE 6 | 7 | In this video, Bryan Cantrill explores the complexities of building and maintaining stateful data paths, reflecting on two decades of challenges in storage networking and cloud services. -------------------------------------------------------------------------------- /templates/new-practice.md: -------------------------------------------------------------------------------- 1 | # `Title of Practice (note: this should start with a verb to ensure it's actionable)` 2 | 3 | 4 | ``` 5 | High-level, concise summary of practice. (~ 2-3 sentences) 6 | 7 | Keep the text brief, relatable, and motivating. Our audience wants a quick read. The writing style should be conversational, not academic or overly technical. 8 | ``` 9 | 10 | 11 | ## Value Statement(s) 12 | 13 | 14 | ``` 15 | List the intended user(s) of this practice, the problem they're experiencing, and their end goal. 16 | 17 | Here's a model to follow: 18 | 19 | "I am [role] and I need to [learn how to... ensure that...] so that I can [end goal]." 20 | 21 | Include this line as many times as needed to capture the different personas that may benefit from this practice. 
22 | 23 | To keep entries somewhat standardized, please select from the following personas: 24 | * Non-technical executive stakeholders 25 | * Technical executive stakeholders 26 | * Developers 27 | * QA 28 | * Project managers 29 | * Product managers 30 | ``` 31 | 32 | 33 | ## Nuances 34 | 35 | This section outlines pitfalls, challenges, or limitations teams commonly encounter when applying this practice. The goal here is not to discourage you. Rather, the goal is to arm you with the appropriate context so that you can make an informed decision about when and how to implement the practice with your team(s). 36 | 37 | 38 | ``` 39 | Each nuance point should have its own section starting with `###` and a title, followed by a brief description (3-5 sentences). Ensure that each section covers a pitfall, challenge, or limitation that teams commonly encounter when applying this practice. Aim for 2-5 nuance subsections. 40 | ``` 41 | 42 | 43 | ## Gaining Traction 44 | 45 | The following actions will help your team implement this practice. 46 | 47 | 48 | ``` 49 | The goal of this section is to show readers what it takes to guide a team from zero awareness to full adoption of this practice. There will be numerous social and technical hurdles that need to be handled. Each sub-section (action) listed here should be ordered chronologically for teams looking to gain traction. 50 | 51 | Each Gaining Traction action should have its own section starting with `###` and a title (e.g., Run a Retrospective). Then, add a brief description (3-5 sentences) of the action, including ways the team can generate buy-in, get practical experience, or make the practice a common part of the normal routine. These sections might be steps someone could take to get their team to adopt the practice. The goal is to give someone who wants to introduce this practice to their team a path that is as turnkey as possible. 52 | 53 | For the actions you list here, you may want to take inspiration from the general practices in [Learning Culture](/capabilities/learning-culture.md#supporting-practices). Briefly discuss how to put those general practices to use by adding specifics such as unique talking points, demonstration instructions, roundtable discussion prompts, and links to external resources. 54 | 55 | For example, a general practice is to Facilitate a Roundtable Discussion. In the Run Pair Programming Sessions practice page, we elaborate on *how* to facilitate a roundtable discussion by listing **specific discussion prompts** related to pair programming, such as "How frequently do we engage in pair programming sessions, and are they integrated into our regular workflow?" 56 | 57 | Another example of a general practice is to Start a Book Club. In the Reduce Coupling Between Abstractions practice, we list specific books that can help put this practice to use, such as *Refactoring* by Martin Fowler and *Clean Architecture* by Robert C. Martin. 58 | 59 | Generally applicable [resources](/resources/), such as videos and books, may be leveraged to support an action; make sure to link to them here. If the resource you have in mind is super specific to the practice, then you can include a link and brief description here. 60 | 61 | See the Gaining Traction section in [other practice pages](/practices/) for more examples. 62 | ``` 63 | 64 | 65 | ## Success Criteria 66 | 67 | 68 | ``` 69 | How will the reader measure the impact/success of adopting this practice?
Paint a picture of what "done" looks like (since this will vary by team). 70 | ``` 71 | 72 | 73 | ## Supported Capabilities 74 | 75 | This practice supports enhanced performance in the following capabilities. 76 | 77 | 78 | ``` 79 | The final section lists a handful of other DORA Capabilities (roughly 1-4) that are supported by this Practice. Each Capability you list here should have a title (starting with `###`) and brief description (2-4 sentences). The title should be an existing, linked DORA [Capability](/capabilities/) from the repository. The description text should cover how this particular practice supports the Capability listed. 80 | ``` 81 | 82 | -------------------------------------------------------------------------------- /templates/new-resource.md: -------------------------------------------------------------------------------- 1 | # `Title of Resource` 2 | 3 | Resource type: `resource type` 4 | 5 | `Link To Resource (if applicable)` 6 | 7 | 8 | ``` 9 | The body of each of these resource pages is likely to be very different. So, instead of providing you with a bunch of structure to fill in the blanks, we'll share some things you might want to consider when contributing a resource and then let you take it from there. 10 | 11 | ## What Sort of Resources Are Appropriate For This Section? 12 | 13 | 1. Code snippet(s) that showcase a technique or principle in action 14 | 2. Video / conference talk 15 | 3. Article 16 | 4. Book 17 | 5. Workshop 18 | 6. Code Kata 19 | 7. Roundtable discussion points 20 | 8. Anything else that helps people understand a concept or set of concepts 21 | 22 | ## When Should You Add a New Resource Page? 23 | 24 | Once you have a resource in mind, ask yourself the following two questions: 25 | 26 | 1. Are there multiple existing [practices](/practices/) that would benefit from this supporting resource? 27 | 2. Is this resource especially high-quality (i.e., something you regularly share, or would want to share, with peers for support)? 28 | 29 | If your answer to both questions is "yes," then proceed with adding it to the repository! 30 | 31 | ## What Sort of Notes Should I Include On This Page? 32 | 33 | This is where the format will likely vary greatly from resource to resource. Generally, the following content is helpful for these example resources: 34 | 35 | 1. Video 36 | a. timestamps where certain concepts are covered, where there are good points to ponder, or where you can pause and ask questions of your team 37 | b. brief description of the resource (that can be used by someone coordinating a watch party with their team) 38 | c. brief description of how this resource can bring value to viewers 39 | 2. Workshop 40 | a. slide deck 41 | b. speaker notes 42 | c. brief description of the resource (that can be used by someone coordinating a workshop with their team) 43 | d. brief description of how this resource can bring value to the team 44 | 3. Book 45 | a. core concepts discussed in the book and select chapters 46 | b. estimated time range for reading the book 47 | c. brief description of how this resource can bring value to the reader 48 | d. brief description of the resource (that can be used by someone coordinating a book club with their team) 49 | 50 | 51 | After you've thought through these questions and reviewed other [resources](/resources/), feel free to replace this section with whatever you deem appropriate. 52 | ``` 53 | 54 | --------------------------------------------------------------------------------