├── fogg_model.md ├── .DS_Store ├── jtbd-forces.png ├── images ├── job-canvas.png ├── jtbd-forces.png └── assumption-mapping.png ├── template.md ├── 7_powers.md ├── spikes.md ├── DIBBs.md ├── one_pager_template.md ├── sankey_diagram_needs.md ├── one_pager.md ├── product_brief.md ├── product_positioning.md ├── confidence_check.md ├── metric_pairing.md ├── dark_launch.md ├── prioritisation_rice.md ├── product_increments.md ├── critique.md ├── assumption_mapping.md ├── working_backwards.md ├── riskiest_assumption_test.md ├── user_testing.md ├── OKR.md ├── job_canvas.md ├── opportunity_solution_tree.md ├── kano_model.md ├── jtbd_interview.md ├── experimentation.md ├── product_kata.md ├── story_mapping.md ├── action_priority_matrix.md ├── heart_framework.md ├── prioritising_bugs.md ├── north_star_framework.md ├── design_sprint.md └── README.md /fogg_model.md: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/colivetree/product-playbook/HEAD/.DS_Store -------------------------------------------------------------------------------- /jtbd-forces.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/colivetree/product-playbook/HEAD/jtbd-forces.png -------------------------------------------------------------------------------- /images/job-canvas.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/colivetree/product-playbook/HEAD/images/job-canvas.png -------------------------------------------------------------------------------- /images/jtbd-forces.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/colivetree/product-playbook/HEAD/images/jtbd-forces.png -------------------------------------------------------------------------------- /images/assumption-mapping.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/colivetree/product-playbook/HEAD/images/assumption-mapping.png -------------------------------------------------------------------------------- /template.md: -------------------------------------------------------------------------------- 1 | # TITLE 2 | 3 | ## Intro 4 | 5 | 6 | ## When to Run 7 | 8 | 9 | ## Why to Run 10 | 11 | 12 | ## Roles 13 | 14 | 15 | ## How to Run 16 | 17 | 18 | 19 | ## Tips and Resources 20 | 21 | 22 | 23 | ## Related plays: 24 | -------------------------------------------------------------------------------- /7_powers.md: -------------------------------------------------------------------------------- 1 | # 7 Powers 2 | 3 | ## Intro 4 | 5 | 6 | ## When to Run 7 | 8 | 9 | ## Why to Run 10 | 11 | 12 | ## Roles 13 | 14 | 15 | ## How to Run 16 | 17 | 18 | 19 | ## Tips and Resources 20 | 21 | 22 | 23 | ## Related plays: 24 | -------------------------------------------------------------------------------- /spikes.md: -------------------------------------------------------------------------------- 1 | # TITLE (WIP) 2 | 3 | ## Intro 4 | 5 | 6 | ## When to Run 7 | 8 | 9 | ## Why to Run 10 | 11 | 12 | ## Roles 13 | 14 | 15 | ## How to Run 16 | 17 | 18 | 19 | ## Tips and Resources 20 | 21 | 22 | 23 | ## Related plays: 24 | 
-------------------------------------------------------------------------------- /DIBBs.md: -------------------------------------------------------------------------------- 1 | # DIBBs 2 | 3 | ## Intro 4 | 5 | 6 | ## When to Run 7 | 8 | 9 | ## Why to Run 10 | 11 | 12 | ## Roles 13 | 14 | 15 | ## How to Run 16 | 17 | 18 | 19 | ## Tips and Resources 20 | 21 | 22 | 23 | ## Related plays: 24 | * [DIBBs Spotify](https://blog.crisp.se/2016/06/08/henrikkniberg/spotify-rhythm) -------------------------------------------------------------------------------- /one_pager_template.md: -------------------------------------------------------------------------------- 1 | # One Pager Template - Replace this with catchy title 2 | 3 | ## What problem are we solving 4 | * Define Key Hypothesis 5 | * Based on [evidence], we believe that [this change] will result in [this outcome] 6 | 7 | ## How do we know it is a real problem 8 | 9 | * Evidence of willingness to pay 10 | * Market Size 11 | * Evidence of pain point, current workaround people are using to solve it 12 | 13 | ## How do we propose to solve the problem 14 | 15 | * How will it delight customers 16 | * Why is it hard-to-copy 17 | * Why does it have net positive (growing) margins / unit economics 18 | 19 | ## How will we know we’ve solved it? 20 | * Financial Goals 21 | * Milestones 22 | 23 | ## Why are we the right team to do it? 24 | * Why this team (who / what skills does it need) 25 | * Why now 26 | 27 | -------------------------------------------------------------------------------- /sankey_diagram_needs.md: -------------------------------------------------------------------------------- 1 | # Sankey Diagram of Needs - Flow Mapping 2 | 3 | ## Intro 4 | This play uses a well-known visualization mechanism (flow mapping) to create a helpful prioritization and visual communication tool that lets you see why certain activities will be higher leverage than others by aggregating solutions to a number of user needs shared by multiple customer segments. 5 | 6 | ## When to Run 7 | When multiple independent solutions to a problem can be tested, when it's hard to understand and attribute conflicting views on the solution space to customer needs. 8 | 9 | ## Why to Run 10 | Sometimes the hardest part about prioritising is simply making sense of the space and reducing entropy by converting a wide array of solutions into a consolidated way of tackling a user problem. The Sankey Diagram of Needs organizes the solution space by the number of problems it solves for its addressable customer segments. 11 | 12 | ## Roles 13 | * Product Development Team 14 | 15 | ## How to Run 16 | * 1) Map target user segments 17 | * 2) Map main known user needs 18 | * 3) Map solutions in play. They should be as independent as possible. In particular, if one possible solution contains another, the exercise's utility is voided. 19 | * 4) Trace relationships between layers 1, 2 and 3. 20 | * 5) Focus on areas of highest leverage (from right to left), represented by highest flow (a small aggregation sketch follows this list).
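One hedged way to make step 5 concrete is to compute the aggregate "flow" that ends up at each solution. The sketch below is illustrative only: the segments, needs, solutions and sizes are made-up examples, and each segment's size is simply split evenly across its needs and then across the solutions that address each need.

```python
# Aggregating flow through a segments -> needs -> solutions graph to see
# which solution carries the most flow (i.e. has the highest leverage).
# All names and numbers below are hypothetical.
from collections import defaultdict

segment_size = {"frequent traveller": 5000, "occasional tripper": 20000}

segment_needs = {  # layer 1 -> layer 2 edges
    "frequent traveller": ["find a destination fast", "rebook easily"],
    "occasional tripper": ["find a destination fast"],
}

need_solutions = {  # layer 2 -> layer 3 edges
    "find a destination fast": ["personalised recommendations", "better search filters"],
    "rebook easily": ["one-tap rebooking"],
}

solution_flow = defaultdict(float)
for segment, needs in segment_needs.items():
    per_need = segment_size[segment] / len(needs)
    for need in needs:
        solutions = need_solutions[need]
        for solution in solutions:
            solution_flow[solution] += per_need / len(solutions)

# Highest aggregate flow first = highest-leverage candidates.
for solution, flow in sorted(solution_flow.items(), key=lambda kv: -kv[1]):
    print(f"{solution}: {flow:.0f}")
```

The same segment, need and solution lists can then be fed into any Sankey charting tool (see the resources below) to draw the actual diagram.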
21 | 22 | ## Tips and Resources 23 | * [Sankey Diagrams](http://www.gameanalytics.com/blog/visualizing-dynamic-behavior-flow.html) 24 | * [Example visualisation]() 25 | 26 | ## Related plays: 27 | -------------------------------------------------------------------------------- /one_pager.md: -------------------------------------------------------------------------------- 1 | # One Pager 2 | 3 | ## Intro 4 | A one-pager is a well-known medium for creating shared context and articulating product vision in a simplified manner, outlining its key assumptions, goals and differentiating factors. 5 | 6 | ## When to Run 7 | When starting a new product you want to be able to clearly articulate its benefits and its potential in a very simple way. This helps rally others behind the vision and clarifies the premises it lies on, forcing you to examine your idea critically. If good design is as little design as possible, good documentation is as little documentation as possible. 8 | 9 | ## Why to Run 10 | A powerful one-pager allows people to go from zero knowledge to a clear understanding of the problem it solves and its key assumptions in under 5 minutes. It's a conversation starter and an updated view of what the product needs to become. 11 | 12 | ## Roles 13 | Product Manager / Author / Creator 14 | 15 | ## How to Run 16 | 1) Write the one-pager using a format like the one in the resources below: [see here](https://github.com/colivetree/product-playbook/blob/master/one_pager_template.md) - Keep it down to one page only! The rest of the details can live in a longer brief or a six-pager. 17 | 2) Talk about it, open it up for comments, let people challenge it 18 | 3) Review, rinse, repeat 19 | 4) Share it regularly when people ask what you're working on and let it evolve as new information comes in 20 | 21 | ## Tips and Resources 22 | [One-pager template](https://github.com/colivetree/product-playbook/blob/master/one_pager_template.md) 23 | 24 | ## Related plays: 25 | * [Working Backwards](https://github.com/colivetree/product-playbook/blob/master/working_backwards.md) 26 | * [Product brief](https://github.com/colivetree/product-playbook/blob/master/product_brief.md) 27 | -------------------------------------------------------------------------------- /product_brief.md: -------------------------------------------------------------------------------- 1 | # Product Brief 2 | 3 | ## Intro 4 | Writing a good product brief is a fundamental step to draw a common understanding from a mix of inputs, vision, strategy, constraints and ideas that typically precede product development. It can assume any number of shapes and formats, although some key elements should be present to guarantee there is enough clarity about the outcome or measure of progress we want to achieve, while only providing guardrails in terms of execution. 5 | 6 | ## When to Run 7 | At the outset of a project or product cycle, when a development team needs to rally behind a common goal and share a mental model of the problem they are trying to solve. 8 | 9 | ## Why to Run 10 | To write down and have a common reference for what the project is all about. In situations of uncertainty, a shared view is critical to rein in natural fears and uncertainties. 11 | 12 | ## Roles 13 | * Product Manager - in charge of writing the brief and communicating it to the team 14 | * Development team - in charge of peer reviewing, questioning and committing to it.
15 | 16 | ## How to Run 17 | * 1) Clearly define the "What" - Define the problem you are solving 18 | * 2) Clearly state the "Why" - How do you know that is a real problem 19 | * 3) Define "Success" - How will you know you've solved it 20 | * 4) Lay down the guardrails - What's in scope / out of scope for your next cycle 21 | 22 | ## Tips and Resources 23 | * [Product Brief Template](Insertemplate.com) 24 | * [Intercom's How We Build Software](https://blog.intercom.com/how-we-build-software/) 25 | * [Julie Zhuo's Building Products](https://medium.com/the-year-of-the-looking-glass/building-products-91aa93bea4bb) 26 | 27 | ## Related plays: 28 | * [Positioning](https://github.com/colivetree/product-playbook/blob/master/product_positioning.md) 29 | * [First Principles]() 30 | -------------------------------------------------------------------------------- /product_positioning.md: -------------------------------------------------------------------------------- 1 | # Product Positioning 2 | 3 | ## Intro 4 | Defining your product positioning allows you to clearly define what you're creating in opposition to an equally valid stance. It gives your team clarity in making decisions by clearly contrasting what your product is and what it could have been - a hard trade-off needs to be present in any defined position. 5 | 6 | ## When to Run 7 | When starting a new product. When defining your unique value proposition. When conversations around the product experience are going in opposite directions. When your product feels bland / undifferentiated. When you need cross-team alignment. 8 | 9 | ## Why to Run 10 | It creates a clear mental model of what the product is for everyone who is involved in building it. It helps you make clear prioritisation decisions and reject feature requests by knowing exactly where you stand. It can easily be represented on a scale. 11 | 12 | ## Roles 13 | * Product Manager 14 | 15 | ## How to Run 16 | * 1. Define each of your principles in 3 dimensions: More about, Less about, Explanation 17 | ** More about: "Destination recommendations" 18 | ** Less about: "Explicitly asking users where they want to go based on their budget" 19 | ** Explanation: "We want to capture users' intent from their interactions, without an explicit need to request information from them" 20 | * 2. Chart your positioning on a scale/spectrum, establishing the gap between where you are and where you'd want to be. 21 | * 3. Have your product positioning visible for your teams at all times 22 | 23 | ## Tips and Resources 24 | [Position, Position, Position](https://m.signalvnoise.com/position-position-position-34b510a28ddc) 25 | [We don't sell saddles here](https://medium.com/@stewart/we-dont-sell-saddles-here-4c59524d650d) 26 | 27 | ## Related plays: 28 | * Product Brief 29 | * Product First Principles 30 | -------------------------------------------------------------------------------- /confidence_check.md: -------------------------------------------------------------------------------- 1 | # Confidence Check 2 | 3 | ## Intro 4 | A regular group exercise run to assess the group's confidence in moving the work forward. It might be a design piece, an engineered feature or experiment, a new landing page or a prototype for user testing. In this exercise, the group compares the potential impact and risk of its work with how confident they feel in moving it forward, laying out the next steps for each Work In Progress item.
5 | 6 | ## When to Run 7 | The confidence check is a nice cross-disciplinary exercise to run every week (or fortnight) as it gives everyone a space to make shared decisions and commitments while keeping concerns and risks visible and actionable. 8 | 9 | ## Why to Run 10 | Teams often struggle to make decisions with joint buy-in. Regular get-togethers ensure that work is discussed openly, that the impact and goals of the work are clearly laid out, that conversations can be had from an early stage and people's work process and progress happen in the open, and that a joint framework for moving work forward can be agreed upon. 11 | 12 | ## Roles 13 | * Product Team 14 | 15 | ## How to Run 16 | 1) Conversation moves around the table, spotlight moving in turn to each member of the team 17 | 2) Each team member takes 5-10 mins to share what they've worked on last week, focusing on their latest explorations, what they've learned last week that led them there and what feedback they would need in order to make progress 18 | 3) The others voice their feedback in turn, offering feedback with a focus on: 1) how the solutions best connect to the goal, 2) what are the risks (business, user, technical) they see with each solution, 3) how confident they would be in shipping the solution now and why 19 | 4) The team decides what next steps they need to take next week to solve outstanding risks without which they can't ship 20 | 5) The team moves to the next team member 21 | 22 | ## Related plays: 23 | * [User Testing Sessions](https://github.com/colivetree/product-playbook/blob/master/user_testing.md) 24 | * [Confidence Check](https://github.com/colivetree/product-playbook/blob/master/confidence_check.md) 25 | -------------------------------------------------------------------------------- /metric_pairing.md: -------------------------------------------------------------------------------- 1 | # Metric Pairing 2 | 3 | ## Intro 4 | One of the common side-effects of optimising for a single metric is blindness to context and an inability to account for ripple effects in areas that are just as important. In e-commerce, for example, Conversion Rate is often paired with Total Basket Size, as streamlining the funnel experience can hurt the ability to upsell/xsell. Similarly, Ad CTRs can correlate inversely with Funnel Activation, as driving more users to a funnel with misleading copy can hurt overall product activation and create user-renting side-effects. The idea then is to set a primary metric for a given goal, but assign it a paired metric. 5 | 6 | ## When to Run 7 | In goal-setting exercises, in metric optimisation exercises, and in scaling and validation plays where there are hidden business side effects that can't be captured by the scope and focus provided by a single metric. 8 | 9 | ## Why to Run 10 | Because users are multi-dimensional and metrics need to be used as a useful proxy for intended human behaviour. This means that while we can often nudge people in the intended business direction, we need to account for the hidden, subversive consequences of those nudges. We use a balancing, counteracting paired metric to do so in a measurable manner.
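One hedged way to make the pairing operational (a sketch with assumed inputs, not part of the original play) is a simple guardrail check: accept a change only when the primary metric improves and the paired metric stays flat or positive, which is the acceptance rule described under How to Run below.

```python
# Guardrail check for a primary/paired metric pair. The tolerance and the
# example numbers are illustrative assumptions.
def accept_change(primary_delta: float, paired_delta: float, tolerance: float = 0.0) -> bool:
    """Deltas are relative changes, e.g. +0.03 for a 3% lift."""
    improved_primary = primary_delta > 0
    paired_not_hurt = paired_delta >= -tolerance  # paired metric flat or positive
    return improved_primary and paired_not_hurt

# Conversion rate +2% but basket size -1.5% -> the change is rejected.
print(accept_change(primary_delta=0.02, paired_delta=-0.015))  # False
```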
11 | 12 | ## Roles 13 | * Product Management, Data Analysts, Business Leadership 14 | 15 | ## How to Run 16 | * 1) Define a key metric to optimise for with an experiment, feature launch or feature set 17 | * 2) Define counteracting metrics that can be used as a balancing force for the exercise (see resources for some examples) 18 | * 3) Accept improvements in key metrics ONLY IF there's no negative effect on the paired metric (i.e. the paired metric is flat or positive) 19 | 20 | ## Tips and Resources 21 | * [Matty Ford - Paired Metrics](https://mattyford.com/blog/2014/6/11/paired-metrics) 22 | * [Marc Andreessen on Paired Metrics](http://pmarcatweetsasblogposts.tumblr.com/post/73631082205/measure-performance-with-paired-metrics-for-best) 23 | * [Andy Grove's High Output Management on Pairing Indicators](https://www.amazon.com/High-Output-Management-Andrew-Grove/dp/0679762884) 24 | 25 | ## Related plays: 26 | -------------------------------------------------------------------------------- /dark_launch.md: -------------------------------------------------------------------------------- 1 | # Dark Launch 2 | 3 | ## Intro 4 | Launching features / making changes on live products is risky; it reasonably often brings down an entire company's infrastructure and hurts revenue and growth, both directly and by affecting its relationships of trust with users and partners. Facebook famously moved from their "Move Fast and Break Things" motto to the less catchy "Move Fast with Stable Infra" to account for the need to keep things running for their customers, while iterating fast to deliver value. The alternative would be analysis paralysis and severe risk aversion. Dark Launching is one of the many available techniques to mitigate technical risk before releasing a feature to real customers. It's based on asynchronously loading a feature or service in a production environment, running it as if it were visible to customers, while still keeping it "in the dark" / invisible to users. 5 | 6 | ## When to Run 7 | When a risky change (by nature of its criticality or the number of customers affected) needs to be released and there is a significant chance it could impact customers. When performing major service surgery, infrastructural changes, critical element changes, redesigns or launching major features. 8 | 9 | ## Why to Run 10 | It ensures the necessary metrics, behaviours and functionality are in place before releasing a feature to people, who will be less forgiving and tolerant of flaws and failures.
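As a rough illustration of the mechanics (a sketch only: the service names, the 5% ramp and the logging-as-metrics shortcut are all assumptions, not part of the play), a request handler can keep serving the existing code path while exercising the new one in the shadows and recording how it behaves.

```python
# Dark-launch sketch: users only ever receive the current implementation's
# result; the new implementation runs asynchronously for a fraction of
# requests so its behaviour and metrics can be observed safely.
import concurrent.futures
import logging
import random

logging.basicConfig(level=logging.INFO)
_executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def current_search(query: str) -> list:
    return [f"old-result-for:{query}"]

def new_search(query: str) -> list:
    return [f"new-result-for:{query}"]

def _shadow_call(query: str, served: list) -> None:
    try:
        candidate = new_search(query)
        # In a real system this would emit metrics to a live dashboard.
        logging.info("dark_launch query=%s matches_served=%s", query, candidate == served)
    except Exception:
        logging.exception("dark_launch new_search failed")

def handle_request(query: str, dark_launch_pct: float = 0.05) -> list:
    served = current_search(query)            # what the user actually sees
    if random.random() < dark_launch_pct:     # ramp this percentage up over time
        _executor.submit(_shadow_call, query, served)
    return served

print(handle_request("weekend trips"))
```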
11 | 12 | ## Roles 13 | * Developer Team 14 | * Product and Engineering Management 15 | 16 | ## How to Run 17 | * 1) Define key metrics and functionality to monitor - set up live dashboards for the team 18 | * 2) Determine how to asynchronously load the new service / feature while making it invisible to users 19 | * 3) Roll out on a traffic ramp-up procedure, starting at a very low % of traffic, and monitor both control and variant metrics 20 | * 4) Keep monitoring metrics and fixing forward until you can verify there's no breaking change in functionality so it can be safely released 21 | 22 | ## Tips and Resources 23 | * https://www.quora.com/What-is-a-dark-launch-in-terms-of-continuous-delivery-of-software 24 | * http://blog.launchdarkly.com/why-leading-companies-dark-launch/ 25 | 26 | ## Related plays: 27 | -------------------------------------------------------------------------------- /prioritisation_rice.md: -------------------------------------------------------------------------------- 1 | # RICE Prioritisation 2 | 3 | ## Intro 4 | Scoreboarding is not a one-size-fits-all prioritisation solution. There are natural risks associated with making decisions blindly, purely on their quantitative merits. However, RICE (Reach, Impact, Confidence and Effort) establishes the main dimensions upon which you can base product decisions. 5 | 6 | ## When to Run 7 | When there are many competing priorities with no clear visibility as to how decisions are made. When there is shiny object syndrome. When you need to communicate decision making around the reasons behind the deprioritisation of a feature request. 8 | 9 | ## Why to Run 10 | Teams often feel as though prioritisation is dark magic, arbitrary and swayed by the HiPPO (Highest Paid Person's Opinion). Having a clear strategy to pit competing priorities against each other is often helpful in illustrating why a certain piece of work should be picked up vs another. While there is a grey area between similar RICE estimates, this framework gives you a sense of how far you'd be diverging from the most impactful items because of shiny objects appearing before you and competing for your team's attention. 11 | 12 | ## Roles 13 | * Product Manager 14 | 15 | ## How to Run 16 | * 1) Establish your metrics 17 | ** Reach - How many people will see this change and how often in the defined timeframe. 18 | ** Impact - What's the expected impact for those customers 19 | ** Confidence - How confident are we that we'll see the desired impact 20 | ** Effort - How much effort do we need to put into the solution to get it live and verify its reach and impact. 21 | * 2) Plot them on a scale with your highest and lowest priority items at opposite ends of the scale 22 | * 3) For similarly scored items decide whether to prioritise risk or reward depending on the state of your product development. I.e. whether to give more weight to 1) reach and impact - rewards or 2) confidence and effort - risks. A small scoring sketch follows this list.
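The usual RICE score, as described in the Intercom article linked under Tips and Resources, multiplies reach, impact and confidence and divides by effort. The sketch below makes that arithmetic explicit; the backlog items, scales and numbers are made up for illustration.

```python
# RICE scoring sketch: score = reach * impact * confidence / effort.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    reach: float       # e.g. people affected per quarter
    impact: float      # e.g. 3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal
    confidence: float  # e.g. 1.0 = high, 0.8 = medium, 0.5 = low
    effort: float      # e.g. person-months

    @property
    def rice(self) -> float:
        return self.reach * self.impact * self.confidence / self.effort

backlog = [
    Candidate("Redesign checkout", reach=8000, impact=2, confidence=0.8, effort=4),
    Candidate("Add dark mode", reach=2000, impact=0.5, confidence=1.0, effort=1),
]

for item in sorted(backlog, key=lambda c: c.rice, reverse=True):
    print(f"{item.name}: RICE = {item.rice:,.0f}")
```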
23 | 24 | ## Tips and Resources 25 | * [RICE Prioritisation by Intercom](https://blog.intercom.com/rice-simple-prioritization-for-product-managers/) 26 | * [ICE Prioritisation](http://www.mmsonline.com/columns/use-ice-to-set-business-priorities) 27 | 28 | ## Related plays: 29 | * Prioritisation Toolkit 30 | -------------------------------------------------------------------------------- /product_increments.md: -------------------------------------------------------------------------------- 1 | # Product Increment Sessions 2 | 3 | ## Intro 4 | Splitting work into smaller yet meaningful chunks and using well-defined timeframes to deliver it underlies many of the agile methodologies in use today. It allows teams to focus on delivering value fast, but to change and adapt when that value hasn't been realised for users and customers. Many start-ups and internet economy companies are now scheduling their major product increments in 4 to 6 week cycles, considered optimal to allow for sufficient scope management by individual teams, while giving organizations the ability to monitor, adapt and pivot if required. 5 | 6 | ## When to Run 7 | Define your cycle lead time and run at your defined cadence. Experiment with cycle length, ensuring your team has enough time to deliver on their goal in the direction you require. 6 weeks is considered a good starting point. 8 | 9 | ## Why to Run 10 | Product Increments are not sprints; they should allow for flexibility, have higher level goals and give freedom of process to teams operating within this framework (e.g. teams can choose to use Kanban, Scrum, XP or any other technique). 11 | 12 | ## Roles 13 | * Whole product development team 14 | 15 | ## How to Run 16 | * 1) Have a prioritized list of high-level candidates you need to tackle - see [prioritization](). These should be problem statements or hypotheses that can be validated or invalidated in 6 weeks with an MVP or an increment on the current state of the product. 17 | * 2) Work with the team to develop an understanding of the desired outcome of the 6 weeks. Avoid lists of outputs; think instead of ideal end states, the progress you want customers to make once the problem is solved. 18 | * 3) Choose a definition play, diving into more details of the problem space - see [Defining](). 19 | * 4) In case there are multiple teams, choose an alignment play to prioritize, select and distribute the work among the teams, making sure there aren't any dependencies or conflicts and that execution aligns (whether through cooperation or competition) with the North Star metrics at play.
20 | 21 | 22 | ## Tips and Resources 23 | * [Intercom - Why 6 weeks is the Goldilocks of product timeframes](https://blog.intercom.com/6-week-cycle-for-product-teams/) 24 | * [Apple's Product Development Process](https://www.interaction-design.org/literature/article/apple-s-product-development-process-inside-the-world-s-greatest-design-organization) 25 | * [Basecamp - How we structure our work](https://m.signalvnoise.com/how-we-set-up-our-work-cbce3d3d9cae) 26 | * [Scaled Agile Framework's Program Increments](https://en.wikipedia.org/wiki/Scaled_Agile_Framework#Program) 27 | 28 | ## Related plays: 29 | -------------------------------------------------------------------------------- /critique.md: -------------------------------------------------------------------------------- 1 | # Critique 2 | 3 | ## Intro 4 | A critique is a timeboxed process for teams to review work in progress with a format designed to get candid, unbiased feedback from other specialists. You might run it with team members from different disciplines or bring in other specialists. 5 | 6 | ## When to Run 7 | A critique typically focuses on work in its design stage, when any decisions the critique focuses on can still be challenged and the work reversed / fine-tuned. Do it too early (ideas and solutions not shaped enough) or too late (costly to change your project), and its effectiveness is dramatically reduced. 8 | 9 | ## Why to Run 10 | A critique exposes flaws, biases, blindspots and fragilities in your process. These might be intentional or not, but bringing in a set of voices that haven't been exposed to the design allows you to strengthen and stress test your designs before they are materialized in a way that's too difficult to reverse. 11 | 12 | ## Roles 13 | * Presenter 14 | * Audience / Feedback team 15 | * Facilitator (optional) 16 | 17 | ## How to Run 18 | One way to run a critique that ensures feedback is as just and direct as possible and that everyone's voice is heard is as follows: 19 | 1) Assign the presenter with a timeslot (2-10 mins depending on the complexity of the proposal) 20 | 2) Put their designs up where everyone can see them and be exposed to their critical features - a physical representation is ideal but not necessary 21 | 3) Encourage team members to take lots of notes while the presenter is going through their designs 22 | 4) Once the presenter is finished give the team another 1-2 mins to go through their notes and review the designs in silence 23 | 5) Everyone lays their notes out or attaches them to the design (e.g. post-its on a printed version) 24 | 6) Every team member gets a minute to go through their notes, explain the rationale guiding them and get the presenter's feedback 25 | 7) A summary of key takeaways based on shared notes is shared by the presenter or a facilitator 26 | 27 | ## Tips and Resources 28 | * [How to Run a Design Critique](https://scottberkun.com/essays/23-how-to-run-a-design-critique/) 29 | * [Creativity Inc](https://www.amazon.co.uk/dp/0593070097/ref=cm_sw_em_r_mt_dp_U_CKS4Db9PCH9JE) 30 | * [GV Guide to Design Critiques](https://library.gv.com/guide-to-design-critique-86ebf499bed5?gi=9e058d758892) 31 | * [Fast Co - Want to build a culture of innovation?
master the design critique](https://medium.com/fast-company/want-to-build-a-culture-of-innovation-master-the-design-critique-2e302ee4a18a) 32 | 33 | ## Related plays: 34 | * [User Testing Sessions](https://github.com/colivetree/product-playbook/blob/master/user_testing.md) 35 | * [Confidence Check](https://github.com/colivetree/product-playbook/blob/master/confidence_check.md) 36 | -------------------------------------------------------------------------------- /assumption_mapping.md: -------------------------------------------------------------------------------- 1 | # Assumption Mapping 2 | 3 | ## Intro 4 | Every project starts with a set of assumptions. They do not always surface and not every team is able to call them out as such - uncertainty is scary. However, clearly identifying and classifying them helps guide your team through the naturally chaotic path of getting to a product delivery. Assumption Mapping is an exercise that forces you to state what you think as assumptions, grade them according to impact and certainty and then work to validate them starting with the most risky of them - getting into the habit of de-risking projects by delivering perceived value early and often. 5 | 6 | ## When to Run 7 | At the beginning of a project, when introducing a particularly complex or risky feature, or even after brainstorming competing ideas. 8 | 9 | ## Why to Run 10 | Forces teams to state their ideas as assumptions that need to be validated with customers and other stakeholders. Creates a natural prioritization framework that looks at impact and uncertainty first. Sets the mood to align teams around their single riskiest assumption - one at a time. 11 | 12 | ## Roles 13 | * Facilitator 14 | * Team 15 | 16 | ## How to Run 17 | 1) Set the scene - create meaningful assumption categories / themes and provide examples. One split could be to categorize assumptions in terms of Desirability, Feasibility and Viability. Another split could look at these in terms of Desirability, Feasibility and Team Happiness. These should give you an idea of the types of example questions you should ask to prompt meaningful assumptions to arise. 18 | 2) Give your team 20-30 minutes to raise assumptions. Start with example questions and a couple of assumptions for each category and let them drive and discuss as they go along. 19 | 3) Identify any duplicate assumptions and cluster them. 20 | 4) Draw a chart with your spectrums of impact and certainty as two axes, forming 4 quadrants. One axis should go from least to most impactful, the other from completely certain to very uncertain. 21 | 5) Go through your assumptions, one at a time, and bring them into your chart where you and the team believe they should lie. 22 | 6) After the assumptions are raised, start with the most impactful and uncertain ones, then move counter-clockwise (see the [printable assumption map](https://github.com/colivetree/product-playbook/raw/master/images/assumption-mapping.png)). 23 | Devise actions to validate or invalidate them - your exercise is done, and your priorities have emerged in an organic form. A small prioritisation sketch follows this list.
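A hedged way to turn the mapped chart into a working order (the 1-5 scales and example assumptions below are illustrative, not part of the play) is to score each assumption by impact and by how uncertain it still is, and test the highest-scoring ones first.

```python
# Ordering mapped assumptions so that high-impact, high-uncertainty items
# surface first. Scales: impact 1-5 (low-high), certainty 1-5 (uncertain-certain).
assumptions = [
    {"text": "Users will pay for premium filters", "impact": 5, "certainty": 1},
    {"text": "Users understand the onboarding copy", "impact": 3, "certainty": 4},
    {"text": "The API can handle peak traffic", "impact": 4, "certainty": 2},
]

def risk(assumption: dict) -> int:
    # Higher impact and lower certainty means it should be tested sooner.
    return assumption["impact"] * (6 - assumption["certainty"])

for a in sorted(assumptions, key=risk, reverse=True):
    print(f"risk={risk(a):>2}  {a['text']}")
```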
24 | 25 | ## Tips and Resources 26 | * [Assumptions Mapping](https://www.slideshare.net/7thpixel/introduction-to-assumptions-mapping-agile2016) 27 | * [Diana Kander's How to Diagnose Your Riskiest Assumptions](https://dkander.wordpress.com/2013/05/07/how-to-diagnose-your-riskiest-assumptions/) 28 | 29 | 30 | ## Related plays: 31 | * [Riskiest Assumption Test - RAT](https://github.com/colivetree/product-playbook/blob/master/riskiest_assumption_test.md) 32 | -------------------------------------------------------------------------------- /working_backwards.md: -------------------------------------------------------------------------------- 1 | # Working Backwards 2 | 3 | ## Intro 4 | Start from the actions you would take when the product is finished, in order to align the team around a vision of the finished product. 5 | 6 | ## When to Run 7 | When you need to break down the high-level problem to be solved and visualize the finished product. This should be run when a team is starting to solve a problem or when it is stuck in the solution space with no results. 8 | 9 | ## Why to Run 10 | The Working Backwards product definition process is all about fleshing out the concept and achieving clarity of thought about what we will ultimately go off and build. 11 | 12 | ## Roles 13 | - Mandatory: 14 | o Session Facilitator 15 | o Team Members 16 | 17 | ## How to Run 18 | 1. Start by writing the Press Release. Nail it. The press release describes in a simple way what the product does and why it exists - what the features and benefits are. It needs to be very clear and to the point. Writing a press release up front clarifies how the world will see the product - not just how we think about it internally. 19 | 2. Write a Frequently Asked Questions document. Here's where we add meat to the skeleton provided by the press release. It includes questions that came up when we wrote the press release. You would include questions that other folks asked when you shared the press release and you include questions that define what the product is good for. You put yourself in the shoes of someone using the product and consider all the questions you would have. 20 | 3. Define the customer experience. Describe in precise detail the customer experience for the different things a customer might do with the product. For products with a user interface, we would build mock ups of each screen that the customer uses. For web services, we write use cases, including code snippets, which describe ways you can imagine people using the product. The goal here is to tell stories of how a customer is solving their problems using the product. 21 | 4. Write the User Manual. The user manual is what a customer will use to really find out about what the product is and how they will use it. The user manual typically has three sections: concepts, how-to, and reference, which between them tell the customer everything they need to know to use the product. For products with more than one kind of user, we write more than one user manual. 22 | 23 | ## Tips and Resources 24 | * https://www.quora.com/Amazon-company-What-is-Amazons-approach-to-product-development-and-product-management/answer/Ian-McAllister 25 | * http://www.allthingsdistributed.com/2006/11/working_backwards.html 26 | * https://medium.com/bluesoft-labs/try-an-internal-press-release-before-starting-new-products-867703682934#.7iynwgu0t 27 | * [What is your product all about?
– Prototyping: From UX to Front End](https://blog.prototypr.io/developing-products-by-questioning-the-vision-41ad14b2555e) 28 | * https://www.linkedin.com/pulse/working-backwards-press-release-template-example-ian-mcallister/ 29 | 30 | ## Related plays: 31 | * Jobs-to-be-Done Sessions 32 | * [Design Sprint](https://github.com/colivetree/product-playbook/blob/master/design_sprint.md) 33 | -------------------------------------------------------------------------------- /riskiest_assumption_test.md: -------------------------------------------------------------------------------- 1 | # Riskiest Assumption Test 2 | 3 | ## Intro 4 | One of the core tenets of the Lean Startup is that business assumptions need to be tested and that an MVP is a good mechanism to prove or disprove a hypothesis that could make or break a product. The Riskiest Assumption Test is a framework for categorising and prioritising those assumptions according to the degree of risk/criticality to product-market fit that they entail. 5 | 6 | ## When to Run 7 | Whenever a product is in heavy discovery and experimentation mode. Whenever a new initiative or development thread arises and needs validation from a problem, solution or execution point of view. Whenever there are multiple competing priorities in the backlog that seem large enough to make a difference and all fall broadly within the scope of the unique value proposition of a company or product. 8 | 9 | ## Why to Run 10 | Product-market fit is hard. Assumptions are often wrong to some degree and can miss the mark in terms of customer demand, competitive forces at play, attractiveness, price-point, brand, technical risk or a number of other factors. The Riskiest Assumption Test allows you to identify and clarify what your riskiest assumption is and prioritize your current execution for validating what is currently the biggest potential obstacle in the way of your start-up dreams. 11 | 12 | ## Roles 13 | * Product Manager 14 | 15 | ## How to Run 16 | * 1) List your Assumptions, classify them as Problem (is there a market need?), Solution (does this answer that need?) or Execution (is this the best thing we could/should be building to solve it?). These work in a priority order themselves, i.e. if there isn't a problem, there doesn't need to be a solution; if there isn't a solution, there doesn't need to be an implementation. 17 | * 2) Formulate them as hypotheses - see [Experimentation](https://github.com/colivetree/product-playbook/blob/master/experimentation.md) for a way to do so. 18 | * 3) Classify their risk. There are 2 standard measures for risk assessment here: Likelihood and Severity; these work on a spectrum. Respectively, they go from "Very Unlikely" <-> "Extremely Likely" and from "Somewhat Annoying" <-> "Idea Killer". 19 | * 4) Prioritise them based on that assessment: 20 | ** 4.1) Problems > Solutions > Implementations 21 | ** 4.2) Extremely Likely Idea Killers > Very Unlikely Somewhat Annoying Stuff. 22 | * 5) Find a Validation Mechanism for your RAT: A Prototype, A Landing Page, A Sales Pitch, A Survey, A Qualitative Test and many others can all be good candidate plays for validating your riskiest assumption. A small ranking sketch for steps 3 and 4 follows this list.
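The prioritisation in steps 3 and 4 can be expressed as a simple ranking. The sketch below is illustrative only: the 1-5 scales and the example entries are made up, not part of the play.

```python
# Rank assumptions by category precedence (Problem > Solution > Execution),
# then by likelihood x severity of being wrong.
CATEGORY_RANK = {"problem": 0, "solution": 1, "execution": 2}

assumptions = [
    {"text": "Travellers want packaged weekend trips", "category": "problem",
     "likelihood": 4, "severity": 5},
    {"text": "A chat interface is the right solution", "category": "solution",
     "likelihood": 3, "severity": 4},
    {"text": "We can ship it with the current team", "category": "execution",
     "likelihood": 2, "severity": 3},
]

ranked = sorted(
    assumptions,
    key=lambda a: (CATEGORY_RANK[a["category"]], -(a["likelihood"] * a["severity"])),
)

for a in ranked:
    print(f"{a['category']:<9} risk={a['likelihood'] * a['severity']:>2}  {a['text']}")
```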
23 | 24 | ## Tips and Resources 25 | * [Rik Higham's The MVP is Dead](https://hackernoon.com/the-mvp-is-dead-long-live-the-rat-233d5d16ab02) 26 | * [Strategyzer's Lean Startup Essentials](http://blog.strategyzer.com/posts/2015/4/23/5-lean-startup-essentials-to-reduce-risk-and-uncertainty) 27 | * [Diana Kander's How to Diagnose Your Riskiest Assumptions](https://dkander.wordpress.com/2013/05/07/how-to-diagnose-your-riskiest-assumptions/) 28 | * [Laura Klein - Identify and Validate Your Riskiest Assumptions](https://www.youtube.com/watch?v=SrzJqsedjC0) 29 | 30 | ## Related plays: 31 | -------------------------------------------------------------------------------- /user_testing.md: -------------------------------------------------------------------------------- 1 | # User Testing Sessions 2 | 3 | ## Intro 4 | User Testing is a qualitative validation mechanism for your product. It involves talking to prospective customers about their past experiences solving a similar or equivalent problem to that being tested, understanding how they've done it and then getting them to use our solution and talk through their issues, feelings and thoughts while doing so. It helps us answer whether we're solving a real problem, what is wrong with the prototyped approach and how / if it can constitute a replacement for their current solution to the Job To Be Done. 5 | 6 | ## When to Run 7 | User Testing sessions are useful since they can be run at a very early stage of fidelity. They allow us to accelerate feedback loops and discover design flaws earlier. They can be run on new features, proposed changes, experiments and even in the main product line to deepen the team's knowledge of how the product is used and what should be launched. 8 | 9 | ## Why to Run 10 | To get early feedback on an implementation 11 | To understand how and why customers would use your product 12 | To understand how the product allows the customer to make progress and how it fits into their model for problem-solving 13 | To quickly get feedback on what might be preventing customers from making progress with the product 14 | 15 | ## Roles 16 | * User Researcher 17 | * Product Team 18 | 19 | ## How to Run 20 | * 1) Identify the test to be run, focusing on a particular problem to be solved or job to be done addressed by your solution 21 | * 2) Recruit users who fall into the group of people we believe might be struggling with the problem identified 22 | * 3) Have a researcher perform the interview. In smaller teams this might not be feasible, but it is critical that whoever runs the interview can adopt an unbiased stance to focus the interview. 23 | * 4) The researcher / interviewer should follow a script with the questions and list of problems they'd want the subject to test during the session. 24 | * 5) If possible, the team should watch the interviews in a separate room and collect and take notes on the customer's feedback. A simple but effective framework for those notes is to register the customer's background and what they're thinking, feeling and doing while using the product. 25 | * 6) The interviewer should focus the session on the job at hand, how the customer usually solves it, what they'd otherwise do and how they interact with the product. The session should not be focused on hypotheticals such as whether the potential customer would use or pay for the product. 26 | * 7) The interviewee should be encouraged to voice their opinions and thoughts and to walk the interviewer through the actions they are taking in a descriptive manner.
It's up to the interviewer to steer the conversation in the direction of the problem to be solved when it strays to areas that are not the focus of the current research. 27 | 28 | ## Tips and Resources 29 | * https://www.usability.gov/how-to-and-tools/methods/running-usability-tests.html 30 | * https://medium.com/@jhreha/how-to-run-a-quick-effective-user-test-for-25-or-less-bc2cf3706787 31 | * https://www.uxpin.com/studio/blog/how-to-run-an-insightful-usability-test/ 32 | * https://www.usertesting.com/ 33 | * https://www.nngroup.com/articles/live-intercept-remote-test/ 34 | * https://usabilityhub.com/ 35 | ## Related plays: 36 | -------------------------------------------------------------------------------- /OKR.md: -------------------------------------------------------------------------------- 1 | # OKR 2 | 3 | ## Intro 4 | OKRs are a well-known technique, popularised after Google successfully deployed them across the organisation as a measurable, tangible goal-setting exercise. OKR stands for Objectives and Key Results, laying out a top-line goal along with a set of quantitative key results that support its successful achievement. They're recognised for their use in setting organisational stretch goals and for providing adequate visibility of goal-setting across an organisation. 5 | 6 | ## When to Run 7 | When setting goals, typically on large timeframes. Although they can be used for Product Increments and even shorter cycles, they are better at providing direction along Quarterly (or longer) cycles, by setting metrics that move the needle and typically remain valid for the length of their application. 8 | 9 | ## Why to Run 10 | Running OKRs provides very clear and tangible objectives for a team, and their hierarchical organisational structure guarantees teams realize their connection to the larger organizational progress and executive level goal attainment. They provide directional stretch goals that are hard to obtain and often involve deep problem space exploration, while establishing clear outputs that would move the needle at an organisational / departmental level. 11 | 12 | ## Roles 13 | * Whole product development team 14 | * Business leadership 15 | 16 | ## How to Run 17 | * 1) Develop an understanding of the business / department goal that overarches the team's 18 | * 2) Understand as a team which are the most impactful initiatives the team can undertake that contribute to the goal stated above. Clearly state those initiatives in the form of Objectives and Key Results. These answer the questions - "What problem are we solving?" and "How will we know we've solved it?" 19 | * 3) Key Results should be stretch goals. OKRs should be uncomfortable and feel like they are hard to achieve. As a rule of thumb, achieving 70+% of their stated outcome should be considered a success. More than that and they were probably too humble; less and they were probably too ambitious, which can be demoralizing. 20 | * 4) Teams should be prepared to defend their OKRs and how they connect to the overarching goal to business leadership, who are responsible for challenging the OKRs and their impact on the organisation. Ensure OKRs are interpreted correctly as directional guard-rails and can be challenged if they are discovered to be inept or their connection to the higher level OKR is weaker than assumed.
The team should reserve a portion of their time for other non-OKR activities, as technical debt, discovery exercises and product and engineering broken windows need to be accounted for. Evolving business needs and interactions with other teams' goals discovered during the running cycle should also be addressed during non-OKR time. This should typically be 30-40% of total team time. 21 | 22 | ## Tips and Resources 23 | * [Rick Klau's OKR Session](https://www.youtube.com/watch?v=mJB83EZtAjc) 24 | * [re:Work with Google](https://rework.withgoogle.com/guides/set-goals-with-okrs/steps/introduction/) 25 | * [Using OKRs by @niket](https://medium.com/startup-tools/okrs-5afdc298bc28) 26 | * [OKRs Didn't Work For Us by Contactually](http://blog.contactually.com/inside-contactually-okrs-didnt-work-for-us/) 27 | 28 | ## Related plays: 29 | -------------------------------------------------------------------------------- /job_canvas.md: -------------------------------------------------------------------------------- 1 | # Job Canvas 2 | 3 | ## Intro 4 | The Job Canvas is a tool to align a team behind a high-level job-to-be-done (JTBD). Even when that job is conceptually clear and can be defined, there are lots of moving parts required to execute on it for a customer. The Job Canvas is a visualisation and a framework to think about delivering a job to be done. 5 | 6 | ## When to Run 7 | This play can be run when you need a tool to clarify the problem you're trying to solve with your team. It can be printed and placed next to the team, used digitally (it's easy to replicate in Trello, for example) or it can be used as a starting point for a project within a session with your team. 8 | 9 | ## Why to Run 10 | It stimulates collaboration out in the open, and focuses on learning by delivering incremental value to customers. It stems from the need to deeply understand customers' motivation to hire your product for the job and gives the team constant visibility on the current state of play. 11 | 12 | ## Roles 13 | - Mandatory: 14 | o Session Facilitator (Product Manager) 15 | o Team Members 16 | 17 | ## How to Run 18 | 19 | * 1. State the high-level JTBD. One of the ways to state it is to use Alan Klement's formulation of "When <[ A moment of frustration ]>, <[ how progress is visualized ]> so I can <[ how life is better ]>" 20 | * 2. Gather some of the insights you have collected from your customers related to that JTBD. They can be moments of struggle, motivations, or data that you've observed that proves or could potentially disprove the job. Place them in the insights column. 21 | * 3. Define your hypotheses. They represent an assumption around what some of the smaller problems are and how they can be solved. A nice format for these can be found using the [Hypothesis Kit](http://experimentationhub.com/hypothesis-kit.html). 22 | * 4. Define your measure of success. This is important to do at the beginning. Iterate only when you have either moved closer to your vision by achieving your near-term goal or invalidated parts of your execution strategy, changing what success should look like. 23 | * 5. Draft your UX / your architecture. This section gives your team an understanding of what you believe your solution should look like. It can be a rough mock, a hi-fidelity prototype, a flow diagram or a system architecture. It should represent your current belief and should increase in fidelity and resolution over time. 24 | * 6. Define your experiments.
These can be qualitative or quantitative, and you can create 3 spaces within it (your current stack of riskiest assumption tests, your successes and your failures) if you want to. The goal is to give you visibility into what you've done already to solve the job, what worked and what didn't and what you want to try next. 25 | 26 | ## Tips and Resources 27 | * [Printable Job Canvas](https://github.com/colivetree/product-playbook/raw/master/images/job-canvas.png "Job Canvas") 28 | * [Replacing the User Story with the Job Story](https://jtbd.info/replacing-the-user-story-with-the-job-story-af7cdee10c27) 29 | * [Job Story Format](https://medium.com/@alanklement/definitely-interesting-thanks-for-your-input-7f05edab4250) 30 | * [Hypothesis Kit](http://experimentationhub.com/hypothesis-kit.html) 31 | * [How Not To Run an AB Test](http://www.evanmiller.org/how-not-to-run-an-ab-test.html) 32 | 33 | 34 | ## Related plays: 35 | * Jobs-to-be-Done Interviews 36 | * Product Kata 37 | -------------------------------------------------------------------------------- /opportunity_solution_tree.md: -------------------------------------------------------------------------------- 1 | # Opportunity Solution Tree 2 | 3 | ## Intro 4 | The Opportunity Solution Tree is an exploration exercise popularised by Teresa Torres that provides a working method for teams looking to solve consumer problems. Teresa recommends teams start by looking at their desired outcome and then connect those outcomes to the Opportunities they have at hand. These should come from research or data and they need to clearly present themselves as means to achieve the desired outcome. Solutions are the next step, and they should also clearly align with each of the Opportunities. If they don't, the team should discard them immediately. Under each proposed solution, teams look at defining experiments that validate their feasibility, viability and desirability, as a means to test their riskiest assumptions. 5 | 6 | ## When to Run 7 | When considering how to achieve a new goal, when struggling to find a mechanism to tackle a large solution space for a given problem, when building an experimentation backlog to track a stretch goal (like an OKR). 8 | 9 | ## Why to Run 10 | Going from a goal to a set of actionable items is hard work. Teresa's framework gives Product people the tools to turn these actions into tangible hypotheses that teams can start testing, even in moments of limited information, where they'd have to prioritise opportunities largely based on gut feel. 11 | 12 | ## Roles 13 | * Product Manager 14 | * Team - bring in as many reference frames as possible (Data, Design, Research, etc.) 15 | 16 | ## How to Run 17 | 1) State your *Desired Outcome* clearly. An example could be: "Users can find the right product in less than 4 taps" 18 | 2) Start laying out your *Opportunities*. Look at any data or research insights you may have collected. If you start solutionizing but can't quite put your finger on why (other than gut feeling) that solution achieves your desired outcome, try to look at your problem space again, to clarify the opportunity that would require it. 19 | 3) Go wide on *Solutions*, ask the team to think laterally and come up with creative ways to solve that same problem. They should clearly align with the *Opportunities* you have identified. If they don't, discard them. At this point your teams should be developing a good notion of which opportunities look the most promising.
20 | 4) Rank your *Opportunities* to develop a better notion of where you'd want to start experimenting. 21 | 5) Start from the highest ranked opportunities and think of ways to validate your solutions; think about what your riskiest assumptions are. You'll be devising *Experiments* that you could run to ensure, as cheaply as possible, that the opportunity is real, and your solutions resonate with users. 22 | 6) Pick your most impactful *Experiments*. Go out and start testing things! 23 | 24 | ## Tips and Resources 25 | * [Product Talk - Opportunity Solution Tree](https://www.producttalk.org/2016/08/opportunity-solution-tree/) 26 | * [OST Layout](https://www.producttalk.org/wp-content/uploads/2016/08/opportunity-solution-tree-1.png) 27 | * [Intro to Modern Product Discovery](https://medium.com/productized-blog/an-introduction-to-modern-product-discovery-by-teresa-torres-productizedconf-bb2703b01fdb) 28 | 29 | 30 | ## Related plays: 31 | * [Riskiest Assumption Test - RAT](https://github.com/colivetree/product-playbook/blob/master/riskiest_assumption_test.md) 32 | * [Experimentation](https://github.com/colivetree/product-playbook/blob/master/experimentation.md) 33 | -------------------------------------------------------------------------------- /kano_model.md: -------------------------------------------------------------------------------- 1 | # Kano Model 2 | 3 | ## Intro 4 | Applying the Kano Model (named after professor [Noriaki Kano](https://en.wikipedia.org/wiki/Noriaki_Kano)) to understand and define customer needs can tell you a lot about which attributes of your product are essential, superfluous, delightful or just indifferent. It can also tell you when what you're proposing is just plainly the opposite of what customers believe they want. It gives us a practical scoring framework for product characteristics and attributes, which can help prioritize features, remove them from the product or just narrow the team's focus down to the features that are most likely to correlate with strong user satisfaction. 5 | 6 | ## When to Run 7 | When developing a deeper understanding of the product, when trying to fight feature bloat or scope creep, when prioritising between competing functionality, when tailoring the product to a certain cohort of users. 8 | 9 | ## Why to Run 10 | The Kano Model is an intuitive way to think about customer needs and their satisfaction with the execution of your product (or its current prototype). It provides a measurable way to think about the relationship between customer needs and the feature set you believe your product requires to achieve market fit. 11 | 12 | ## Roles 13 | * Product Manager / User Researcher 14 | * Surveyed User / Test Group 15 | 16 | ## How to Run 17 | * 1) Define the set of questions you want to ask your users. These should be stated in relation to the features or ideas you want to put to the test. Phrase each set of questions per feature in 3 ways: Functional, Dysfunctional and Impact Scale. 18 | ** An example for a bookmarking feature for a video streaming website might be: 19 | *** FUNCTIONAL: If you can bookmark your favourite videos, how do you feel? 1-5 scale, expressed qualitatively. 20 | *** DYSFUNCTIONAL: If you can't bookmark videos, how do you feel? 21 | *** IMPACT SCALE: How important is it or would it be if you could bookmark videos? 22 | * 2) Add any illustrations/prototypes or carefully provide assisting data to your question, which can help the customer visualize what your proposal is.
Distribute the survey with any customer feedback tool or simply using Google Forms, SurveyMonkey or Typeform. 23 | * 3) Analyze the results, plot the customers' answers and identify where in the spectrum your features lie and what their relative importance is. Prioritise feature execution or improvement accordingly. A good analysis spreadsheet is provided by Daniel Zacarias, author of Folding Burritos, mentioned below in Tips and Resources. 24 | * 4) Note that users' evaluation of features changes over time. In what's commonly called the Decay of Delight effect, it is natural for initially delightful features to evolve into must-haves (due to natural customer adaptation, competitors copying and making the feature standard and expected, among other reasons). 25 | 26 | ## Tips and Resources 27 | * [Folding Burritos - The Kano Model](https://foldingburritos.com/kano-model/) 28 | * [Mind The Product - Using the Kano Model](http://www.mindtheproduct.com/2013/07/using-the-kano-model-to-prioritize-product-development/) 29 | * [iSixSigma - Customer Needs Are Ever Changing](https://www.isixsigma.com/tools-templates/kano-analysis/kano-analysis-customer-needs-are-ever-changing/) 30 | * [Center for Quality of Management Journal](http://www.walden-family.com/public/cqm-journal/2-4-Whole-Issue.pdf) 31 | * [UX Mag - Leveraging the Kano Model for Optimal Results](http://uxmag.com/articles/leveraging-the-kano-model-for-optimal-results) 32 | 33 | ## Related plays: 34 | -------------------------------------------------------------------------------- /jtbd_interview.md: -------------------------------------------------------------------------------- 1 | # JTBD Interview 2 | 3 | ## Intro 4 | A JTBD Interview is one form of user interview meant to surface users' motivation, situational context and demand-side decision parameters involved in a purchasing decision. 5 | This allows you to generate a model of how products fit into users' lives, what they are hired to do, what forces shape adoption (both positive and negative) and how you can build and distribute products that allow customers to make real progress. 6 | 7 | ## When to Run 8 | This is a session that, like other forms of UXR, can be conducted at any point in order to capture better detail about usage and adoption of your product, understand your role in users' lives and what problems you solve for them, 9 | but is particularly suited to helping you shape your roadmap, prioritize your company / team-level bets and start shaping your UX to suit key user needs and motivations. 10 | 11 | ## Why to Run 12 | A set of JTBD interviews allows you to build a solid map of the jobs users hire your product for, which then allows you to build better products that fit the particulars of people's lives. 13 | 14 | ## Roles 15 | * Interviewer 16 | * Interviewee 17 | * Interviewing Team 18 | 19 | ## How to Run 20 | 0) The primary goal of a JTBD interview is to discover the primary forces 1) driving the user towards purchase and 2) steering them away from it. The way to map these is to try to reconstruct the user's timeline to buying the product (which can go back years) and understand the compounding effect of these forces in the purchase decision. 21 | 1) To reconstruct the timeline, start backwards from the purchase decision and work with the user to reconstruct the various moments that preceded it. 22 | Don't be afraid to get into the weeds with them.
When exactly did they buy it? Where were they? Who were they with? When did they first learn about it, and how? What else did they consider? How much did it cost? How did they decide it was worth spending that money? What were they looking for when they bought it? How are they using it? 23 | 2) This conversation should paint a clear picture of the 4 types of forces: Push, Pull, Habits and Anxieties. The first two are positive, and compel the user to buy the product. The latter 2 are negative, and drive the user away from it. Mapping these will allow you to uncover the moments, motivations and outcomes users want from the product - hence why they hire it and what for. 24 | 3) After the conversation with your user, debrief with the team and categorize your insight into the 4 types of forces described above. Use the format: 25 | "When I want to..." for pushes, and "so I can..." for pulls. 26 | 4) A lot of teams then try to synthesize and prioritize JTBD via clustering techniques. Some interesting ones are a simple Euclidean method like the one presented by Ryan Singer, the one contained in Strategyn's Outcome-Driven Innovation, or a "simpler" team exercise in synthesis. See the resources below for detail. 27 | 28 | ## Tips and Resources 29 | * https://www.intercom.com/blog/podcasts/podcast-bob-moesta-on-jobs-to-be-done/ 30 | * https://www.youtube.com/watch?v=ek0yAdEkbgA 31 | * https://soundcloud.com/jobstobedone/mattress-interview-live-jtbd-interview-debrief-analysis-jasonfried 32 | * https://twitter.com/rjs/status/829426195183894532/photo/1 33 | * https://pt.slideshare.net/marklittlewood/bob-moesta-founder-the-rewired-group-jtbd-workshop 34 | * https://www.youtube.com/watch?v=qQFUHapOJsQ 35 | * https://www.youtube.com/watch?v=2ecwXEnQ6xY 36 | * [JTBD Forces Diagram](https://github.com/colivetree/product-playbook/blob/master/images/jtbd-forces.png) 37 | -------------------------------------------------------------------------------- /experimentation.md: -------------------------------------------------------------------------------- 1 | # Experimentation 2 | 3 | ## Intro 4 | Experimentation has grown to be widely adopted by all major internet economy companies as a quantitative validation tool. The most well-known form of experimentation is AB Testing, where the regular version of a product (called Control) is compared against a modified version (the B Variant or the Challenger) and checked for effects on a company's guiding metrics (or Overall Evaluation Criteria, OEC). The results are then measured and teams make a decision on whether to incorporate the new changes into the main product. Typically teams will run a series of experiments, iterating on a concept, to validate (or invalidate) their hypotheses and solutions. 5 | 6 | ## When to Run 7 | Experimentation is typically run when teams have enough traffic to empirically and scientifically test whether a change will produce the intended effects with a cohort of real users - the change's target users. Teams using experimentation should ensure they have adequate levels of traffic, that they understand the problem they are trying to solve and that the impact of their changes can and should be measured through an experiment. 8 | 9 | ## Why to Run 10 | Experimentation allows teams to assess with a pre-defined level of certainty whether a change to a product will produce its intended effect.
Typically teams choose the standard level of confidence used in scientific experimentation (95%) and run the experiment for a number of business cycles on a large enough group of users to verify whether an impact is produced. This allows teams to test their assumptions cheaply, rather than assuming their ideas (or their execution) will work for users and spending months of development effort on them. It's a powerful learning tool to understand how people use a product. 11 | 12 | ## Roles 13 | * Development team 14 | * Users 15 | 16 | ## How to Run 17 | 18 | * 0) Familiarize yourself with common experimentation pitfalls. This play assumes a *frequentist* approach to AB Testing. For a Bayesian approach you need to establish priors, which is not covered in this guide. 19 | * 1) Define a hypothesis based on a known insight or customer problem - a good resource to understand how these should be formulated is the [hypothesis kit](http://www.experimentationhub.com/hypothesis-kit.html) 20 | * 2) Establish how that hypothesis can be tested - what changes need to be made and what metrics (OEC) it will affect. 21 | * 3) Determine the Minimum Detectable Effect (MDE), the smallest change to the metrics that you want to be able to detect within a certain timeframe (which should also be defined). A good resource is the [AB Test Guide Calculator's Pre-Test Analysis](https://abtestguide.com/calc/). Choose a two-sided test if you wish to be confident about negative effects as well. Typically, tests are run at 95% confidence (p<0.05). 22 | * 4) Run the experiment for the defined period of time - NOT until you reach statistical significance. 23 | * 5) Collect results and analyse them according to your OEC. 24 | * 6) Choose to accept or reject your experiment results, incorporating them into your main codebase. 25 | 26 | ## Tips and Resources 27 | * [Evan Miller - How not to run an A/B Test](http://www.evanmiller.org/how-not-to-run-an-ab-test.html) 28 | * [Microsoft Experimentation](http://exp-platform.com/large-scale/) 29 | * [Experimentation Hub](http://www.experimentationhub.com) 30 | * [Experimentation at Airbnb]( https://medium.com/airbnb-engineering/experiments-at-airbnb-e2db3abf39e7) 31 | * [Booking.com's Chasing Statistical Ghosts]( https://blog.booking.com/is-your-ab-testing-effort-just-chasing-statistical-ghosts.html) 32 | * [Dan McKinley - Design for Continuous Experimentation]( http://mcfunley.com/design-for-continuous-experimentation) 33 | * [Hilary Robert's Science and Sensibility - Thoughts on Experimentation and Growth](https://vimeo.com/189598824) 34 | * [ConversionXL - Frequentist vs Bayesian Experimentation](https://conversionxl.com/bayesian-frequentist-ab-testing/) 35 | 36 | ## Related plays: 37 | -------------------------------------------------------------------------------- /product_kata.md: -------------------------------------------------------------------------------- 1 | # Product Kata, by Melissa Perri 2 | 3 | ## Intro 4 | Inspired by the Toyota Kata, and its subsequent application in Kanban by Hakan Forss, the Product Kata, proposed by [Melissa Perri](http://melissaperri.com/), is a no-frills approach to product development, which gives teams a systematic approach to look at their daily activities to discover customer problems, hypothesise, test, learn and adapt (or Plan, Do, Check, Act) - all iterating towards a near-term goal (the Next Target Condition), the next small step in the pursuit of a large-scale vision.
It's a procedural framework that does not make any assumptions about the underlying agile methodology but keeps teams in focus and in flow, one goal at a time, while allowing them to pivot in a structured manner in case the current hypothesis is invalidated. 5 | 6 | ## When to Run 7 | Most products spend the vast majority of their time in discovery mode: after uncovering a high-level problem, teams work hard to understand the details, the jobs-to-be-done, what the solution would be and how to implement their version of it. The Product Kata is a very good way to state the goals, present the actions and align on the findings while doing product discovery at full steam. 8 | 9 | ## Why to Run 10 | It provides teams with a methodology-agnostic way to discover the next steps, align on the high-level goal and the next small step towards it, while forcing teams to clearly outline their current status, obstacle, measure and expectations. 11 | 12 | ## Roles 13 | * Product Manager 14 | * Product Development Team 15 | 16 | ## How to Run 17 | * 0) Set the **Direction**. This is the Company Goal, OKR, or other top-line metric we are looking to improve over a (potentially) long period of time. Example: Have a 1-click-checkout solution that improves conversion rate by 20%. 18 | * 1) Understand the **Current Condition**. In a product development environment, this means understanding what users are doing at the time. Collecting these insights can be done through data collection, user interviews, usability testing, surveys and other feedback mechanisms. Example: Repeat customers now have to go through the whole form even though they've already filled in their details. 19 | * 2) Define the **Next Target Condition**. Understanding your ideal end state allows you to target an intermediate stage that feels achievable and represents an improvement over your current condition. Example: We want to improve conversion for repeat users by 5%, by having their details available within the current checkout flow. 20 | * 3) Define the **Obstacle**. This can be any internal or external factor preventing you from achieving your Next Target Condition or from making progress towards it. Example: We don't know if introducing a sign-up mechanism will affect conversion for first-time users. 21 | * 4) Define the **Next Step**: Whatever the next action is that the team is going to take to overcome the obstacle. Example: Start asking for a password and introduce a registration button for first-time users. 22 | * 5) Define the **Expected Outcome**: What we expect will happen once we perform the step. Example: We expect to maintain levels of form progression as the additional field will not constitute a barrier to completion. 23 | * 6) Evaluate what we have **Learned**: This is what we have observed in light of our expected outcome. In this example: we have seen a 1% relative drop in conversion for first-time users. We want to observe the effect of having saved details for returning users to understand whether the overall effect is net positive. 24 | * 7) Ask whether you've met the Target Condition. If not, repeat the process from #2.
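If it helps to make the loop concrete, below is a minimal, hypothetical sketch of how a team could record kata iterations as structured data. The field names and the `target_met` flag are illustrative assumptions, not part of Melissa Perri's framework; treat it as a note-taking aid rather than a prescribed tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class KataIteration:
    current_condition: str   # step 1: what users are doing today
    obstacle: str            # step 3: what is preventing progress
    next_step: str           # step 4: the action the team will take
    expected_outcome: str    # step 5: what we expect to happen
    learned: str = ""        # step 6: filled in after evaluating the step

@dataclass
class ProductKata:
    direction: str                 # step 0: long-term goal, e.g. an OKR
    next_target_condition: str     # step 2: near-term, achievable improvement
    target_met: bool = False       # flip to True once evidence supports it
    iterations: List[KataIteration] = field(default_factory=list)

    def log(self, iteration: KataIteration) -> None:
        """Record one pass through the loop (steps 1-6)."""
        self.iterations.append(iteration)

# Example usage, loosely based on the checkout example above
kata = ProductKata(
    direction="1-click-checkout solution that improves conversion rate by 20%",
    next_target_condition="Improve repeat-user conversion by 5% via saved details",
)
kata.log(KataIteration(
    current_condition="Repeat customers re-enter all their details at checkout",
    obstacle="Unknown impact of a sign-up mechanism on first-time conversion",
    next_step="Ask for a password and add a registration button for first-time users",
    expected_outcome="Form progression stays flat",
    learned="~1% relative drop for first-time users; check whether the net effect is positive",
))
# If the Target Condition is still not met, repeat from step 2 with a new iteration.
```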
25 | 26 | ## Tips and Resources 27 | 28 | * [Melissa Perri - The Product Kata](http://melissaperri.com/2015/07/22/the-product-kata/) 29 | * [Hakan Forss - The Kanban Kata](https://hakanforss.wordpress.com/tag/kanban-kata/) 30 | * [Toyota Kata](https://en.wikipedia.org/wiki/Toyota_Kata) 31 | * [Plan, Do, Check, Act - PDCA](https://en.wikipedia.org/wiki/PDCA) 32 | 33 | 34 | ## Related plays 35 | -------------------------------------------------------------------------------- /story_mapping.md: -------------------------------------------------------------------------------- 1 | # Story Mapping 2 | 3 | ## Intro 4 | Story Mapping is an extremely common way for teams to group their features into themes / epics and to visualize which ones can and should be accomplished within a milestone. 5 | 6 | ## When to Run 7 | At the beginning of large, complex initiatives, where we know we'll need to break work down and get a better understanding of what we'll leave for later stages and why. It is critical that teams run story mapping as a goal-building exercise and a critical analysis of what they do and don't know. This should take from 2h to about half a day to be effective; don't let it run over a day, even for massive, complex projects where groups can tackle separate parts of the project. 8 | 9 | ## Why to Run 10 | * Get a clearer understanding of what we think needs to be included in each milestone (v1/v2/v3, alpha/beta/live or now/next/later) 11 | * Get a good idea of what risks are involved with each workstream and each story, lay out assumptions so they can be tackled based on their risk (see also [assumption mapping](https://github.com/colivetree/product-playbook/blob/master/assumption_mapping.md)) 12 | 13 | ## Roles 14 | * Product Manager 15 | * Development Team 16 | * UX Designer 17 | * Other Key Team Members 18 | 19 | ## How to Run 20 | This is a fairly simple work breakdown exercise, where the most interesting insights come from how one can run it in a multidisciplinary manner and get the most actionable immediate feedback available, which will allow the team to move forward with the right level of confidence, executing on current unknowns. 21 | 0) Find a massive whiteboard, one you can glue post-its to, draw schematics and wireframes on, and brainstorm around ideas. Leave some room for the actual story map, where you'll need each high-level story group to be clustered under its theme / epic. 22 | 1) If you haven't yet, map High Level Functionality - your stories - into Themes / Epics. These might be based on Functional Modules, Jobs-To-Be-Done, or another representation of user needs. Choose a representation that makes sense for your team and doesn't leave all the functionality in one bucket. You can also do this later, from the bottom up, by finding related stories, clustering them and letting themes emerge from team discussion. 23 | 2) Get all the stories you have up on your whiteboard. 24 | 3) Draw a line that represents your first version, your MVP. This will help ground your first discussions around what is essential and key to testing your first hypotheses as your new product or feature set is brought to market. 25 | 4) Give people time to go through the stories and ask questions freely. You can do this in two ways. One: have someone read out and explain the user stories and let people gather notes in the form of questions, silently. Two: do it as a brainstorming exercise, with a facilitator attaching the most significant a) Questions and b) Assumptions to the stories.
Going through the stories also allows everyone to identify dependencies. 26 | 5) Allow a group discussion to emerge around the Questions and turn them all either into Assumptions or directly into acceptance criteria in your stories. This will create a direct line between what you know, what you think you know, and what you have to assume in order to ship and why. This helps teams make informed trade-offs. 27 | 6) For everything non-critical / not included in your first stage, it's fine to leave room for uncertainty. You'll learn a ton while building out the first version, so not every question needs to be answered. It is, however, recommended that teams still make Questions and Assumptions explicit and visible, so they can either be ironed out later or turned into [spikes](https://github.com/colivetree/product-playbook/blob/master/spikes.md) they can run to remove that uncertainty. 28 | 7) Draw new lines for subsequent milestones, add your stories to each and make sure they're marked as tentative. This is an exercise in transparency. 29 | 30 | 31 | ## Tips and Resources 32 | * https://www.agilealliance.org/glossary/storymap/ 33 | * https://miro.com/templates/user-story-map/ 34 | 35 | 36 | ## Related plays: 37 | * [Assumption Mapping](https://github.com/colivetree/product-playbook/blob/master/assumption_mapping.md) 38 | -------------------------------------------------------------------------------- /action_priority_matrix.md: -------------------------------------------------------------------------------- 1 | # Action Priority Matrix 2 | 3 | ## Intro 4 | Using an Action Priority Matrix can be extremely helpful when you are struggling to visualize and prioritize a number of competing initiatives. It creates a hard split between those things that can and should be tackled now (the quick wins), those that you need to understand more about but can be positively disruptive (the moonshots), those that seemed seductive but aren't actually that impactful (the snacks), and those that are definitely not good candidates for the time being and need to be reconsidered (your recycling bin). 5 | 6 | ## When to Run 7 | Run the action priority matrix when at a crossroads. Consider it when there are a number of potentially impactful activities you could be pursuing, when your team has a very large scope or when you need to clearly communicate what you're expecting to achieve. 8 | 9 | ## Why to Run 10 | Prioritizing work can be extremely hard, and can feel lonely if done by the Product Manager alone. One way to get people with different frames of reference to contribute (and for them to be invested in decision-making) is to have them in the room when these decisions are made. While ultimately the call should belong to those responsible for the product, everyone should be accountable and be able to challenge those decisions. The Action Priority Matrix gets everyone on the same page, by giving them a common framework to think about prioritization and by explicitly leveling the playing field to compare potential solutions using the same dimensions. 11 | 12 | ## Roles 13 | * Product Team 14 | 15 | ## How to Run 16 | 0) Draw your action priority matrix, with 4 quadrants. Left-to-Right, Top-Down: 17 | * Low Effort / High Impact 18 | * High Effort / High Impact 19 | * Low Effort / Low Impact 20 | * High Effort / Low Impact 21 | 1) Ask your team to rank initiatives on the effort scale. 22 | 2) Drag your initiatives up/down according to the impact you think they'll have.
(You can reverse 1 and 2, beginning with impact and then asking the team to rate initiatives on effort.) 23 | 3) Now look at what you have and interpret the 4 quadrants. 24 | * Low Effort / High Impact: No-brainers! 25 | ** These initiatives are usually called quick wins. They're right there for the taking and you can validate them cheaply. If the potential impact is really high, take them immediately. Otherwise, bring them in as you get the chance, for some nice gains at low cost. 26 | * High Effort / High Impact: Moonshots 27 | ** These are typically your big projects. You have really high hopes for them, but they're super risky. Think of how you can start small and scale them. Nail down what their key tenets are, break them apart and start bringing them to market bit by bit. Reduce your batch size, increase your learning rate and get to the future faster. Big things await! 28 | * Low Effort / Low Impact: Snacks 29 | ** These are also called fillers. You bring them into your product cycles because they improve the quality of your product overall, but they're not incredibly significant or impactful. These maintain your performance, but you shouldn't do them at the expense of others. 30 | * High Effort / Low Impact: Recycling 31 | ** This is your trouble zone. This is where good teams go to die. These projects will take a large amount of effort for a tiny reward. Re-think them and, if you need to, recycle them. There's a reason you had considered them in the first place, but think about whether there's a different framing you can use for them, or a smaller scope. Think of ways to get them out of this quadrant. If there aren't any, no problem, just ditch them and move on. 32 | 33 | ## Tips and Resources 34 | * [Action Priority Matrix, Mind Tools](https://www.mindtools.com/pages/article/newHTE_95.htm) 35 | * [Jason Morin - APM](https://www.linkedin.com/pulse/action-priority-matrix-jason-morin/) 36 | 37 | ## Related plays: 38 | * [Reach Impact Confidence and Effort - RICE](https://github.com/colivetree/product-playbook/blob/master/prioritisation_rice.md) 39 | * [Sankey Diagram of Needs](https://github.com/colivetree/product-playbook/blob/master/sankey_diagram_needs.md) 40 | * [Riskiest Assumption Test - RAT](https://github.com/colivetree/product-playbook/blob/master/riskiest_assumption_test.md) 41 | -------------------------------------------------------------------------------- /heart_framework.md: -------------------------------------------------------------------------------- 1 | # HEART - Goals Signals Metrics 2 | 3 | ## Intro 4 | In 2001, a study by Norman and Nielsen looked at the impact of usability improvements on a company's bottom line. They used 4 key metrics: Satisfaction, Error Rate, Time to Task Completion and Success Rate. Today, these metrics are used in many shapes and forms by Internet Economy companies. The HEART framework brings user experience metrics into the heart of teams' goal-setting exercise, allowing them to look at their goals through a number of lenses that complement each other when creating delightful user experiences. Google has popularised the framework, tying its customer-centric dimensions of analysis back to the kind of objectivity that makes engineering organisations tick, with Goals, Signals and Metrics. 5 | 6 | ## When to Run 7 | When setting goals for new initiatives or projects. When trying to avoid metric blindness and consider a range of user-centric perspectives. When setting high-level action plans for products.
8 | 9 | ## Why to Run 10 | It's easy for Internet Economy companies to be trapped in local optima. They'll choose a metric and blindly optimise for it. They'll forget about the effect that certain profit-driven decisions might have on customer satisfaction or softer measures like trust, which can influence long-term retention. HEART brings attention to these perspectives and gets teams to consider these problems from all sorts of different angles. 11 | 12 | ## Roles 13 | * Product, Design, Engineering Teams. 14 | 15 | ## How to Run 16 | 1) Look at your new product or feature and define what you want to achieve for each of these 5 dimensions: 17 | * a) Happiness 18 | ** This is a qualitative measure that you could track with a survey. NPS is one way; questions around Satisfaction and Ease of Use are other tools at your disposal to proxy how happy people are when using your product. 19 | * b) Engagement 20 | ** Engagement is largely quantitative. You can measure engagement by the number of times people visit, how often they come back, your ratio of DAUs/MAUs, or the time people spend listening to music on your playlist app. It tells you you've taken hold of a bit of their lives and are now embedding yourself into their routines. 21 | * c) Adoption 22 | ** Adoption is how you proxy for growth. How many new people are interested in what you're building? Are they upgrading to your next version or to your next pricing tier? 23 | * d) Retention 24 | ** Will they come back for more, and how frequently? Those are the questions retention looks at. Retention metrics such as X-day retention, churn, renewals and repeat purchases are ways to assess your stickiness. Does a certain cohort of customers think of you as a necessary part of their life, or would they just as easily use your competitor? 25 | * e) Task Completion 26 | ** Are you helping people make progress? What's the key job-to-be-done you're getting hired for? Follow people's success rates and you're following a good measure of your helpfulness. This might be an imperfect proxy in isolation, but alongside some of the other metrics, it makes for a very powerful tool. 27 | 2) Understand how you could influence these. Define Goals, Signals and Metrics to help you track progress for *each dimension above*. 28 | * x) Goals 29 | ** Say you're revamping your search functionality. You might be tempted to think of your adoption goal as - 10% more people using search on the website. This is a red flag. Look for customer progress. You could say your goal is: to make it easier for more people to find the right product through search. 30 | * y) Signals 31 | ** Signals are actions that indicate you're on the right track. For the above, the number of new users using search is a good signal. Returning customers adopting search for the first time is another one. 32 | * z) Metrics 33 | ** Finally, metrics turn your signals into numbers you can track on a dashboard. % of sessions with search usage could be one. % of new users tapping the search box is another one. Share of search usage vs menu navigation could be a third. Measure for what you're looking to optimize.
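To make the Signals-to-Metrics step concrete, here is a small, hypothetical Python sketch that turns the search example above into trackable numbers. The session records and field names are invented for illustration; in practice these figures would come from your analytics tooling.

```python
from typing import Dict, List

# Hypothetical session records exported from an analytics tool
sessions: List[Dict] = [
    {"user_id": "u1", "is_new_user": True,  "used_search": True},
    {"user_id": "u2", "is_new_user": False, "used_search": False},
    {"user_id": "u3", "is_new_user": True,  "used_search": False},
    {"user_id": "u4", "is_new_user": False, "used_search": True},
]

def pct_sessions_with_search(sessions: List[Dict]) -> float:
    """Metric: share of sessions where search was used."""
    return 100 * sum(s["used_search"] for s in sessions) / len(sessions)

def pct_new_users_using_search(sessions: List[Dict]) -> float:
    """Metric behind the adoption signal: new users tapping the search box."""
    new_users = [s for s in sessions if s["is_new_user"]]
    return 100 * sum(s["used_search"] for s in new_users) / len(new_users)

print(f"% of sessions with search usage: {pct_sessions_with_search(sessions):.1f}%")
print(f"% of new users using search: {pct_new_users_using_search(sessions):.1f}%")
```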
34 | 35 | ## Tips and Resources 36 | * [dtelepathy - UX Metrics](https://www.dtelepathy.com/ux-metrics/#engagement) 37 | * [How to effectively measure UX](https://uxplanet.org/how-to-effectively-measure-ux-with-google-heart-framework-4a497631d224) 38 | * [Usability Return on Investment (ROI)](https://www.nngroup.com/reports/usability-return-on-investment-roi/) 39 | * [Usability metrics](https://www.nngroup.com/articles/usability-metrics/) 40 | 41 | ## Related plays: 42 | * [Metric Pairing](https://github.com/colivetree/product-playbook/blob/master/metric_pairing.md) 43 | -------------------------------------------------------------------------------- /prioritising_bugs.md: -------------------------------------------------------------------------------- 1 | # Prioritising Bugs, Technical Debt and Broken Windows 2 | 3 | ## Intro 4 | One of the most mundane yet trickiest parts of a Product person's role is to balance shipping features, improvements and experiments with fixing bugs, devoting time to code refactoring or working on broken windows, whether from a technical or from a design point of view. There are multiple ways to turn this into a process. The current approach involves a delicate balance that keeps the whole team involved in both discovery and fixing, keeping the conversation flowing at all times between all members of the team. Other approaches (such as rotating members between discovery and Business As Usual) may be a better play for others. 5 | 6 | ## When to Run 7 | This play can be run any time defects and technical debt start flowing into a system and taking up a large share of a team's time (say >10%). This happens when systems reach a certain level of maturity or production-readiness, and the play should be kicked off early to avoid a build-up. 8 | 9 | ## Why to Run 10 | Systems are complex, often have many underlying dependencies and are built on assumptions that evolve over time, as teams discover new knowledge or their environment changes. This should be acknowledged and incorporated into the team's process. 11 | 12 | ## Roles 13 | * Product Manager (prioritization) 14 | * Product Development Team 15 | 16 | ## How to Run 17 | * 1) Label bugs, minor improvement tasks and technical debt projects based on their criticality - this can evolve over time; think of an underlying codebase refactor whose criticality builds up over time. These, like risks, should mainly be weighed based on their likelihood and impact. 18 | An example of an immediately understandable grading scale can be: 19 | * Show-stopper 20 | * Super Frustrating 21 | * Definitely Annoying 22 | * Mildly Inconvenient 23 | * Just Looks Odd 24 | This avoids 1-5 scales, which can be confusing (which way is up?). 25 | * 2) Prioritize them separately from their criticality grade. Although Show-stoppers will almost always need to be tackled immediately, separating the priority decision allows you to keep a polished product and make soft trade-offs vs new features. For example, a design improvement or minor bug fix that "Just Looks Odd" can be prioritised above a Mild Inconvenience if it has been sitting there for a while due to new things always popping up. Time / Age is an important criterion and having a broken product will eventually start annoying your repeat customers, no matter how small it looks on the surface. 26 | * 3) Allocate a set amount of time to these issues every sprint/cycle depending on whether this list of BAU is growing, remains stable or is decreasing in size.
Typically it shouldn't exceed 30% of your team's time; ultimately its growth is a symptom of other underlying issues with your process or your quality mechanisms, which should be fixed instead. Doing this early and consistently should ensure it's kept at reasonable levels. Ensure all your team members are picking up bugs and other improvements. You Build It You Run It ensures Accountability and promotes Mastery. 27 | * 4) This process will not avoid other major projects creeping up. Often a strategic decision to choose another platform, programming language or infrastructure provider can cause non-feature work to take priority and constitute major projects. This should still be weighed against discovery work. A business case should be built by its proponents not based on risk but on potential/estimated ROI - for example, moving to a cloud service could decrease response times, reduce lead time for feature delivery or reduce the number of bugs. These can then be weighed on Upside vs Effort and then categorised as Moonshots (High effort, High reward), No-brainers (Low effort, High reward), Snacks (Low effort, Low reward) and Zombies (High effort, Low reward). Even then, ensure any major projects and refactors don't take up all of your team's time and are done incrementally - for example, by splitting out small chunks of your monolith architecture when moving into a microservice-based project. 28 | 29 | ## Tips and Resources 30 | * [Bugs on the Product Backlog](https://www.mountaingoatsoftware.com/blog/bugs-on-the-product-backlog#comments) 31 | * [Mind the Product - Prioritisation 101](http://www.mindtheproduct.com/2012/06/product-prioritisation-101/) 32 | * [Prioritising Defects](http://www.softwaretestinghelp.com/how-to-set-defect-priority-and-severity-with-defect-triage-process/) 33 | * [Stackoverflow - When is a big rewrite the answer](https://softwareengineering.stackexchange.com/questions/6268/when-is-a-big-rewrite-the-answer) 34 | 35 | 36 | ## Related plays: 37 | -------------------------------------------------------------------------------- /north_star_framework.md: -------------------------------------------------------------------------------- 1 | # North Star 2 | 3 | ## Intro 4 | Finding a North Star and orienting your strategy towards optimizing for that north star behaviour is pretty common. What does finding a North Star mean? It means that you find a customer behaviour which you can represent with a metric that single-handedly represents the core value your product delivers to your customers. It is super helpful to align all your product development efforts behind this single, elusive metric. It is harder, as your product grows in complexity and scale, to find one single metric that represents the company's strategy. One example of a company achieving this is Amplitude, who shared their North Star framework recently. Theirs is "Weekly Learning Users: the count of active Amplitude users who have shared a learning that is consumed by at least two other people in the previous seven days." 5 | 6 | ## When to Run 7 | In strategic prioritization sessions. You can do this at the company, product area, or team level when choosing a single area of focus for an extended time period. This isn't a "project" metric; it's a product metric. It needs to be sufficiently predictive of value delivered to the customer that it is useful, and sufficiently sensitive that your actions move the needle.
As a counter-example, LTV is a great measure of value generated by a customer over a long-enough period of time, but it might lag your product/marketing/pricing/other changes by months. Your north star is supposed to be the metric that predicts LTV because your customers are accruing value they can't get elsewhere. 8 | 9 | ## Why to Run 10 | Having a north star is extremely valuable. The metric won't be perfect, but it will be sufficiently predictive that it will allow your team(s) to define a product discovery roadmap that contributes to actual customer value. If you can drive this behaviour, through whatever tactics, you're likely nailing the key JTBD your customers come to you for. 11 | 12 | ## Roles 13 | * PM 14 | * Team 15 | * External Experts 16 | * Data Analysts / Scientists - specialists who can help you identify those proxies that drive real organisation growth and value delivery. 17 | 18 | ## How to Run 19 | Note: This playbook is largely based on Amplitude's method. 20 | 1) Open Intro Discussion: Clarify the problem space. 21 | * What problems are we solving as a team? 22 | ** What would it look like if we had a greater sense of impact in our work? 23 | ** Do we all clearly understand our product strategy? What prevents us from having clarity? 24 | ** How is our investment in product now connected to future business performance? What's the relationship between product work and our financial results (think WLU to LTV relationship)? 25 | 2) Game Selection: Identify the game you're playing 26 | * Attention: Social networks, consumer media, gaming. They try to capture your attention for as long as possible and then find ways to monetize that attention (ads, in-app purchases, etc.) 27 | * Transaction: Marketplaces, Ecommerce, Fintech. They try to get you to transact within their platform as much as possible. 28 | * Productivity: Figma, Salesforce, Intercom, Confluence. They try to get you to do your job better, faster, with fewer errors and get you to where you need to go. 29 | 3) Input Selection: 30 | * Identify the key behaviours that you want to incentivize within your customers that reflect your key value proposition and your business model. Those are the inputs to your north star formula. 31 | * A common input pattern is to think about Breadth, Depth, Frequency, and Efficiency. For example, for Instacart, metrics along these dimensions could be the following (see the short sketch after these steps for how they might be computed): 32 | ** Breadth: number of customers placing orders each month 33 | ** Depth: number of items within an order 34 | ** Frequency: number of orders completed per customer each month 35 | ** Efficiency: percentage of orders delivered on time 36 | 4) Pretend North Star: 37 | * Pick a product you use every day and select a North Star for it, with inputs (your function's parameters), the north star and its impact (what the business impact of moving that north star would be) 38 | * Discuss your north stars as a group to brainstorm potential pitfalls and advantages 39 | 5) Silent Design: 40 | * Everyone goes away and thinks about that same exercise for your business 41 | 6) Pair and Share 42 | * You pair with one other person and share the rationale, the risks and opportunities associated with your north star 43 | 7) Converge 44 | * You come together as a group and converge on a common canvas with your inputs, north star and business impact. Don't forget this is just the start of your journey; you now need everyone else to be bought in too!
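As referenced in the Input Selection step, here is a small, hypothetical sketch of how the Breadth / Depth / Frequency / Efficiency inputs for the Instacart-style example could be computed from order records. The data shape and field names are invented for illustration; in practice these numbers would come from your data warehouse or analytics queries.

```python
from typing import Dict, List

# Hypothetical one-month export of orders
orders: List[Dict] = [
    {"customer_id": "c1", "items": 12, "on_time": True},
    {"customer_id": "c1", "items": 5,  "on_time": False},
    {"customer_id": "c2", "items": 8,  "on_time": True},
]

customers = {o["customer_id"] for o in orders}

breadth = len(customers)                                            # customers placing orders this month
depth = sum(o["items"] for o in orders) / len(orders)               # items per order
frequency = len(orders) / len(customers)                            # orders per customer this month
efficiency = 100 * sum(o["on_time"] for o in orders) / len(orders)  # % of orders delivered on time

print(f"Breadth: {breadth} customers | Depth: {depth:.1f} items/order | "
      f"Frequency: {frequency:.1f} orders/customer | Efficiency: {efficiency:.0f}% on time")
```

How these inputs combine into a single north star (a weighted formula, a count of users crossing a threshold, and so on) is exactly what the group converges on in the steps above.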
45 | 46 | ## Related plays: 47 | * [Amplitude's North Star Framework](https://amplitude.com/north-star/about-the-north-star-framework) 48 | * [Sean Ellis - What is a North Star](https://blog.growthhackers.com/what-is-a-north-star-metric-b31a8512923f) 49 | -------------------------------------------------------------------------------- /design_sprint.md: -------------------------------------------------------------------------------- 1 | # Design Sprint (Draft) 2 | 3 | ## Intro 4 | The sprint is a five-day process for answering critical business questions through design, prototyping, and testing ideas with customers. 5 | 6 | ## When to Run 7 | When the team needs to align around the main problem they're solving and focus their efforts on customer needs. This may happen when the team is starting a new project, deciding whether to build a new feature or hasn't been able to produce a visible impact on the customers with their current work. 8 | 9 | ## Why to Run 10 | This play allows you to go from hypothesis to prototype in 5 days, letting you test it with real users and collect feedback that will inform future development. 11 | 12 | ## Roles 13 | - Mandatory: 14 | * Facilitator 15 | * Decider 16 | * Team - total of 7 people max. 17 | * Domain Experts 18 | * 5 Users 19 | 20 | ## How to Run 21 | 0. Make a hard decision on the problem you're trying to solve: decide on one single problem and focus on being able to invalidate it in 5 days - it is very likely that the Sprint will. Find 5 days when all stakeholders, the facilitator and the decider are available full-time. [Plan](https://library.gv.com/sprint-week-set-the-stage-99f2f29ce0e7) material, rooms and tools before the sessions begin. 22 | 1. [Day 1 - Map](https://library.gv.com/sprint-week-monday-4bf0606b5c81#.c8y9sirq8): The first day is fully dedicated to defining, understanding and modeling the problem space. From clearly outlining the problem we're trying to solve, to mapping the user decision flow, to hearing from multidisciplinary experts that may be involved with the problem, it's designed to give your team a thorough understanding of the problem before agreeing to solve it. The day ends on that agreement, with the team deciding on the Sprint Target. 23 | 2. [Day 2 - Sketch](https://library.gv.com/sprint-week-tuesday-d22b30f905c3): The second day is devoted to exploring solutions to the target problem to be solved. It starts with demos of other companies and solutions you believe address the same issue. Then the day becomes about picking up on the insights collected thus far and sketching potential solutions. Everyone will have a chance to vote silently, present their ideas and participate in the selection process. Ultimately, however, the decision is made by the Decider, in a process designed to avoid group-think. That happens on Day 3. 24 | 3. [Day 3 - Decide](https://library.gv.com/sprint-week-wednesday-900fe3f2c26e#.dla3y4wa6): Day 3 is about tapping into the brainpower and cumulative knowledge in the room and making a decision, while attempting to avoid a decision made by persuasion rather than merits (to the degree this is possible). There are a number of techniques proposed to allow people to evaluate and vote on their favourite proposals before they have been pitched, while also allowing people to change their minds once exposed to the thought processes behind each prototype by its creator. The decision may evolve during the day, while combining, selecting and storyboarding the winner(s).
The day ends with a full storyboard of the prototype to be tested, which will be built the following day. 25 | 4. [Day 4 - Prototype](https://library.gv.com/sprint-week-thursday-df8d7c8c0555): The fourth day is when magic happens and the storyboard evolves into a real, testable version with the appropriate level of fidelity. It's worth noting that teams need to be extremely careful not to overshoot. There's only so much that can be done in a day to create a coherent, testable prototype, and it's imperative to focus on the right level of fidelity and then ensure a coherent story can be built around it. Depending on the skills of the participants, this could range from a paper prototype to an interactive HTML page or a sequence of screenshots linked together. The important part is that the interview script, the backstory and the prototype form a coherent picture that interviewees can easily understand and make sense of. 26 | 5. [Day 5 - Test](https://library.gv.com/sprint-week-friday-7f66b4194137#.6035fkh04): The final day is when the interviewer performs a user testing session. While they should preferably have run one before, that is not mandatory, and there are great insights to be drawn from this session if ego is cast aside and the focus is on the customer's problem, her journey and how the prototype is understood from that point of view. See more on [User Testing Sessions](https://github.com/colivetree/product-playbook/blob/master/user_testing.md) 27 | 6. Analyse, consolidate and share results. It is quite likely that the Design Sprint will only be a starting point and further user testing will be necessary before the solution is properly defined. Don't be hesitant to schedule subsequent user testing on revamped prototypes before building. It is an exponentially cheaper process than building something no-one wants or understands. 28 | 29 | ## Tips and Resources 30 | * http://www.gv.com/sprint/ 31 | * https://blog.prototypr.io/your-vision-matters-running-a-gv-design-sprint-in-the-real-world-5fc353ba5b3c 32 | * [Invision](https://www.invisionapp.com/) 33 | 34 | ## Related plays: 35 | * User Sessions 36 | * Practice Interviews 37 | * Jobs-to-be-done Sessions 38 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # The Product Playbook 2 | A Playbook of Product Management Techniques 3 | 4 | ## Intro 5 | A Product playbook is something I've wanted to do for a very long time. 6 | Product management is an art, insofar as everything that requires human intervention to create something is art. But there is causality between what product people do and their outcomes, in how the interactions between product managers, designers, developers and other stakeholders turn vision into usable objects or interfaces. This playbook aims to become a cooperative effort to describe the multiple ways one can turn insights and ideas into products that solve problems. 7 | 8 | The concept of the playbook is inspired by sports. The best explanation I've seen is the one by Jon Lax in ["Great Products Don't Happen By Accident"](https://medium.com/great-products-dont-happen-by-accident/great-products-dont-happen-by-accident-f46323d8ad94) 9 | 10 | The product playbook gives people plays they can apply to specific moments in the product development workflow. Each play should follow the same easily readable format and should be easy to pick up and follow.
Some of these plays originate from companies like Google, Facebook, Amazon, Intercom, Spotify, Amplitude or Basecamp and are used industry-wide; others have been developed by individual practitioners, tried and tested in the field, and are represented here. 11 | 12 | Some of the companies that openly share how they develop product are: 13 | * [Atlassian](https://www.atlassian.com/team-playbook/plays) 14 | * [Intercom on Product Management](https://www.intercom.com/books/product-management) 15 | * [Basecamp - Getting Real](https://basecamp.com/about/books/Getting%20Real.pdf) / [Basecamp - Shape Up](https://basecamp.com/shapeup) 16 | 17 | There are also these fantastic lists of everything product: 18 | * https://github.com/tron1991/open-product-management 19 | * https://platformsandnetworks.blogspot.co.uk/p/resources-product-management.html 20 | * https://www.sachinrekhi.com/top-resources-for-product-managers 21 | * https://www.lennyrachitsky.com/p/my-favorite-templates-issue-37?u 22 | 23 | ## The Plays 24 | 25 | ### Vision & Strategy 26 | * [Product brief](https://github.com/colivetree/product-playbook/blob/master/product_brief.md) 27 | * [Positioning](https://github.com/colivetree/product-playbook/blob/master/product_positioning.md) 28 | * [DIBBs](https://github.com/colivetree/product-playbook/blob/master/DIBBs.md) 29 | * [Working Backwards](https://github.com/colivetree/product-playbook/blob/master/working_backwards.md) 30 | * [One-pager](https://github.com/colivetree/product-playbook/blob/master/one_pager.md) 31 | * [7 Powers](https://github.com/colivetree/product-playbook/blob/master/7_powers.md) 32 | 33 | ### Goal Setting 34 | * [OKRs](https://github.com/colivetree/product-playbook/blob/master/OKR.md) 35 | * [Metric Pairing](https://github.com/colivetree/product-playbook/blob/master/metric_pairing.md) 36 | * [HEART & Goals, Signals, Metrics](https://github.com/colivetree/product-playbook/blob/master/heart_framework.md) 37 | * [North Star Framework](https://github.com/colivetree/product-playbook/blob/master/north_star_framework.md) 38 | 39 | ### Product Discovery 40 | * [Design Sprint](https://github.com/colivetree/product-playbook/blob/master/design_sprint.md) 41 | * [User Testing Sessions](https://github.com/colivetree/product-playbook/blob/master/user_testing.md) 42 | * [JTBD Interview](https://github.com/colivetree/product-playbook/blob/master/jtbd_interview.md) 43 | * [Opportunity Solution Tree](https://github.com/colivetree/product-playbook/blob/master/opportunity_solution_tree.md) 44 | 45 | ### Alignment 46 | * [Product Increment Sessions](https://github.com/colivetree/product-playbook/blob/master/product_increments.md) 47 | * [Story Mapping](https://github.com/colivetree/product-playbook/blob/master/story_mapping.md) 48 | * [Job Canvas](https://github.com/colivetree/product-playbook/blob/master/job_canvas.md) 49 | 50 | ### Testing 51 | * [Experimentation](https://github.com/colivetree/product-playbook/blob/master/experimentation.md) 52 | * [Riskiest Assumption Test - RAT](https://github.com/colivetree/product-playbook/blob/master/riskiest_assumption_test.md) 53 | * [Confidence Check](https://github.com/colivetree/product-playbook/blob/master/confidence_check.md) 54 | * [Design Critique](https://github.com/colivetree/product-playbook/blob/master/critique.md) 55 | * [Dark Launch](https://github.com/colivetree/product-playbook/blob/master/dark_launch.md) 56 | 57 | ### Prioritising 58 | * [Reach Impact Confidence and Effort -
RICE](https://github.com/colivetree/product-playbook/blob/master/prioritisation_rice.md) 59 | * [Sankey Diagram of Needs](https://github.com/colivetree/product-playbook/blob/master/sankey_diagram_needs.md) 60 | * [Action Priority Matrix](https://github.com/colivetree/product-playbook/blob/master/action_priority_matrix.md) 61 | * [Bugs, Technical Debt and Broken Windows](https://github.com/colivetree/product-playbook/blob/master/prioritising_bugs.md) 62 | * [Kano Model](https://github.com/colivetree/product-playbook/blob/master/kano_model.md) 63 | 64 | ### Validated Learning 65 | * [Spikes](https://github.com/colivetree/product-playbook/blob/master/spikes.md) 66 | * [Melissa Perri's Product Kata](https://github.com/colivetree/product-playbook/blob/master/product_kata.md) 67 | 68 | 69 | ## Contributing to the Product Playbook 70 | * [Play Format](https://github.com/colivetree/product-playbook/blob/master/template.md) 71 | * [Guidelines for contributing] 72 | --------------------------------------------------------------------------------