├── .gitignore
├── README.md
├── SUMMARY.md
├── about.md
├── career-framework
│   └── readme.md
├── catalogs
│   ├── data
│   │   ├── README.md
│   │   ├── addresses.md
│   │   ├── dates.md
│   │   ├── fixed_point.md
│   │   ├── floating_point.md
│   │   ├── integers.md
│   │   ├── money.md
│   │   ├── phones.md
│   │   ├── strings.md
│   │   └── zipcodes.md
│   └── error_catalogs.md
├── exploration
│   ├── README.md
│   └── writing_exploratory_charters.md
├── folklore.md
├── heuristics
│   ├── bugreporting.md
│   ├── mobile.md
│   ├── reporting.md
│   ├── stopping_rules.md
│   └── strategy.md
├── htsm.md
├── portfolio_examples.md
├── quicktests.md
└── test-design
    ├── README.md
    ├── automate.md
    ├── domain.md
    ├── functional.md
    ├── oracles.md
    ├── regression.md
    ├── security.md
    ├── test_techniques.md
    └── unit.md
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Testing Guide
2 |
3 | This is my testing guide. It is filled with catalogs, heuristics, guidewords, cheat sheets, lists, links and other models I find useful.
4 |
5 | _This is still very much a work in progress as I figure out how to best focus and use it._
6 |
7 | * Sometimes I like catalogs
8 | * Sometimes I like checklists
9 | * Sometimes I like guides
10 | * Sometimes I like cheat sheets
11 | * I almost always love models
12 |
13 | ## Other
14 |
15 | * [Hendrickson, Elisabeth's Test Heuristics Cheat Sheet](https://s3.eu-west-1.amazonaws.com/matrix.assets/fa2aa6x6kkxle2lv5xm720wltja3)
16 | * [Hunter, Michael's You Are Not Done Yet Checklist](http://thebraidytester.com/downloads/YouAreNotDoneYet.pdf)
17 | * [http://www.qualityperspectives.ca/resources\_mnemonics.html](http://www.qualityperspectives.ca/resources_mnemonics.html)
18 |
19 | ## Questions, Suggestions
20 |
21 | * Twitter: [@ckenst](http://twitter.com/ckenst)
22 | * Edit on GitHub
23 |
24 | ## Make Your Own
25 |
26 | If you like this idea, clone the repo and make it yours!
27 |
28 |
29 | ## TODO:
30 |
31 | 1. Add Timezones and Time-related catalogs
32 | 2. Find a better way to make this list / catalog actionable.
--------------------------------------------------------------------------------
/SUMMARY.md:
--------------------------------------------------------------------------------
1 | # Table of contents
2 |
3 | - [Home](README.md)
4 | - Catalogs
5 |   - [Data Catalogs](catalogs/data/README.md)
6 |     - [Fixed Point](catalogs/data/fixed_point.md)
7 |     - [Floating Point](catalogs/data/floating_point.md)
8 |     - [Integers](catalogs/data/integers.md)
9 |     - [Phone Number](catalogs/data/phones.md)
10 |     - [Strings](catalogs/data/strings.md)
11 |     - [Zip + Post codes](catalogs/data/zipcodes.md)
12 |   - [Error Catalogs](catalogs/error_catalogs.md)
13 | - [Exploratory Testing](exploration/README.md)
14 |   - [Writing Exploratory Charters](exploration/writing_exploratory_charters.md)
15 | - [Test Design](test-design/README.md)
16 |   - [Automating](test-design/automate.md)
17 |   - [Domain Testing](test-design/domain.md)
18 |   - [Functional Testing](test-design/functional.md)
19 |   - [Regression Testing](test-design/regression.md)
20 |   - [Security Testing](test-design/security.md)
21 |   - [Unit Testing](test-design/unit.md)
22 |   - [Test Techniques](test-design/test_techniques.md)
23 |   - [Test Oracles](test-design/oracles.md)
24 | - [Quicktests](quicktests.md)
25 | - Heuristics
26 |   - [Bug Reporting](heuristics/bugreporting.md)
27 |   - [Test Reporting](heuristics/reporting.md)
28 |   - [Stopping Rules](heuristics/stopping_rules.md)
29 |   - [Mobile](heuristics/mobile.md)
30 |   - [Test Strategy](heuristics/strategy.md)
31 | - [Examples of Testing Portfolios](portfolio_examples.md)
32 | - [Testing Folklore](folklore.md)
33 | - [HTSM](htsm.md)
34 | - [About Test Idea Catalogs](about.md)
35 | - [Tester Career Framework](career-framework/readme.md)
36 |
--------------------------------------------------------------------------------
/about.md:
--------------------------------------------------------------------------------
1 | # About Test Idea Catalogs
2 |
3 | You can develop a standard set of tests for a specific type of object \(or risk\) and reuse the set for similar things in this and/or later products. Marick \(1994\) suggested that testers develop these types of lists and called them test idea catalogs.
4 |
5 | Having a good catalog to reference should be part of every software tester's and developer's toolbox. The following are meant to help with specific and more generalized testing, to provoke ideas for finding failures.
6 |
7 | ## Other Catalogs:
8 |
9 | * [Chapter 3 of Lessons Learned](http://www.testingeducation.org/BBST/testdesign/KanerBachPettichord_Lessons_Learned_in_SW_testingCh3-1.pdf)
10 | * [Falsehoods Programmers Believe About Names](http://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/)
11 | * [Marick's Short Catalog of Test Ideas](http://www.exampler.com/testing-com/writings/short-catalog.pdf)
12 |
13 | ## Resources:
14 |
15 | * [http://sce.uhcl.edu/helm/rationalunifiedprocess/process/workflow/test/co\_tstidsctlg.htm](http://sce.uhcl.edu/helm/rationalunifiedprocess/process/workflow/test/co_tstidsctlg.htm)
16 |
--------------------------------------------------------------------------------
/career-framework/readme.md:
--------------------------------------------------------------------------------
1 | # Test Engineering Career Framework
2 |
3 | ## What’s a Career Framework?
4 |
5 | The Engineering Career Framework is your source for how to achieve impact for your role and team and how to grow in your engineering career.
For managers, it can help you set expectations with your teams and hold them accountable for their work.
6 |
7 | ## What the Career Framework is not
8 |
9 | This framework is not a promotion checklist for your role; rather, it’s designed to help you figure out what your impact could look like at the next level.
10 |
11 | This framework is not an exhaustive list of examples and behaviors; each responsibility includes three to four key behaviors that serve as a guide for how to think about your work. Consequently, you’ll need to meet with your manager to define your impact goals and align on the expectations for your role.
12 |
13 | ## What’s in a Career Framework?
14 |
15 | This framework is broken down into two components:
16 |
17 | - Level Expectations define the scope, collaborative reach, and levers for impact at every level; these expectations are the what that determines the difference between an IC3 and IC4, for example
18 | - Core and Craft Responsibilities define the key behaviors specific to your role and team; these behaviors help you identify how you work to deliver impact based on your level expectations
19 |
20 | ## How to navigate this framework
21 |
22 | Dropbox measures the success of its engineers largely on business impact. Anchor your work first and foremost on creating long-term impact. Since impact can be a bit vague, read What is Impact?
23 |
24 | Next, ground yourself in the expectations for your level and team. For each level, you’ll find a one-line summary description and the role’s scope, collaborative reach, and levers for impact.
25 |
26 | Review the expected behaviors for that level across the Results, Direction, Talent, Culture pillars from the Core Responsibilities. Read your Craft expectations, which are the per-discipline technical capabilities you need to master at that level. Finally, meet with your manager to set your goals for the quarter.
27 |
28 | https://dropbox.github.io/dbx-career-framework/
29 |
--------------------------------------------------------------------------------
/catalogs/data/README.md:
--------------------------------------------------------------------------------
1 | # Data Catalogs
2 |
3 | > The following files contain interesting values \(data\) for testing input and output fields.
4 |
5 | "Input fields are crazy things. Let's exploit them together."
6 |
7 | ## Domain Testing Tables
8 |
9 | When testing input fields, it is often useful to list the variables you are interested in exploring in one of two types of tables:
10 |
11 | 1. [Classic Equivalence Class table](https://www.dropbox.com/s/eeboxpg00qnocof/Classical%20Boundary%3AEquivalence%20Table%20Template.xltx?dl=0)
12 | 2. [Risk Equivalence Class table](https://www.dropbox.com/s/mbyvz8yot4jf37b/Risk%20%3A%20Equivalence%20Table%20Template.xltx?dl=0)
13 |
14 | ## Other Catalogs:
15 |
16 | * [Chris's Images Catalog](https://github.com/ckenst/images_catalog)
17 | * [Falsehoods Programmers Believe About Names](https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/)
18 | * [Falsehoods programmers believe about time](https://infiniteundo.com/post/25326999628/falsehoods-programmers-believe-about-time)
19 | * [More falsehoods programmers believe about time](https://infiniteundo.com/post/25509354022/more-falsehoods-programmers-believe-about-time)
20 |
21 | ### Credits
22 |
23 | The input catalog information came from a combination of sources:
24 |
25 | * [Lessons Learned in Software Testing: A Context-Driven Approach](http://www.amazon.com/Lessons-Learned-Software-Testing-Context-Driven-ebook/dp/B000S1LVBS/) \(p. 45\). John Wiley and Sons. Kindle Edition. Kaner, Cem; Bach, James; Pettichord, Bret \(2008-04-21\).
26 | * [The Domain Testing Workbook](https://www.amazon.com/dp/B00GU2QEV6/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1). Kaner, Cem; Hoffman, Douglas; Padmanabhan, Sowmya.
27 |
28 |
--------------------------------------------------------------------------------
/catalogs/data/addresses.md:
--------------------------------------------------------------------------------
1 | # Addresses
2 |
3 | > What are some interesting input tests for address validation?
4 |
5 | ## Kinds of Addresses
6 | - Mailing or Shipping Address
7 | - Military Address
8 | - Post Office Box (PO Box)
9 | - Private Mailbox (PMB)
10 | - Physical Address
11 |
12 | ### Not all mailing addresses are shipping addresses
13 |
14 | Physical Address:
15 | 25777 Co Rd 103
16 | Jelm, CO 82063
17 |
18 | USPS Mailing Address:
19 | 25777 Co Rd 103
20 | Jelm, WY 82063-9203
21 |
22 | ### Missing Street
23 |
24 | Correct Address: 1200 Park Ave, Emeryville, CA 94608
25 | Wrong Address: 1200 Park, Emeryville, CA 94608
26 |
27 | ### Wrong City
28 |
29 | Correct Address: 1200 Park Ave, Emeryville, CA 94608
30 | Wrong Address: 1200 Park Ave, Oakland, CA 94608
31 |
32 | Correct Address: 2 Kirkland St, Cambridge MA 02138
33 | Wrong Address: 2 Kirkland St, Boston MA 02138
34 |
35 |
36 | ### Wrong State
37 |
38 | ### Wrong Country
39 |
40 |
41 | ## Test Ideas
42 |
43 | - Start with well-known addresses that don't change
44 |   - Hospitals
45 |   - Schools / Universities
46 |   - Cemeteries
47 |   - Government Locations
48 |   - Landmarks
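One way to put the pairs above to work is a small data-driven check. This is only a sketch: `validate_address` is a toy lookup standing in for whatever address-validation service or library you are actually testing, and the data comes straight from the Wrong City examples in this catalog.

```python
import pytest

# Stand-in for the real validator under test (hypothetical).
KNOWN_GOOD = {
    "1200 Park Ave, Emeryville, CA 94608",
    "2 Kirkland St, Cambridge MA 02138",
}

def validate_address(address: str) -> bool:
    """Toy placeholder: a real implementation would call the service under test."""
    return address in KNOWN_GOOD

@pytest.mark.parametrize("correct, wrong", [
    ("1200 Park Ave, Emeryville, CA 94608", "1200 Park Ave, Oakland, CA 94608"),
    ("2 Kirkland St, Cambridge MA 02138", "2 Kirkland St, Boston MA 02138"),
])
def test_wrong_city_is_rejected(correct, wrong):
    assert validate_address(correct)
    assert not validate_address(wrong)
```

The same table shape works for the Missing Street and Wrong State cases; only the pairs change.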
--------------------------------------------------------------------------------
/catalogs/data/dates.md:
--------------------------------------------------------------------------------
1 | # Date fields
2 |
--------------------------------------------------------------------------------
/catalogs/data/fixed_point.md:
--------------------------------------------------------------------------------
1 | # Fixed Point
2 |
3 | > What are the interesting input tests for a simple fixed point field?
4 |
5 | ## Fixed Point
6 |
7 | * Nothing
8 | * Empty field \(clear the default value\)
9 | * Whitespace only \(tab, space\)
10 | * 0
11 | * Valid value
12 | * At lower bound \(LB\) of range
13 | * At lower bound \(LB\) of range − delta
14 | * At upper bound \(UB\) of range
15 | * At upper bound \(UB\) of range + delta
16 |
17 |
--------------------------------------------------------------------------------
/catalogs/data/floating_point.md:
--------------------------------------------------------------------------------
1 | # Floating Point
2 |
3 | > What are the interesting input tests for a simple floating point field?
4 |
5 | ## Floating Point
6 |
7 | * Nothing
8 | * Empty field \(clear the default value\)
9 | * Whitespace only \(tab, space\)
10 | * 0
11 | * Valid value
12 | * At lower bound \(LB\) of range
13 | * At lower bound \(LB\) of range − delta
14 | * At upper bound \(UB\) of range
15 | * At upper bound \(UB\) of range + delta
16 |
17 | **References**
18 |
19 | * [https://en.wikipedia.org/wiki/Floating-point\_arithmetic](https://en.wikipedia.org/wiki/Floating-point_arithmetic)
20 |
21 |
--------------------------------------------------------------------------------
/catalogs/data/integers.md:
--------------------------------------------------------------------------------
1 | # Integers
2 |
3 | > What are the interesting input tests for a simple integer field?
4 |
5 | ## Integers
6 |
7 | * Nothing
8 | * Empty field \(clear the default value\)
9 | * Whitespace only \(tab, space\)
10 | * 0
11 | * Valid value
12 | * At lower bound \(LB\) of range
13 | * At lower bound \(LB\) of range − 1
14 | * At upper bound \(UB\) of range
15 | * At upper bound \(UB\) of range + 1
16 | * Far below the LB of range
17 | * Far above the UB of range
18 | * At LB number of digits or characters
19 | * At LB − 1 number of digits or characters
20 | * At UB number of digits or characters
21 | * At UB + 1 number of digits or characters
22 | * Far more than UB number of digits or characters
23 | * Negative
24 | * Nondigits, especially / \(ASCII character 47\) and : \(ASCII character 58\)
25 | * Wrong data type \(e.g., decimal into integer\)
26 | * Expressions
27 | * Leading space
28 | * Many leading spaces
29 | * Leading zero
30 | * Many leading zeros
31 | * Leading + sign
32 | * Many leading + signs
33 | * Nonprinting character \(e.g., Ctrl+char\)
34 | * Operating system filename reserved characters \(e.g., “ \* . :”\)
35 | * Language reserved characters
36 | * Upper ASCII \(128-254\) \(a.k.a. ANSI\) characters
37 | * ASCII 255 \(often interpreted as end of file\)
38 | * Uppercase characters
39 | * Lowercase characters
40 | * Modifiers \(e.g., Ctrl, Alt, Shift-Ctrl, and so on\)
41 | * Function key \(F2, F3, F4, and so on\)
42 | * Enter nothing but wait for a long time before pressing the Enter or Tab key, clicking OK, or doing something equivalent that takes you out of the field. Is there a time-out? What is the effect?
43 | * Enter one digit but wait for a long time before entering another digit or digits and then press the Enter key. How long do you have to wait before the system times you out, if it does? What happens to the data you entered? What happens to other data you previously entered?
44 | * Enter digits and edit them using the backspace key, and delete them, and use arrow keys \(or the mouse\) to move you into the digits you’ve already entered so that you can insert or overtype new digits.
45 | * Enter digits while the system is reacting to interrupts of different kinds \(such as printer activity, clock events, mouse movement and clicks, files going to disk, and so on\).
46 | * Enter a digit, shift focus to another application, return to this application. Where is the focus?
47 |
48 |
--------------------------------------------------------------------------------
/catalogs/data/money.md:
--------------------------------------------------------------------------------
1 | # Money Fields
2 |
3 | Validation, error messages, accepted input, etc.
4 |
--------------------------------------------------------------------------------
/catalogs/data/phones.md:
--------------------------------------------------------------------------------
1 | # Phone Numbers
2 |
3 | > What are some interesting input tests for phone numbers?
4 |
5 | ## US Phone Numbers
6 |
7 | * [Fictitious Telephone Numbers](https://en.wikipedia.org/wiki/Fictitious_telephone_number)
8 | * [555 Phone Number](https://en.wikipedia.org/wiki/555_%28telephone_number%29)
--------------------------------------------------------------------------------
/catalogs/data/strings.md:
--------------------------------------------------------------------------------
1 | # Strings
2 |
3 | > What are some interesting input tests for a simple string field?
4 |
5 | ## Strings
6 |
7 | * Nothing
8 | * Empty field \(clear the default value\)
9 | * Null character \(ASCII 0\)
10 | * Whitespace only \(tab, space, CR \(ASCII 13\), ASCII 127\)
11 |   * ` `
12 |   * More available: https://en.wikipedia.org/wiki/Whitespace_character
13 | * Hyphenated last names
14 |   * Rainbolt-greene
15 | * Strings + Unicode
16 |   * See emojis below
17 | * Unicode like emojis
18 |   * Names like Chris 💣 (bomb)
19 | * [Awesome Unicode](https://github.com/Wisdom/Awesome-Unicode)
20 | * [Big List of Naughty Strings](https://github.com/minimaxir/big-list-of-naughty-strings)
21 |
22 |
23 | ## Uses
24 |
25 | * eCommerce fields during checkout
26 | * Forms
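A sketch of how a handful of these values can drive a round-trip check. The sample list below is deliberately tiny; in practice you might load the full Big List of Naughty Strings linked above. `save_profile_name` / `load_profile_name` are hypothetical stand-ins for whatever part of your product stores and reads the string.

```python
import pytest

# A few interesting strings from this catalog; extend with the naughty-strings list.
INTERESTING_STRINGS = [
    "",                                   # nothing
    " \t ",                               # whitespace only
    "Rainbolt-greene",                    # hyphenated last name
    "Chris 💣",                           # name containing an emoji
    "Robert'); DROP TABLE Students;--",   # injection-shaped input
]

_fake_store: dict[int, str] = {}

def save_profile_name(user_id: int, name: str) -> None:
    """Hypothetical stand-in for the persistence layer under test."""
    _fake_store[user_id] = name

def load_profile_name(user_id: int) -> str:
    return _fake_store[user_id]

@pytest.mark.parametrize("value", INTERESTING_STRINGS)
def test_profile_name_round_trips(value):
    save_profile_name(42, value)
    assert load_profile_name(42) == value
```

A round-trip assertion is a cheap oracle here: it catches truncation, encoding mangling, and over-eager sanitizing without needing to predict the "correct" stored form in advance.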
--------------------------------------------------------------------------------
/catalogs/data/zipcodes.md:
--------------------------------------------------------------------------------
1 | # Zip + Post codes
2 |
3 | > What are some interesting input tests for zipcodes?
4 |
5 | ## US Zipcodes
6 |
7 | - 00501
8 |   - Lowest ZIP
9 |   - Holtsville, NY
10 |   - Also detects fields that might truncate leading zeros
11 | - 06390
12 |   - NY ZIP that belongs in Connecticut?
13 |   - Fishers Island, NY.
14 | - 09603
15 |   - Armed Forces Europe, City is APO, State is AE
16 | - 12345
17 |   - General Electric’s Unique ZIP
18 |   - Schenectady, NY.
19 | - 20252
20 |   - Smokey Bear’s Personal ZIP
21 |   - Washington, DC.
22 | - 22313
23 |   - P.O. box only ZIP
24 |   - Alexandria, Virginia.
25 | - 99530-9998
26 |   - ZIP for Santa Claus in the North Pole.
27 |   - Anchorage, AK.
28 | - 99950
29 |   - Highest ZIP
30 |   - Ketchikan, AK
31 |
32 | ## Non-US Postal Codes
33 |
34 | - H0H 0H0
35 |   - Post Code for the North Pole
36 |   - North Pole, Canada
37 | - LL61 5QH
38 |   - Post Code for Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch
39 |   - Sometimes abbreviated to Llanfairpwllgwyngyll
40 |   - Wales, UK
41 | - 10200
42 |   - Krung Thep Maha Nakhon, Thailand
43 | - SW1A 2AA
44 |   - Downing Street in London, England
45 |
46 | ## Zipcode Formats
47 |
48 | - 5 digits \(ZIP\)
49 |   - 99530
50 | - 9 digits \(ZIP + 4\)
51 |   - 995309998
52 | - 10 digits \(ZIP + "-" + 4\)
53 |   - 99530-9998
54 |
--------------------------------------------------------------------------------
/catalogs/error_catalogs.md:
--------------------------------------------------------------------------------
1 | # Error Catalogs
2 |
3 | You can use existing error catalogs or develop your own.
4 |
5 | To use an existing error catalog:
6 |
7 | 1. Find a defect in the list
8 | 2. Ask whether the software under test could have this defect
9 | 3. If it's theoretically possible the program could have the defect, ask how you could find the bug
10 | 4. Ask how plausible it is that the bug could be in the program and then how serious the failure would be if it was there
11 | 5. If appropriate, design a test or series of tests for bugs of this type
12 |
13 | ## Software Error Catalogs:
14 |
15 | * [Common Software Errors from TCS, 1993](http://www.testingeducation.org/BBST/testdesign/Kaner_Common_Software_Errors.pdf)
16 |
17 |
18 | ## Security Catalogs:
19 |
20 | * [National Vulnerability Database](https://nvd.nist.gov/)
21 | * [Common Weakness Enumeration](https://nvd.nist.gov/vuln/categories)
--------------------------------------------------------------------------------
/exploration/README.md:
--------------------------------------------------------------------------------
1 | # Exploratory
2 |
3 | **FCC CUTS VIDS**
4 |
5 | A test touring heuristic.
6 |
7 | * Feature tour: Move through the application and get familiar with all the controls and features you come across.
8 | * Complexity tour: Find the five most complex things about the application.
9 | * Claims tour: Find all the information in the product that tells you what the product does.
10 | * Configuration tour: Attempt to find all the ways you can change settings in the product in a way that the application retains those settings.
11 | * User tour: Imagine five users for the product and the information they would want from the product or the major features they would be interested in.
12 | * Testability tour: Find all the features you can use as testability features and/or identify tools you have available that you can use to help in your testing.
13 | * Scenario tour: Imagine five realistic scenarios for how the users identified in the user tour would use this product.
14 | * Variability tour: Look for things you can change in the application - and then try to change them.
15 | * Interoperability tour: What does this application interact with?
16 | * Data tour: Identify the major data elements of the application.
17 | * Structure tour: Find everything you can about what comprises the physical product \(code, interfaces, hardware, files, etc…\).
18 | * [http://michaeldkelly.com/blog/2005/9/20/touring-heuristic.html](http://michaeldkelly.com/blog/2005/9/20/touring-heuristic.html) 19 | 20 | -------------------------------------------------------------------------------- /exploration/writing_exploratory_charters.md: -------------------------------------------------------------------------------- 1 | # Writing Exploratory Charters 2 | 3 | What is exploratory testing? 4 | 5 | > Simultaneous learning, test design and test execution. \(Bach\) 6 | > 7 | > A style \(approach\) of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution and test result interpretation as mutually supportive activities that run in parallel throughout the project. \(Kaner\) 8 | 9 | ## How to Write Exploratory Charters 10 | 11 | This is based on "A simple charter template" from Elisabeth Hendrickson's awesome book Explore It!, Chapter 2, page 67 of 502 \(ebook\). 12 | 13 | Explore \(target\) With \(resources\) To discover \(information\) 14 | 15 | * Target: What are you exploring? It could be a feature, a requirement, or a module. 16 | * Resources: What resources will you bring with you? Resources can be anything: a tool, a data set, a technique, a configuration, or perhaps an interdependent feature. 17 | * Information: What kind of information are you hoping to find? Are you characterizing the security, performance, reliability, capability, usability or some other aspect of the system? Are you looking for consistency of design or violations of a standard? 18 | 19 | ## Examples of Charters 20 | 21 | Some of these follow the above format, some don't. As we learn to write charters, the format can differ. Many thanks to Bach's Rapid Software Testing Appendices. 22 | 23 | * Explore editing profiles with injection attacks to discover security vulnerabilities 24 | * Explore editing profiles with various login methods to discover surprises 25 | * Explore input fields with JavaScript and SQL injection attacks to discover security vulnerabilities 26 | * Explore catalog features with 10x \# products to discover problems with browsing and searching 27 | * Explore and analyze the product elements of . Produce a test coverage outline. 28 | * Explore large pools of executors running at the same time to discover bad behavior related to race conditions or flaws in claiming runs of reasonably large values. 29 | * Identify and test all claims in the manual. \(either use checkmark/X/? notation on the printed manual, or list each tested claim in your notes\) 30 | * Define work flows through DecideRight and try each one. The flows should represent realistic scenarios of use, and they should collectively encompass each primary function of the product. 31 | * We need to understand the performance and reliability characteristics of DecideRight as decision complexity increased. Start with a nominal scenario and scale it up in terms of number of options and factors until the application appears to hang, crash, or gracefully prevent user from enlarging any further. 32 | * Test all fields that allow data entry \(you know the drill: function, stress, and limits, please\) 33 | * Analyze the file format of a DecideRight scenario and determine the behavior of the application when its elements are programmatically manipulated. Test for error handling and performance when coping with pathological scenario files. 
34 | * Check UI against Windows interface standards.
35 | * Is there any way to corrupt a scenario file? How would we know it’s corrupted? Investigate the feasibility of writing an automatic file checker. Find out if the developers have already done so.
36 | * Test integration with external applications, especially Microsoft Word.
37 | * Determine the decision analysis algorithm by experimentation and reproduce it in Excel. Then, use that spreadsheet to test DecideRight with complex decision scenarios.
38 | * Run DecideRight under AppVerifier and report any errors.
39 |
40 | ## Time-Box Sessions
41 |
42 | Exploring can be an open-ended endeavor. To help guide those explorations, you can use time-boxed sessions for structuring and organizing your effort.
43 |
44 | Refer to session-based test management \(SBTM\).
45 |
46 |
--------------------------------------------------------------------------------
/folklore.md:
--------------------------------------------------------------------------------
1 | # Testing Folklore
2 |
3 | * Every test must have an expected, predicted result.
4 | * Effective testing requires complete, clear, consistent, and unambiguous specifications.
5 | * Bugs found earlier cost less to fix than bugs found later. \(I might still believe this, but only for those bugs that you could reasonably expect to find earlier.\)
6 | * Testers are the quality gatekeepers for a product.
7 | * Repeated tests are fundamentally more valuable.
8 | * You can’t manage what you can’t measure.
9 | * Testing at boundary values is the best way to find bugs.
10 | * Test documentation is needed to deflect legal liability.
11 | * The more bugs testers find before release, the better the testing effort.
12 | * Rigorous planning is essential for good testing.
13 | * Exploratory testing is unstructured testing, and is therefore unreliable.
14 | * Adopting best practices will guarantee that we do a good job of testing.
15 | * Step-by-step instructions are necessary to make testing a repeatable process.
16 |
17 |
--------------------------------------------------------------------------------
/heuristics/bugreporting.md:
--------------------------------------------------------------------------------
1 | # Bug Reporting
2 |
3 | **RIMGEN**
4 |
5 | A bug reporting heuristic for Bug Advocacy.
6 |
7 | * Replicate.
8 | * Isolate.
9 | * Maximize.
10 | * Generalize.
11 | * Externalize.
12 | * Neutral Tone. \(Formerly 'And Say it Clearly and Dispassionately'\)
13 |
14 | **REACT**
15 |
16 | A bug reporting heuristic.
17 |
18 | * Reproduce:
19 | * Explore:
20 | * Analyze:
21 | * Communicate:
22 | * Triage
23 | * [http://www.brendanconnolly.net/react-to-bugs/](http://www.brendanconnolly.net/react-to-bugs/)
24 |
--------------------------------------------------------------------------------
/heuristics/mobile.md:
--------------------------------------------------------------------------------
1 | # Mobile
2 |
3 | The following might help with testing mobile devices.
4 | 5 | * Battery 6 | * Standby 7 | * Accessibility Testing 8 | * Interrupts 9 | * Permissions 10 | * Gestures 11 | * Beta Testing 12 | * Security 13 | * Sensors 14 | * Mobile Networks 15 | 16 | ## Guides: 17 | 18 | * [Knott's Mobile Testing Cheat Sheet](https://adventuresinqa.com/wp-content/uploads/2015/12/Mobile-Testing-Cheat-Sheet-Adventures-in-QA.pdf) 19 | * [Android User Interface Guidelines](https://developer.android.com/guide/practices/ui_guidelines/index.html) 20 | * [iOS Human Interface Guidelines](https://developer.apple.com/ios/human-interface-guidelines/overview/themes/) 21 | 22 | ## MNEMONICs 23 | 24 | * [I SLICED UP FUN](http://www.kohl.ca/articles/ISLICEDUPFUN.pdf) 25 | * [MOBILE APP TESTING](https://dojo.ministryoftesting.com/dojo/lessons/mobile-app-testing-mnemonic) 26 | 27 | -------------------------------------------------------------------------------- /heuristics/reporting.md: -------------------------------------------------------------------------------- 1 | # Reporting 2 | 3 | **MCOASTER** 4 | 5 | A test reporting heuristic. 6 | 7 | * Mission 8 | * Coverage 9 | * Obstacles 10 | * Audience 11 | * Status 12 | * Techniques 13 | * Environment 14 | * Risk 15 | * [http://michaeldkelly.com/blog/2005/8/19/coming-up-with-a-heuristic.html](http://michaeldkelly.com/blog/2005/8/19/coming-up-with-a-heuristic.html) 16 | 17 | -------------------------------------------------------------------------------- /heuristics/stopping_rules.md: -------------------------------------------------------------------------------- 1 | # Stopping Rules 2 | 3 | Some ideas or rules on when we might stop testing. 4 | 5 | 1. Time has run out. This, for many testers, is the most common one: we stop testing when the time allocated for testing has expired. 6 | 2. The Piñata Heuristic. We stop whacking the program when the candy starts falling out—we stop the test when we see the first sufficiently dramatic problem. \(Bolton\) 7 | 3. The Dead Horse Heuristic. The program is too buggy to make further testing worthwhile. We know that things are going to be modified so much that any more testing will be invalidated by the changes. \(Bolton\) 8 | 4. Mission is accomplished. We stop testing when we have answered all of the questions that we set out to answer. 9 | 5. Mission has been revoked. 10 | 6. The I Feel Stuck! Heuristic. For whatever reason, we stop because we perceive there’s something blocking us. We don’t have the information we need \(many people claim that they can’t test without sufficient specifications, for example\). There’s a blocking bug, such that we can’t get to the area of the product that we want to test; we don’t have the equipment or tools we need; we don’t have the expertise on the team to perform some kind of specialized test. \(Bolton\) 11 | 7. The Pause That Refreshes Heuristic. Instead of stopping testing, we suspend it for a while. We might stop testing and take a break when we’re tired, or bored, or uninspired to test. We might pause to do some research, to do some planning, to reflect on what we’ve done so far, the better to figure out what to do next. The idea here is that we need a break of some kind, and can return to the product later with fresh eyes or fresh minds. There’s another kind of pause, too: We might stop testing some feature because another has higher priority for the moment.\(Bolton\) 12 | 8. The Flatline Heuristic. No matter what we do, we’re getting the same result. 
This can happen when the program has crashed or has become unresponsive in some way, but we might get flatline results when the program is especially stable, too—”looks good to me!” \(Bolton\) 13 | 9. The Customary Conclusion Heuristic. We stop testing when we usually stop testing. There’s a protocol in place for a certain number of test ideas, or test cases, or test cycles or variation, such that there’s a certain amount of testing work that we do, and we stop when that’s done. Agile teams \(say that they\) often implement this approach: “When all the acceptance tests pass, then we know we’re ready to ship.” Ewald Roodenrijs gives an example of this heuristic in his blog post titled When Does Testing Stop? He says he stops “when a certain amount of test cycles has been executed including the regression test”. This differs from “Time’s Up”, in that the time dimension might be more elastic than some other dimension. Since many projects seem to be dominated by the schedule, it took a while for James and me to realize that this one is in fact very common. We sometimes hear “one test per requirement” or “one positive test and one negative test per requirement” as a convention for establishing good-enough testing. \(We don’t agree with it, of course, but we hear about it.\) \(Bolton\) 14 | 10. No more interesting questions. At this point, we’ve decided that no questions have answers sufficiently valuable to justify the cost of continuing to test, so we’re done. This heuristic tends to inform the others, in the sense that if a question or a risk is sufficiently compelling, we’ll continue to test rather than stopping. \(Bolton\) 15 | 11. The Avoidance/Indifference Heuristic. Sometimes people don’t care about more information, or don’t want to know what’s going on the in the program. The application under test might be a first cut that we know will be replaced soon. Some people decide to stop testing because they’re lazy, malicious, or unmotivated. Sometimes the business reasons for releasing are so compelling that no problem that we can imagine would stop shipment, so no new test result would matter. \(Bolton\) 16 | 12. Mission Rejected. We stop testing when we perceive a problem for some person—in particular, an ethical issue—that prevents us from continuing work on a given test, test cycle, or development project. \(Bolton, Kaner\) 17 | 18 | ## Credits 19 | 20 | * Most of these rules come from Michael Bolton's article [When Do We Stop Testing?](http://www.developsense.com/blog/2009/09/when-do-we-stop-test/)\_ 21 | 22 | -------------------------------------------------------------------------------- /heuristics/strategy.md: -------------------------------------------------------------------------------- 1 | # Test Strategy 2 | 3 | **SFDIPOT** \(San Francisco Depot\) 4 | 5 | A test strategy heuristic. 6 | 7 | * Structure 8 | * Function 9 | * Data 10 | * Integrations 11 | * Platform 12 | * Operations 13 | * Time 14 | * [Bach's Heuristic Test Strategy Model](http://www.satisfice.com/tools/htsm.pdf) 15 | 16 | -------------------------------------------------------------------------------- /htsm.md: -------------------------------------------------------------------------------- 1 | # Heuristic Test Strategy Model 2 | 3 | The Heuristic Test Strategy Model is a set of patterns for designing a test strategy. The immediate purpose of this model is to remind testers of what to think about when they are creating tests. 
Ultimately, it is intended to be customized and used to facilitate dialog and direct self-learning among professional testers. 4 | 5 | 6 | 7 | Project Environment includes resources, constraints, and other elements in the project that may enable or hobble our testing. Sometimes a tester must challenge constraints, and sometimes accept them. 8 | 9 | 10 | 11 | Product Elements are things that you intend to test. Software is complex and invisible. Take care to cover all of it that matters, not just the parts that are easy to see. 12 | 13 | 14 | 15 | Quality Criteria are the rules, values, and sources that allow you as a tester to determine if the product has problems. Quality criteria are multidimensional and often hidden or self-contradictory. 16 | 17 | 18 | 19 | Test Techniques are heuristics for creating tests. All techniques involve some sort of analysis of project environment, product elements, and quality criteria. 20 | 21 | 22 | 23 | Perceived Quality is the result of testing. You can never know the "actual" quality of a software product, but through the application of a variety of tests, you can make an informed assessment of it. 24 | 25 | ## Product Elements 26 | 27 | Ultimately a product is an experience or solution provided to a customer. Products have many dimensions. So, to test well, we must examine those dimensions. Each category, listed below, represents an important and unique aspect of a product. Testers who focus on only a few of these are likely to miss important bugs. 28 | 29 | ### Structure 30 | 31 | Everything that comprises the physical product. 32 | 33 | - Code: the code structures that comprise the product, from executables to individual routines. 34 | - Hardware: any hardware component that is integral to the product. 35 | - Non-executable files: any files other than multimedia or programs, like text files, sample data, or help files. 36 | - Collateral: anything beyond software and hardware that is also part of the product, such as paper documents, web links and content, packaging, license agreements, etc. 37 | 38 | ### Functions 39 | 40 | Everything that the product does. 41 | 42 | - Application: any function that defines or distinguishes the product or fulfills core requirements. 43 | - Calculation: any arithmetic function or arithmetic operations embedded in other functions. 44 | - Time-related: time-out settings; daily or month-end reports; nightly batch jobs; time zones; business holidays; interest calculations; terms and warranty periods; chronograph functions. 45 | - Transformations: functions that modify or transform something (e.g. setting fonts, inserting clip art, withdrawing money from account). 46 | - Startup/Shutdown: each method and interface for invocation and initialization as well as exiting the product. 47 | - Multimedia: sounds, bitmaps, videos, or any graphical display embedded in the product. 48 | - Error Handling: any functions that detect and recover from errors, including all error messages. 49 | - Interactions: any interactions between functions within the product. 50 | - Testability: any functions provided to help test the product, such as diagnostics, log files, asserts, test menus, etc. 51 | 52 | ### Data 53 | 54 | Everything that the product processes. 55 | 56 | - Input: any data that is processed by the product. 57 | - Output: any data that results from processing by the product. 58 | - Preset: any data that is supplied as part of the product, or otherwise built into it, such as prefabricated databases, default values, etc. 
59 | - Persistent: any data that is stored internally and expected to persist over multiple operations. This includes modes or states of the product, such as options settings, view modes, contents of documents, etc.
60 | - Sequences/Combinations: any ordering or permutation of data, e.g. word order, sorted vs. unsorted data, order of tests.
61 | - Cardinality: Numbers of objects or fields may vary (e.g. zero, one, many, max, open limit). Some may have to be unique (e.g. database keys).
62 | - Big/Little: variations in the size and aggregation of data.
63 | - Noise: any data or state that is invalid, corrupted, or produced in an uncontrolled or incorrect fashion.
64 | - Lifecycle: transformations over the lifetime of a data entity as it is created, accessed, modified, and deleted.
65 |
66 | ### Interfaces
67 |
68 | Every conduit by which the product is accessed or expressed.
69 |
70 | - User Interfaces: any element that mediates the exchange of data with the user (e.g. navigation, display, data entry).
71 | - System Interfaces: any interface with something other than a user, such as other programs, hard disk, network, etc.
72 | - API/SDK: Any programmatic interfaces or tools intended to allow the development of new applications using this product.
73 | - Import/export: any functions that package data for use by a different product, or interpret data from a different product.
74 |
75 | ### Platform
76 |
77 | Everything on which the product depends (and that is outside your project).
78 |
79 | - External Hardware: hardware components and configurations that are not part of the shipping product, but are required (or
80 |   optional) in order for the product to work: systems, servers, memory, keyboards, the Cloud.
81 | - External Software: software components and configurations that are not a part of the shipping product, but are required (or
82 |   optional) in order for the product to work: operating systems, concurrently executing applications, drivers, fonts, etc.
83 | - Internal Components: libraries and other components that are embedded in your product but are produced outside your project.
84 |
85 | ### Operations
86 |
87 | How the product will be used.
88 |
89 | - Users: the attributes of the various kinds of users.
90 | - Environment: the physical environment in which the product operates, including such elements as noise, light, and distractions.
91 | - Common Use: patterns and sequences of input that the product will typically encounter. This varies by user.
92 | - Disfavored Use: patterns of input produced by ignorant, mistaken, careless or malicious use.
93 | - Extreme Use: challenging patterns and sequences of input that are consistent with the intended use of the product.
94 |
95 | ### Time
96 |
97 | Any relationship between the product and time.
98 |
99 | - Input/Output: when input is provided, when output is created, and any timing relationships (delays, intervals, etc.) among them.
100 | - Fast/Slow: testing with “fast” or “slow” input; fastest and slowest; combinations of fast and slow.
101 | - Changing Rates: speeding up and slowing down (spikes, bursts, hangs, bottlenecks, interruptions).
102 | - Concurrency: more than one thing happening at once (multi-user, time-sharing, threads, and semaphores, shared data).
103 |
104 | ## Top Test Techniques
105 |
106 | A test technique is a heuristic for creating tests. There are many interesting techniques. The list includes nine general techniques.
By “general technique” we mean that the technique is simple and universal enough to apply to a wide variety of contexts. Many specific techniques are based on one or more of these nine. And an endless variety of specific test techniques may be constructed by combining one or more general techniques with coverage ideas from the other lists in this model. 107 | 108 | ### Function Testing 109 | 110 | Test what it can do 111 | 112 | 1. Identify things that the product can do (functions and sub-functions). 113 | 2. Determine how you’d know if a function was capable of working. 114 | 3. Test each function, one at a time. 115 | 4. See that each function does what it’s supposed to do and not what it isn’t supposed to do. 116 | 117 | ### Domain Testing 118 | 119 | Divide and conquer the data 120 | 121 | 1. Look for any data processed by the product. Look at outputs as well as inputs. 122 | 2. Decide which particular data to test with. Consider things like boundary values, typical values, convenient values, invalid values, or best representatives. 123 | 3. Consider combinations of data worth testing together. 124 | 125 | ### Stress Testing 126 | 127 | Overwhelm the product 128 | 129 | 1. Look for sub-systems and functions that are vulnerable to being overloaded or “broken” in the presence of challenging data or constrained resources. 130 | 2. Identify data and resources related to those sub-systems and functions. 131 | 3. Select or generate challenging data, or resource constraint conditions to test with: e.g., large or complex data structures, high loads, long test runs, many test cases, low memory conditions. 132 | 133 | ### Flow Testing 134 | 135 | Do one thing after another 136 | 137 | 1. Perform multiple activities connected end-to-end; for instance, conduct tours through a state model. 138 | 2. Don’t reset the system between actions. 139 | 3. Vary timing and sequencing, and try parallel threads. 140 | 141 | ### Scenario Testing 142 | 143 | Test to a compelling story 144 | 145 | 1. Begin by thinking about everything going on around the product. 146 | 2. Design tests that involve meaningful and complex interactions with the product. 147 | 3. A good scenario test is a compelling story of how someone who matters might do something that matters with the product. 148 | 149 | ### Claims Testing 150 | 151 | Verify every claim 152 | 153 | 1. Identify reference materials that include claims about the product (implicit or explicit). Consider SLAs, EULAs, advertisements, specifications, help text, manuals, etc. 154 | 2. Analyze individual claims, and clarify vague claims. 155 | 3. Verify that each claim about the product is true. 156 | 4. If you’re testing from an explicit specification, expect it and the product to be brought into alignment. 157 | 158 | ### User Testing 159 | 160 | Involve the users 161 | 162 | 1. Identify categories and roles of users. 163 | 2. Determine what each category of user will do (use cases), how they will do it, and what they value. 164 | 3. Get real user data, or bring real users in to test. 165 | 4. Otherwise, systematically simulate a user (be careful—it’s easy to think you’re like a user even when you’re not). 166 | 5. Powerful user testing is that which involves a variety of users and user roles, not just one. 167 | 168 | ### Risk Testing 169 | 170 | Imagine a problem, then look for it. 171 | 172 | 1. What kinds of problems could the product have? 173 | 2. Which kinds matter most? Focus on those. 174 | 3. How would you detect them if they were there? 175 | 4. 
Make a list of interesting problems and design tests specifically to reveal them.
176 | 5. It may help to consult experts, design documentation, past bug reports, or apply risk heuristics.
177 |
178 | ### Automatic Checking
179 |
180 | Check a million different facts
181 |
182 | 1. Look for or develop tools that can perform a lot of actions and check a lot of things.
183 | 2. Consider tools that partially automate test coverage.
184 | 3. Consider tools that partially automate oracles.
185 | 4. Consider automatic change detectors.
186 | 5. Consider automatic test data generators.
187 | 6. Consider tools that make human testing more powerful.
188 |
189 | ## Quality Criteria
190 |
191 | A quality criterion is some requirement that defines what the product should be. By thinking about different kinds of criteria, you will be better able to plan tests that discover important problems fast. Each of the items on this list can be thought of as a potential risk area. For each item below, determine whether it is important to your project, and think about how you would recognize whether the product worked well or poorly in that regard.
192 |
193 | ### Capability
194 |
195 | Can it perform the required functions?
196 |
197 | ### Reliability
198 |
199 | Will it work well and resist failure in all required situations?
200 |
201 | - Robustness: the product continues to function over time without degradation, under reasonable conditions.
202 | - Error handling: the product resists failure in the case of errors, is graceful when it fails, and recovers readily.
203 | - Data Integrity: the data in the system is protected from loss or corruption.
204 | - Safety: the product will not fail in such a way as to harm life or property.
205 |
206 | ### Usability
207 |
208 | How easy is it for a real user to use the product?
209 |
210 | - Learnability: the operation of the product can be rapidly mastered by the intended user.
211 | - Operability: the product can be operated with minimum effort and fuss.
212 | - Accessibility: the product meets relevant accessibility standards and works with O/S accessibility features.
213 |
214 | ### Charisma
215 |
216 | How appealing is the product?
217 |
218 | - Aesthetics: the product appeals to the senses.
219 | - Uniqueness: the product is new or special in some way.
220 | - Necessity: the product possesses the capabilities that users expect from it.
221 | - Usefulness: the product solves a problem that matters, and solves it well.
222 | - Entrancement: users get hooked, have fun, are fully engaged when using the product.
223 | - Image: the product projects the desired impression of quality.
224 |
225 | ### Security
226 |
227 | How well is the product protected against unauthorized use or intrusion?
228 |
229 | - Authentication: the ways in which the system verifies that a user is who he says he is.
230 | - Authorization: the rights that are granted to authenticated users at varying privilege levels.
231 | - Privacy: the ways in which customer or employee data is protected from unauthorized people.
232 | - Security holes: the ways in which the system cannot enforce security (e.g. social engineering vulnerabilities).
233 |
234 | ### Scalability
235 |
236 | How well does the deployment of the product scale up or down?
237 |
238 | ### Compatibility
239 |
240 | How well does it work with external components & configurations?
241 |
242 | - Application Compatibility: the product works in conjunction with other software products.
243 | - Operating System Compatibility: the product works with a particular operating system.
244 | - Hardware Compatibility: the product works with particular hardware components and configurations.
245 | - Backward Compatibility: the product works with earlier versions of itself.
246 | - Resource Usage: the product doesn’t unnecessarily hog memory, storage, or other system resources.
247 |
248 | ### Performance
249 |
250 | How speedy and responsive is it?
251 |
252 | ### Installability
253 |
254 | How easily can it be installed onto its target platform(s)?
255 |
256 | - System requirements: Does the product recognize if some necessary component is missing or insufficient?
257 | - Configuration: What parts of the system are affected by installation? Where are files and resources stored?
258 | - Uninstallation: When the product is uninstalled, is it removed cleanly?
259 | - Upgrades/patches: Can new modules or versions be added easily? Do they respect the existing configuration?
260 | - Administration: Is installation a process that is handled by special personnel, or on a special schedule?
261 |
262 | ### Development
263 |
264 | How well can we create, test, and modify it?
265 |
266 | - Supportability: How economical will it be to provide support to users of the product?
267 | - Testability: How effectively can the product be tested?
268 | - Maintainability: How economical is it to build, fix or enhance the product?
269 | - Portability: How economical will it be to port or reuse the technology elsewhere?
270 | - Localizability: How economical will it be to adapt the product for other places?
271 |
272 | ## Project Environment
273 |
274 | Creating and executing tests is the heart of the test project. However, there are many factors in the project environment that are critical to your decision about what particular tests to create. In each category below, consider how that factor may help or hinder your test design process. Try to exploit every resource.
275 |
276 | ### Mission
277 |
278 | Your purpose on this project, as understood by you and your customers.
279 |
280 | - Do you know who your customers are? Whose opinions matter? Who benefits or suffers from the work you do?
281 | - Do you know what your customers expect of you on this project? Do you agree?
282 | - Maybe your customers have strong ideas about what tests you should create and run.
283 | - Maybe they have conflicting expectations. You may have to help identify and resolve those.
284 |
285 | ### Information
286 |
287 | Information about the product or project that is needed for testing.
288 |
289 | - Whom can we consult with to learn about this project?
290 | - Are there any engineering documents available? User manuals? Web-based materials? Specs? User stories?
291 | - Does this product have a history? Old problems that were fixed or deferred? Pattern of customer complaints?
292 | - Is your information current? How are you apprised of new or changing information?
293 | - Are there any comparable products or projects from which we can glean important information?
294 |
295 | ### Developer Relations
296 |
297 | How you get along with the programmers.
298 |
299 | - Hubris: Does the development team seem overconfident about any aspect of the product?
300 | - Defensiveness: Is there any part of the product the developers seem strangely opposed to having tested?
301 | - Rapport: Have you developed a friendly working relationship with the programmers?
302 | - Feedback loop: Can you communicate quickly, on demand, with the programmers?
303 | - Feedback: What do the developers think of your test strategy?
304 | 305 | ### Test Team 306 | 307 | Anyone who will perform or support testing. 308 | 309 | - Do you know who will be testing? Do you have enough people? 310 | - Are there people not on the “test team” that might be able to help? People who’ve tested similar products before and might have advice? Writers? Users? Programmers? 311 | - Are there particular test techniques that the team has special skill or motivation to perform? 312 | - Is any training needed? Is any available? 313 | - Who is co-located and who is elsewhere? Will time zones be a problem? 314 | 315 | ### Equipment & Tools 316 | 317 | Hardware, software, or documents required to administer testing. 318 | 319 | - Hardware: Do we have all the equipment you need to execute the tests? Is it set up and ready to go? 320 | - Automation: Are any test tools needed? Are they available? 321 | - Probes: Are any tools needed to aid in the observation of the product under test? 322 | - Matrices & Checklists: Are any documents needed to track or record the progress of testing? 323 | 324 | ### Schedule 325 | 326 | The sequence, duration, and synchronization of project events 327 | 328 | - Test Design: How much time do you have? Are there tests better to create later than sooner? 329 | - Test Execution: When will tests be executed? Are some tests executed repeatedly, say, for regression purposes? 330 | - Development: When will builds be available for testing, features added, code frozen, etc.? 331 | - Documentation: When will the user documentation be available for review? 332 | 333 | ### Test Items 334 | 335 | The product to be tested. 336 | 337 | - Scope: What parts of the product are and are not within the scope of your testing responsibility? 338 | - Availability: Do you have the product to test? Do you have test platforms available? When do you get new builds? 339 | - Volatility: Is the product constantly changing? What will be the need for retesting? 340 | - New Stuff: What has recently been changed or added in the product? 341 | - Testability: Is the product functional and reliable enough that you can effectively test it? 342 | - Future Releases: What part of your tests, if any, must be designed to apply to future releases of the product? 343 | 344 | ### Deliverables 345 | 346 | The observable products of the test project. 347 | 348 | - Content: What sort of reports will you have to make? Will you share your working notes, or just the end results? 349 | - Purpose: Are your deliverables provided as part of the product? Does anyone else have to run your tests? 350 | - Standards: Is there a particular test documentation standard you’re supposed to follow? 351 | - Media: How will you record and communicate your reports? 352 | 353 | -------------------------------------------------------------------------------- /portfolio_examples.md: -------------------------------------------------------------------------------- 1 | # Examples of Testing Portfolios 2 | 3 | A portfolio is a collection of work that showcases your abilities. The purpose of a portfolio is to help people \(potential clients, generally\) to understand what you can do. Ultimately its purpose is to show that you are a competent professional. 
4 |
5 | Examples:
6 |
7 | * [http://jessingrassellino.com/portfolio/](http://jessingrassellino.com/portfolio/)
8 | * [http://www.satisfice.com/rst-appendices.pdf](http://www.satisfice.com/rst-appendices.pdf)
9 |
10 |
--------------------------------------------------------------------------------
/quicktests.md:
--------------------------------------------------------------------------------
1 | # Quicktests
2 |
3 | > A quick-test is an inexpensive test, optimized for a common type of software error, that requires little time or product-specific preparation or knowledge to perform. \(Kaner, BBST Slides\)
4 | >
5 | > A quick-test is a cheap test that has some value but requires little preparation, knowledge or time to perform. \(RST slides, page 104\)
6 |
7 | In other words, quicktests are a great way to *start* testing a product.
8 |
9 | ## From BBST Slides:
10 |
11 | * User interface design errors -> Tour the user interface for things that are confusing, unappealing, time-wasting or inconsistent with relevant design norms.
12 | * Boundaries -> The program expects variables to stick within a range of permissible values.
13 | * Overflow -> These values are far too large or too small for the program to handle.
14 | * Calculations and operations -> Calculation involves evaluation of expressions, like 5*2. Some expressions evaluate to impossible results. Others can’t be evaluated because the operators are invalid for the type of data.
15 | * Initial states -> What value does a variable hold the first time you use it?
16 | * Modified values -> Set a variable to a value; then change it. This creates a risk if some other part of the program depends on this variable.
17 | * Control flow -> The control flow of a program describes what it will do next. A control flow error occurs when the program does the wrong next thing.
18 | * Sequences -> A program might pass a simple test but fail the same test embedded in a longer sequence.
19 | * Messages -> If the program communicates with an external server or system, corrupt the messages between them.
20 | * Timing and race conditions -> Timing failures can happen if the program needs access to a resource by a certain time, or must complete a task by a certain time.
21 | * Interference tests -> In interference testing, you do something to interfere with a task in progress. This might cause a timeout or a failed race condition. Or the program might lose data in transmission to/from an external system or device.
22 | * Error handling -> Errors in dealing with errors are among the most common bugs.
23 | * Failure handling -> After you find a bug, you can look for related bugs.
24 | * File system -> Read or write to files under conditions that should cause a failure.
25 | * Load and Stress -> Significant background activity eats resources and adds delays. This can yield failures that would not show up on a quiet system.
26 | * Configuration -> Check the application’s compatibility with different system configurations.
27 | * Multivariable relationships -> Any relationship between two variables is an opportunity for a relationship failure.
28 |
29 | ## From RST Slides:
30 |
31 | * Happy Path: Use the product in the most simple, expected, straightforward way, just as the most optimistic programmer might imagine users to behave. Perform a task, from start to finish, that an end-user might be expected to do. Look for anything that might confuse, delay, or irritate a reasonable person.
32 | * Documentation Tour: Look in the online help or user manual and find some instructions about how to perform some interesting activity. Do those actions. Improvise from them. If your product has a tutorial, follow it. You may expose a problem in the product or in the documentation; either way, you’ve found something useful. Even if you don’t expose a problem, you’ll still be learning about the product. 33 | * Sample Data Tour: Employ any sample data you can, and all that you can—the more complex or extreme the better. Use zeroes where large numbers are expected; use negative numbers where positive numbers are expected; use huge numbers where modestly-sized ones are expected; and use letters in every place that’s supposed to handle numbers. Change the units or formats in which data can be entered. Challenge the assumption that the programmers have thought to reject inappropriate data. 34 | * Variables Tour: Tour a product looking for anything that is variable and vary it. Vary it as far as possible, in every dimension possible. Identifying and exploring variations is part of the basic structure of my testing when I first encounter a product. 35 | * Complexity Tour: Tour a product looking for the most complex features and using challenging data sets. Look for nooks and crannies where bugs can hide. 36 | * File Tour: Have a look at the folder where the program's .EXE file is found. Check out the directory structure, including subs. Look for READMEs, help files, log files, installation scripts, .cfg, .ini, .rc files. Look at the names of .DLLs, and extrapolate on the functions that they might contain or the ways in which their absence might undermine the application. 37 | * Menus and Windows Tour: Tour a product looking for all the menus \(main and context menus\), menu items, windows, toolbars, icons, and other controls. 38 | * Keyboard and Mouse Tour: Tour a product looking for all the things you can do with a keyboard and mouse. Run through all of the keys on the keyboard. Hit all the F-keys. Hit Enter, Tab, Escape, Backspace. Run through the alphabet in order. Combine each key with Shift, Ctrl, and Alt. Also, click on everything. 39 | * Interruptions: Start activities and stop them in the middle. Stop them at awkward times. Perform stoppages using cancel buttons, O/S level interrupts \(ctrl-alt-delete or task manager\), arrange for other programs to interrupt \(such as screensavers or virus checkers\). Also try suspending an activity and returning later. 40 | * Undermining: Start using a function when the system is in an appropriate state, then change the state part way through \(for instance, delete a file while it is being edited, eject a disk, pull net cables or power cords\) to an inappropriate state. This is similar to interruption, except you are expecting the function to interrupt itself by detecting that it no longer can proceed safely. 41 | * Adjustments: Set some parameter to a certain value, then, at any later time, reset that value to something else without resetting or recreating the containing document or data structure. 42 | * Dog Piling: Get more processes going at once; more states existing concurrently. Nested dialog boxes and non-modal dialogs provide opportunities to do this. 43 | * Continuous Use: While testing, do not reset the system. Leave windows and files open. Let disk and memory usage mount. You're hoping that the system ties itself in knots over time. 44 | * Feature Interactions: Discover where individual functions interact or share data. 
Look for any interdependencies. Tour them. Stress them. I once crashed an app by loading up all the fields in a form to their maximums and then traversing to the report generator. Look for places where the program repeats itself or allows you to do the same thing in different places. 45 | * Click for Help: At some point, some users are going to try to bring up the context-sensitive help feature during some operation or activity. Does the product’s help file explain things in a useful way, or does it offend the user’s intelligence by simply restating what’s already on the screen? Is help even available at all? 46 | * Input Constraint Attack: Discover sources of input and attempt to violate constraints on that input. For instance, use a geometrically expanding string in a field. Keep doubling its length until the product crashes. Use special characters. Inject noise of any kind into a system and see what happens. Use Satisfice’s PerlClip utility to create strings of arbitrary length and content; use PerlClip’s counterstring feature to create a string that tells you its own length so that you can see where an application cuts off input. 47 | * Click Frenzy: Ever notice how a cat or a kid can crash a system with ease? Testing is more than "banging on the keyboard", but that phrase wasn't coined for nothing. Try banging on the keyboard. Try clicking everywhere. I broke into a touchscreen system once by poking every square centimeter of every screen until I found a secret button. 48 | * Shoe Test: This is any test consistent with placing a shoe on the keyboard. Basically, it means using auto-repeat on the keyboard for a very cheap stress test. Look for dialog boxes so constructed that pressing a key leads to, say, another dialog box \(perhaps an error message\) that also has a button connected to the same key that returns to the first dialog box. That way you can place a shoe \(or Coke can, as I often do, but sweeping off a cowboy boot has a certain drama to it\) on the keyboard and walk away. Let the test run for an hour. If there’s a resource or memory leak, this kind of test will expose it. 49 | * Blink Test: Find some aspect of the product that produces huge amounts of data or does some operation very quickly. For instance, look at a long log file or browse database records very quickly. Let the data go by too quickly to see in detail, but notice trends in length or look or shape of the data. Some bugs that are hard to see with detailed analysis are easy to see this way. Use Excel’s conditional formatting feature to highlight interesting distinctions between cells of data. 50 | * Error Message Hangover: Make error messages happen and test hard after they are dismissed. Often developers handle errors poorly. 51 | 52 | * Resource Starvation: Progressively lower memory, disk space, display resolution, and other resources until the product collapses, or gracefully \(we hope\) degrades. 53 | 54 | * Multiple Instances: Run a lot of instances of the app at the same time. Open the same files. Manipulate them from different windows. 55 | * Crazy Configs: Modify the operating system’s configuration in non-standard or non-default ways either before or after installing the product. Turn on “high contrast” accessibility mode, or change the localization defaults. Change the letter of the system hard drive. Consider that the product has configuration options, too—change them or corrupt them in a way that should trigger an error message or an appropriate default behavior. 
56 | * Cheap Tools: Learn how to use InCtrl5, Filemon, Regmon, AppVerifier, Perfmon, Process Explorer, and Task Manager \(all of which are free\). Have these tools on a thumb drive and carry it around. Also, carry a digital camera. I now carry a tiny 3 megapixel camera and a tiny video camera. Both fit into my coat pockets. I use them to record screen shots and product behaviors. While it’s not cheap, you can usually find Excel on most Windows systems; use it to create test matrices, tables of test data, charts that display performance results, and so on. Use the World-Wide Web Consortium’s HTML Validator at [http://validator.w3c.org](http://validator.w3c.org). Pay special attention to tools that hackers use; these tools can be used for good as well as for evil. Netcat, Burp Proxy, wget, and fuzzer are but a few examples. 57 | 58 | -------------------------------------------------------------------------------- /test-design/README.md: -------------------------------------------------------------------------------- 1 | # Test Design 2 | 3 | Test Design is about creating effective tests. 4 | 5 | This catalog is an attempt to research and consolidate the valuable test design content out there in the world and bring it into one place. Maybe one day I'll create a separate website for the social good of the software community. I believe articles, videos, books and examples that others can use to develop competence in these skills will help the community at large. -------------------------------------------------------------------------------- /test-design/automate.md: -------------------------------------------------------------------------------- 1 | # What part of the process do you want to automate? 2 | 3 | - configuring the system 4 | - generating data for the system 5 | - operating some aspect of the system 6 | 7 | - observing aspects of the state of the system before/after configuration 8 | - observing some aspects of the system's behaviour 9 | - observing some aspects of the system's output 10 | - observing records that the system produces as it behaves (i.e., typically, log files) 11 | - observing aspects of the state of the system after it has performed some function 12 | 13 | - applying decision rules to the operations and observations above 14 | - aggregating the outcome of any of the operations or observations above 15 | - parsing, sorting, or filtering data 16 | - performing analysis of that data 17 | - comparing that output to some specified and desired (or perhaps undesired) output 18 | - alerting someone to the outcomes of operations and observations 19 | 20 | - representing the aggregated outcomes in different forms (for example, rendering data from log files as tables, charts, or graphs, in order to help testers see patterns of behavior or output) 21 | 22 | 23 | Scientific Method says: 24 | 25 | 1. Observe & Question 26 | 2. Form a Hypothesis 27 | 3. Test with experiment 28 | 4. Analyze data 29 | 5. Interpret the data 30 | 6. Report conclusions -------------------------------------------------------------------------------- /test-design/domain.md: -------------------------------------------------------------------------------- 1 | # Domain Testing 2 | 3 | Domain Testing is our field's most widely taught technique. You might know it by another name: equivalence class analysis or boundary testing. Together these are described as Domain Testing, where a domain is a set of values. 4 | 5 | - Equivalence class is about finding similarity. 
Two values are in the same class if they are so similar the program would treat them the same way. Testing with one or two best representatives from each class allows us to substantially reduce the number of required tests. 6 | 7 | - Boundary testing is about selecting and using a boundary value in an attempt to show the program inappropriately accepts a value instead of rejecting it. 8 | 9 | It should be possible to automate domain tests. 10 | 11 | 12 | ## Domain Testing Schema Overview 13 | 14 | - What are the potentially interesting variables? 15 | - Which variable are you analyzing? 16 | - What is the primary dimension of this variable? 17 | - What is the type and scale of the primary dimension? 18 | - Can you order the variable? 19 | - Is this variable an input or result? Why? 20 | - How does the program use this variable? 21 | - What other variables are related to this one? 22 | - How would you partition the variable? 23 | - Lay out the analysis in a classical boundary/equivalence table, including best representatives. 24 | - What tests would you create for the consequences? 25 | - What are some of the secondary dimensions that apply to this variable? 26 | - Summarize your analysis with a risk/equivalence table. 27 | - Analyze independent variables that should be tested together 28 | - Analyze variables that hold results 29 | - Analyze non-independent variables. Deal with relationships and constraints 30 | - Identify and list unanalyzed variables. Gather information for later analysis 31 | - Imagine and document risks that don't necessarily map to an obvious dimension -------------------------------------------------------------------------------- /test-design/functional.md: -------------------------------------------------------------------------------- 1 | # Functional Testing 2 | 3 | The goal of functional testing is to focus on individual functions, testing them one by one. Often this goal is difficult to achieve on its own. 4 | 5 | Possible Strategy: 6 | 7 | 1. Create a function list (an outline of functions in the app or in the release) 8 | - Initial function list can be an outline 9 | - Becomes more fully-detailed over time (if this is your primary technique) 10 | 2. Do this testing first, before attempting more complex tests that involve several functions; it might find blockers 11 | -------------------------------------------------------------------------------- /test-design/oracles.md: -------------------------------------------------------------------------------- 1 | # Test Oracles 2 | 3 | Oracles are interesting \(terminology\) because the original oracles were from Greece and were mythological. In software testing, an oracle is a tool that helps you decide whether the program passed your test. Oracles are heuristic - they're fallible. 4 | 5 | ## A List of Partial Oracles 6 | 7 | This list can be used in place of the tables from the AST-BBST Foundations Lectures. 8 | 9 | * **Constraint oracle:** We use the constraint oracle to check for impossible values or impossible relationships. For example, an American ZIP code must be 5 or 9 digits. If you see something that is non-numeric or some other number of digits, it cannot be a ZIP code. A program that produces such a thing as a ZIP code has a bug. 10 | * **Regression oracle:** We use the regression oracle to check results of the current test against results of execution of the same test on a previous version of the product. 11 | * **Self-verifying data:** We use self-verifying data as an oracle. 
In this case, we embed the correct answer in the test data. For example, if a protocol specifies that when a program sends a message to another program, the other one will return a specific response \(or one of a few possible responses\), the test could include the acceptable responses. An automated test would generate the message, then check whether the response was in the list or was the specific one in the list that is expected for this message under this circumstance. 12 | * **Physical model:** We use a physical model as an oracle when we test a software simulation of a physical process. For example, does the movement of a character or object in a game violate the laws of gravity? 13 | * **Business model:** We use a business model the same way we use a physical model. If we have a model of a system, we can make predictions about what will happen when event X takes place. The model makes predictions. If the software emulates the business process as we intend, it should give us behavior that is consistent with those predictions. Of course, as with all heuristics, if the program "fails" the test, it might be the model that is wrong. 14 | * **Statistical model:** We use a statistical model to tell us that a certain behavior or sequence of behaviors is very unlikely, or very unlikely in response to a specific action. The behavior is not impossible, but it is suspicious. We can test whether the actual behavior in the test is within the tolerance limits predicted by the model. This is often useful for looking for patterns in larger sets of data \(longer sequences of tests\). For example, suppose we expect an eCommerce website to get 80% of its customers from the local area, but in beta trials of its customer-analysis software, the software reports that 70% of the transactions that day were from far away. Maybe this was a special day, but probably this software has a bug. If we can predict a statistical pattern \(correlations among variables, for example\), we can check for it. 15 | * **State model:** Another type of statistical oracle starts with an input stream that has known statistical characteristics and then checks the output stream to see if it has the same characteristics. For example, send a stream of random packets, compute statistics of the set, and then have the target system send back the statistics of the data it received. If this is a large data set, this can save a lot of transmission time. Testing transmission using checksums is an example of this approach. \(Of course, if a message has a checksum built into the message, that is self-verifying data.\) 16 | * **Interaction model:** We use a state model to specify what the program does in response to an input that happens when it is in a known state. A full state model specifies, for every state the program can be in, how the program will respond \(what state it will transition to\) for every input. 17 | * **Calculation oracle:** We use calculation oracles to check the calculations of a program. For example, if the program adds 5 numbers, we can use some other program to add the 5 numbers and see what we get. Or we can add the numbers and then successively subtract one at a time to see if we get a zero. 18 | * **Inverse oracle:** The inverse oracle is often a special case of a calculation oracle \(the square of the square root of 2 should be 2\) but not always. For example, imagine taking a list that is sorted low to high, sorting it high to low and then sorting it low to high. Do we get back the same list? 
19 | * **Reference program:** The reference program generates the same responses to a set of inputs as the software under test. Of course, the behavior of the reference program will differ from the software under test in some ways \(they would be identical in all ways only if they were the same program\). For example, the time it takes to add 1000 numbers might be different in the reference program versus the software under test, but if they ultimately yield the same sum, we can say that the software under test passed the test. 20 | 21 | Note: This list is incomplete and additional oracles \(specific enough to support automation\) should be added to this list. 22 | 23 | -------------------------------------------------------------------------------- /test-design/regression.md: -------------------------------------------------------------------------------- 1 | # Regression Testing 2 | 3 | **RCRCRC** 4 | 5 | A regression testing heuristic. 6 | 7 | * Recent: new features, new areas of code are more vulnerable 8 | * Core: essential functions must continue to work 9 | * Risk: some areas of an application pose more risk 10 | * Configuration sensitive: code that’s dependent on environment settings can be vulnerable 11 | * Repaired: bug fixes can introduce new issues 12 | * Chronic: some areas in an application may be perpetually sensitive to breaking 13 | * [http://karennicolejohnson.com/2009/11/a-heuristic-for-regression-testing/](http://karennicolejohnson.com/2009/11/a-heuristic-for-regression-testing/) 14 | 15 | -------------------------------------------------------------------------------- /test-design/security.md: -------------------------------------------------------------------------------- 1 | # Security 2 | 3 | The following should help with thinking about and testing for software security. 4 | 5 | ## RISKS / Potential Failures 6 | 7 | (Might make sense to move this into data or into a separate area) 8 | 9 | * [Reset Password, token sent in reply](https://www.troyhunt.com/hacking-grindr-accounts-with-copy-and-paste/). Allows hacker to reset password and take over account. 10 | 11 | 12 | ## MNEMONICs 13 | 14 | * [EXTERMINATE](https://www.slideshare.net/andreicontan/daniel-billing-exploring-the-security-testers-toolbox) 15 | * STRIDE 16 | 17 | **STRIDE** 18 | 19 | The STRIDE Threat Model for security testing. 20 | 21 | * Spoofing identity. An example of identity spoofing is illegally accessing and then using another user's authentication information, such as username and password. 22 | * Tampering with data. Data tampering involves the malicious modification of data. Examples include unauthorized changes made to persistent data, such as that held in a database, and the alteration of data as it flows between two computers over an open network, such as the Internet. 23 | * Repudiation. Repudiation threats are associated with users who deny performing an action without other parties having any way to prove otherwise—for example, a user performs an illegal operation in a system that lacks the ability to trace the prohibited operations. Nonrepudiation refers to the ability of a system to counter repudiation threats. For example, a user who purchases an item might have to sign for the item upon receipt. The vendor can then use the signed receipt as evidence that the user did receive the package. 24 | * Information disclosure. 
Information disclosure threats involve the exposure of information to individuals who are not supposed to have access to it—for example, the ability of users to read a file that they were not granted access to, or the ability of an intruder to read data in transit between two computers. 25 | * Denial of service. Denial of service \(DoS\) attacks deny service to valid users—for example, by making a Web server temporarily unavailable or unusable. You must protect against certain types of DoS threats simply to improve system availability and reliability. 26 | * Elevation of privilege. In this type of threat, an unprivileged user gains privileged access and thereby has sufficient access to compromise or destroy the entire system. Elevation of privilege threats include those situations in which an attacker has effectively penetrated all system defenses and become part of the trusted system itself, a dangerous situation indeed. 27 | 28 | According to the website: When you are considering threats, it is useful to ask questions such as these: 29 | 30 | * How can an attacker change the authentication data? 31 | * What is the impact if an attacker can read the user profile data? 32 | * What happens if access is denied to the user profile database? 33 | * [https://msdn.microsoft.com/en-US/library/ee823878.aspx](https://msdn.microsoft.com/en-US/library/ee823878.aspx) 34 | 35 | -------------------------------------------------------------------------------- /test-design/test_techniques.md: -------------------------------------------------------------------------------- 1 | # Test Techniques 2 | 3 | Most of this information is from the BBST Test Design slides, the BBST Exploratory Testing slides, and Lessons Learned in Software Testing. Also from Kaner & Fiedler's Test Techniques Mind map. 4 | 5 | This isn't meant to be a complete taxonomy but a partial listing. 6 | 7 | A test technique is a method of designing, running and interpreting the results of tests. 8 | 9 | ## Testers 10 | 11 | People-Based Testing; Who does the testing? 12 | 13 | Tester-based techniques focus on who does the testing, which might have little to do with what you imagine will happen. 14 | 15 | ### User testing 16 | 17 | - Testing with the types of people who typically use the product. - Kaner, Bach, Pettichord (2002) 18 | 19 | ### Alpha testing 20 | 21 | - In-house testing performed by the test team. - Kaner, Bach, Pettichord (2002) 22 | 23 | ### Beta testing 24 | 25 | - A type of user testing that uses testers who aren't part of the organization and who are members of your product's target market. - Kaner, Bach, Pettichord (2002) 26 | 27 | ### Bug bashes 28 | 29 | - In-house testing using anyone who is available. - Kaner, Bach, Pettichord (2002) 30 | 31 | ### Subject-matter expert testing 32 | 33 | - Give the product to an expert on some issues addressed by the software and request feedback (bugs, criticisms, and compliments). - Kaner, Bach, Pettichord (2002) 34 | 35 | ### Paired testing 36 | 37 | - Two testers work together to find bugs. 38 | 39 | ### Eat your own dog food 40 | 41 | - Your company uses and relies on prerelease versions of its own software. 42 | 43 | ### Localization testing 44 | 45 | ## Coverage 46 | 47 | Coverage-Based Testing is what gets tested. 48 | 49 | In principle, coverage-based techniques direct you to run every test of a given type. In practice, you probably won't, but you might measure your coverage of that type of testing. 50 | 51 | ### Function testing 52 | 53 | - Test every function, one by one. 
Test the function thoroughly, to the extent you can say with confidence that the function works. 54 | - Create a Function List 55 | - Glass box function testing is usually called unit testing 56 | - Sometimes called feature testing 57 | 58 | ### Function integration testing 59 | 60 | - Test several functions or features together, to see how they work together. 61 | - Sometimes called feature integration testing 62 | 63 | ### Menu tour 64 | 65 | - Walk through all of the menus and dialogs in a GUI product, taking every choice available. - Kaner, Bach, Pettichord (2002) 66 | 67 | ### Domain testing 68 | 69 | - A domain is a (mathematical) set that includes all possible values of a variable of a function. The variables might be input or output variables. For each variable, you partition its set of possible values into equivalence classes and pick a small number of representatives from each class. - Kaner, Bach, Pettichord (2002) 70 | - Use domain testing when you are just learning the program; working through the program with the goal of finding every variable and testing it is a good way to find bugs. 71 | 72 | ### Equivalence class analysis 73 | 74 | - An equivalence class is a set of values for a variable that you consider equivalent. Tests are equivalent if you believe that they all do the same thing. Once you've found an equivalence class, test only one or two of its members. 75 | - Part of domain testing 76 | 77 | ### Boundary testing 78 | 79 | - An equivalence class is a set of values. If you map them onto a number line, the boundary values are the smallest and largest members of the class. In boundary testing you test these and you also test the boundary values of nearby classes that are just smaller than the smallest member of the class you're testing and just larger than the largest member of the class you're testing. 80 | - Part of domain testing 81 | 82 | ### Best representative testing 83 | 84 | - A best representative of an equivalence class is a value that is at least as likely as any other value in the class to expose an error in the software. 85 | 86 | ### Test Idea Catalogs 87 | 88 | - For each type of input field, you can develop a fairly standard set of tests and reuse it for similar fields in this product and later products. - Kaner, Bach, Pettichord (2002) 89 | - Also called input field test catalogs or matrices 90 | 91 | ### Map and test all the ways to edit a field 92 | 93 | - You can often change the value of a field in several ways. - Kaner, Bach, Pettichord (2002) 94 | 95 | ### Logic testing 96 | 97 | - Variables have relationships in the program. Logic testing attempts to check every logical relationship in the program. Cause-effect graphing works here. 98 | 99 | ### State-based testing 100 | 101 | - A program moves from state to state. In a given state, some inputs are valid and others are ignored or rejected. In state-based testing, you walk the program through a large set of state transitions (state changes) and check the results carefully, every time. 102 | - Sometimes called State Transitions 103 | 104 | ### Path testing 105 | 106 | - A path includes all of the steps that you took or all of the statements that the program passed through in order to get to your current state. Path testing involves many paths through the program. - Kaner, Bach, Pettichord (2002) 107 | 108 | ### Statement and branch coverage 109 | 110 | - You achieve 100 percent statement coverage when your tests execute every statement (or line of code) in the program. 
You achieve 100 percent statement and branch coverage if you execute every statement and every branch from one statement to another. 111 | 112 | ### Configuration coverage 113 | 114 | - Configuration coverage measures the percentage of configuration tests that you have run (and the program has passed), compared to the total number of configuration tests that you plan to run. 115 | 116 | ### Specification-based testing 117 | 118 | - Testing focused on verifying every factual claim (shown to be true or false) that is made about the product in the specification. - Kaner, Bach, Pettichord (2002) 119 | 120 | ### Requirements-based testing 121 | 122 | - Testing focused on proving that the program satisfies every requirement in a requirements document. - Kaner, Bach, Pettichord (2002) 123 | 124 | ### Combination testing 125 | 126 | - Testing two or more variables in combination with each other. - Kaner, Bach, Pettichord (2002) 127 | 128 | ### Localization testing 129 | 130 | - Testing focused on verifying that the program conforms to a particular locale or culture. 131 | 132 | ### Compliance-driven testing 133 | 134 | ### User Interface testing 135 | 136 | ### Tours 137 | 138 | I've listed all of the tours from the BBST Test Design class, but that's it. There are many more tours available. 139 | 140 | - Function or feature tour 141 | 142 | - Menus and Windows tour 143 | - Mouse and Keyboard tour 144 | 145 | - Transaction tour 146 | - Error message tour 147 | - Variables tour 148 | - Data tour 149 | - Sample Data tour 150 | - Structure tour 151 | 152 | - Code 153 | - Data 154 | - Interfaces 155 | 156 | - Operational modes tour 157 | - Sequence tour 158 | - Claims tour 159 | - Benefits tour 160 | - Market Context tour 161 | - User tour 162 | - Life History tour 163 | 164 | - Leads into Scenario Testing 165 | 166 | - Configuration tour 167 | - Interoperability tour 168 | - Compatibility tour 169 | - Testability tour 170 | - Specified-risk tour 171 | 172 | - Leads to Risk Catalog 173 | 174 | - Extreme Value tour 175 | - Complexity tour 176 | 177 | ## Risks 178 | 179 | Risk-Based Testing; Potential problems, or why you are testing (the risks you are testing for). 180 | 181 | ### Boundary testing 182 | 183 | ### Quicktests 184 | 185 | - A quicktest is an inexpensive test, optimized for a common type of software error, that requires little time or product-specific preparation or knowledge to perform. They are a great way to start testing a product. - BBST Foundations 186 | 187 | ### Constraints 188 | 189 | - Input constraints 190 | 191 | - A constraint is a limit on what the program can handle. - Kaner, Bach, Pettichord (2002) 192 | 193 | - Output constraints 194 | 195 | - The inputs were legal, but they led to output values that the program could not handle. - Kaner, Bach, Pettichord (2002) 196 | 197 | - Computation constraints 198 | 199 | - The inputs and outputs are fine but in the course of calculating a value (that will lead to an output), the program fails. - Kaner, Bach, Pettichord (2002) 200 | 201 | - Storage (or data) constraints 202 | 203 | - Inputs, outputs, and calculations are legal, but the operations run the program out of memory or yield data files that are too enormous to process. 
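The constraint attacks above pair naturally with the Input Constraint Attack quicktest and PerlClip's counterstring idea mentioned earlier. Below is a minimal sketch in Python, my own illustration rather than anything from the cited sources; the `counterstring` function name and the probe sizes are assumptions made for the example. It builds a self-describing string you can paste into a field to see exactly where input gets truncated.

```python
def counterstring(length: int, marker: str = "*") -> str:
    """Build a string of exactly `length` characters where each number gives
    the 1-based position of the marker that follows it. If the field under
    test truncates the input, the last visible marker reveals how many
    characters actually survived."""
    tokens = []
    remaining = length
    while remaining > 0:
        token = f"{remaining}{marker}"
        if len(token) > remaining:
            token = marker * remaining  # not enough room for a full token; pad
        tokens.append(token)
        remaining -= len(token)
    return "".join(reversed(tokens))


if __name__ == "__main__":
    # Geometrically expanding probes, in the spirit of the input-constraint attack.
    for size in (32, 256, 2048, 16384):
        probe = counterstring(size)
        assert len(probe) == size
        print(size, probe[-12:])  # the tail shows the highest position marker
```

The same doubling loop also works as a cheap storage-constraint probe: keep increasing the size until the field, the message, or the file that stores it misbehaves.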
204 | 205 | ### Local expressions 206 | 207 | ### Stress testing 208 | 209 | ### Load testing 210 | 211 | ### Performance testing 212 | 213 | ### History-based testing 214 | 215 | ### Risk-based multivariable testing 216 | 217 | ### Usability testing 218 | 219 | ### Configuration/compatibility testing 220 | 221 | ### Interoperability testing 222 | 223 | ### Long-sequence regression 224 | 225 | ### Additional tips 226 | 227 | - If you do risk-based testing, also do comparable non-risk-based testing to test for the risks you missed. 228 | - Test for timing issues. 229 | - When you create a test, always create a test procedure that will force the program to use the test data that you have entered, allowing you to determine whether it's using that data incorrectly. - Whittaker (2002) 230 | 231 | ## Activities 232 | 233 | Activity-Based Testing; How you test. 234 | 235 | These techniques focus on "how to" test and might most closely match the classical notion of a "technique." 236 | 237 | ### Guerilla testing 238 | 239 | - A fast and vicious attack on the program. A form of exploratory testing that is usually time-boxed and done by an experienced exploratory tester. - Kaner, Bach, Pettichord (2002) 240 | 241 | ### All-Pairs testing 242 | 243 | ### Random testing 244 | 245 | ### Use cases 246 | 247 | - Tests derived from use cases. Also called use case flow tests or scenario tests. 248 | 249 | ### Scenario testing 250 | 251 | - A scenario test normally involves four attributes: (1) the test must be realistic and should reflect something that customers must do; (2) the test should be complex, involving several features, in a way that should be challenging to the program; (3) it should be easy and quick to tell whether the program passed or failed the test; (4) a stakeholder is likely to argue vigorously that the program should be fixed if it fails this test. - Kaner, Bach, Pettichord (2002) 252 | 253 | ### Installation testing 254 | 255 | - Install the software in the various ways and on the various types of systems that it can be installed. Check which files are added or changed on disk. Does the installed software work? What happens when you uninstall? - Kaner, Bach, Pettichord (2002) 256 | 257 | ### Regression testing 258 | 259 | - Regression testing involves reuse of the same tests, so you can retest after change. - Kaner, Bach, Pettichord (2002) 260 | - Bug fix regression 261 | 262 | - After reporting a bug and hearing later on that it's fixed, try to prove the fix was no good. 263 | 264 | - Old bugs regression 265 | 266 | - Prove that a change to the software has caused an old bug fix to become unfixed. 267 | 268 | - Side-effect regression (stability) 269 | 270 | - The goal is to prove that the change has caused something that used to work to now be broken. 271 | 272 | ### Load testing 273 | 274 | - The program or system under test is attacked by being run on a system that is facing many demands for resources. Under a high load, the system will probably fail, but the pattern of events leading to the failure will point to vulnerabilities in the software or system under test that might be exploited under more normal use. 275 | 276 | ### Long-sequence testing 277 | 278 | - Testing done overnight or for days or weeks. The goal is to discover errors that short-sequence tests will miss. Examples of errors include wild pointers, memory leaks, stack overflows, etc. 
- Kaner, Bach, Pettichord (2002) 279 | 280 | ### Performance testing 281 | 282 | - These tests are usually run to determine how quickly the program runs, in order to decide whether optimization is needed. But the tests can expose many other bugs. 283 | 284 | ### High Volume Automated Testing 285 | 286 | - http://kaner.com/?p=278 287 | 288 | - A general concept with multiple techniques 289 | 290 | ## Evaluation / Oracle 291 | 292 | Evaluation-Based Testing. 293 | 294 | How to tell whether the test passed or failed. Build a set of tests around a well-specified oracle 295 | 296 | An oracle is an evaluation tool that will tell you whether the program has passed or failed a test. In HiVAT the oracle is probably another program that generates results or checks the software under test's results. The oracle is generally more trusted than the software under test, so a concern flagged by the oracle is worth spending time and effort to check. 297 | 298 | ### Function equivalent testing 299 | 300 | ### Mathematical oracle 301 | 302 | ### Constraint checks 303 | 304 | ### Self-verifying data 305 | 306 | - The data files you use in testing carry information that lets you determine whether the output data is corrupt. 307 | 308 | ### Comparison with saved results 309 | 310 | - Regression testing (typically, but not always automated) in which pass or fail is determined by comparing the results you got today with the results from last week. If the result was correct last week, and it's different now, the difference might reflect a new defect. 311 | 312 | ### Comparison with a specification or other authoritative document 313 | 314 | - A mismatch with the specification could be an error. 315 | 316 | ### Heuristic consistency 317 | 318 | - Consistency is an important criterion for evaluating a program. Inconsistency may be a reason to report a bug, or it may reflect intentional design variation. 319 | - We work with 7 main consistency heuristics 320 | 321 | ### Diagnostics-based testing 322 | 323 | ### Verifiable state models 324 | 325 | ## Desired Result 326 | 327 | Document focused testing runs a set of tests primarily to collect data needed to fill out a form or create a clearly-structured report 328 | 329 | ### Smoke testing 330 | 331 | - Same thing as build verification? 332 | - This type of side-effect regression testing is done with the goal of proving that a new build is not worth testing. 333 | 334 | ### Confirmation testing 335 | 336 | ### User acceptance testing 337 | 338 | ### Certification testing 339 | 340 | ## Glass box techniques 341 | 342 | ### Unit tests 343 | 344 | ### Functional tests below the UI level 345 | 346 | ### Boundary testing 347 | 348 | ### State transitions 349 | 350 | ### Risk-based 351 | 352 | ### Dataflows 353 | 354 | ### Program slicing 355 | 356 | ### Protocol testing 357 | 358 | ### Diagnostics-driven testing 359 | 360 | ### Performance testing 361 | 362 | ### Compliance-focused testing 363 | 364 | ### Glass-box regression testing 365 | 366 | ### Glass-box decision coverage 367 | 368 | ### Glass-box path coverage 369 | 370 | ## Testing Approaches 371 | 372 | Testing approaches are different than techniques. Include the BBST definitions. 373 | 374 | You can use any techniques across any approach. Approaches just like techniques can overlap. 375 | 376 | ### Exploratory testing 377 | 378 | - We expect the tester to learn, throughout the project, about the project, its market, its risk, and the ways in which it has failed previous tests. 
New tests are constantly created and used. They're more powerful than older tests because they're based on the tester's continuously increasing knowledge. 379 | 380 | ### Scripted testing 381 | 382 | - Manual testing, typically done by a junior tester who follows a step-by-step procedure written by a more senior tester. 383 | - Automated test execution or comparison by a machine. 384 | 385 | ### Glass box testing 386 | 387 | ### Black box testing 388 | 389 | -------------------------------------------------------------------------------- /test-design/unit.md: -------------------------------------------------------------------------------- 1 | # Unit Testing 2 | 3 | A unit test is a small test that makes sure a certain unit of your codebase works as intended. Unit tests have the smallest and narrowest scope of all your tests, which results in very quick run times. What is a unit? It depends on your context. 4 | 5 | Here's an example from Martin Fowler: 6 | "If you're working in a functional language a unit will most likely be a single function. Your unit tests will call a function with different parameters and ensure that it returns the expected values. In an object-oriented language a unit can range from a single method to an entire class." [Martin Fowler](https://martinfowler.com/articles/practical-test-pyramid.html) 7 | 8 | 9 | ## Test Doubles 10 | 11 | Test doubles replace a real thing with a fake one, typically returning a canned response that you specify ahead of time in your test. 12 | 13 | - Dummies: objects are passed around but never actually used. Usually they are just used to fill parameter lists. 14 | - Fakes: objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an InMemoryTestDatabase is a good example). 15 | - Mocks: are pre-programmed with expectations which form a specification of the calls they are expected to receive. They can throw an exception if they receive a call they don't expect and are checked during verification to ensure they got all the calls they were expecting. 16 | - Spies: are stubs that also record some information based on how they were called. One form of this might be an email service that records how many messages it was sent. 17 | - Stubs: provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test. 18 | 19 | Definitions from [Martin Fowler](https://martinfowler.com/bliki/TestDouble.html) --------------------------------------------------------------------------------
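To make the stub/mock distinction above concrete, here is a minimal sketch using Python's built-in `unittest.mock`. The `WelcomeSender` class and its email-service collaborator are hypothetical names invented for this example (they are not from Fowler's article); the two tests differ only in intent.

```python
from unittest import mock


class WelcomeSender:
    """Sends a welcome email through whatever email service it is given."""

    def __init__(self, email_service):
        self.email_service = email_service

    def welcome(self, user):
        subject = f"Welcome, {user}!"
        return self.email_service.send(to=user, subject=subject)


def test_welcome_returns_service_result():
    # Stub-style use: canned answer, no assertions about how it was called.
    stub_service = mock.Mock()
    stub_service.send.return_value = "queued"
    assert WelcomeSender(stub_service).welcome("ada@example.com") == "queued"


def test_welcome_sends_exactly_one_email():
    # Mock-style use: the test verifies the interaction itself.
    mock_service = mock.Mock()
    WelcomeSender(mock_service).welcome("ada@example.com")
    mock_service.send.assert_called_once_with(
        to="ada@example.com", subject="Welcome, ada@example.com!"
    )
```

Run with pytest (or any runner that discovers `test_` functions): the first test passes as long as the canned return value flows through, while the second fails if the send call is missing, duplicated, or made with different arguments.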