├── .github
│   └── workflows
│       └── rust.yml
├── .gitignore
├── Cargo.toml
├── LICENSE
├── README.md
├── example.png
├── examples
│   ├── all_keywords.py
│   ├── line_ending_difference.py
│   ├── multiline.py
│   └── tab_difference.py
├── rustfmt.toml
├── src
│   ├── config.rs
│   ├── diff_printer.rs
│   ├── error.rs
│   ├── lib.rs
│   ├── main.rs
│   └── runner.rs
└── tests
    └── tests.rs

/.github/workflows/rust.yml:
--------------------------------------------------------------------------------
name: Rust

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

env:
  CARGO_TERM_COLOR: always

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
      - name: Build
        run: cargo build --verbose
      - name: Run tests
        run: cargo test --verbose

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
/target
Cargo.lock
.vscode

--------------------------------------------------------------------------------
/Cargo.toml:
--------------------------------------------------------------------------------
[package]
name = "goldentests"
version = "1.3.1"
authors = ["Jake Fecher"]
edition = "2018"
license-file = "LICENSE"
keywords = ["testing", "tests", "golden"]
categories = ["development-tools::testing"]
description = "A golden file testing library where tests can be configured within the same test file"
homepage = "https://github.com/jfecher/golden-tests"
repository = "https://github.com/jfecher/golden-tests"
readme = "README.md"
documentation = "https://docs.rs/goldentests"

[lib]
name = "goldentests"

[[bin]]
name = "goldentests"
required-features = ["binary"]
doc = false

[dependencies]
colored = "2.0.0"
shlex = "1.1.0"
similar = "2.1.0"
rayon = { version = "1.5.1", optional = true }
indicatif = { version = "0.16.2", optional = true }

# clap is only needed for the goldentests binary;
# enabling it will have no effect on the library version.
clap = { version = "3.0.14", features = ["derive"], optional = true }

[features]
default = ["parallel"]
binary = ["parallel", "progress-bar", "clap"]
parallel = ["rayon"]
progress-bar = ["indicatif"]

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2020 jfecher

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
## Golden Tests

[![Build Status](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fjfecher%2Fgolden-tests%2Fbadge&style=flat)](https://actions-badge.atrox.dev/jfecher/golden-tests/goto)
[![crates.io](https://img.shields.io/crates/v/goldentests)](https://crates.io/crates/goldentests)
[![docs.rs](https://docs.rs/goldentests/badge.svg)](https://docs.rs/goldentests)

Golden tests is a golden file testing library designed so that tests
can be created and edited from the test files alone, without ever touching
the source code of your compiler, interpreter, or other tool.

### Why golden tests?

Golden tests allow you to specify the output of
some command within a file and automatically ensure
that that output doesn't change. If it does, goldentests
will show an error diff of the expected and actual
output. This way, whenever the output of something changes,
a human can see the change and decide if it should be kept
or if it is a bug and should be reverted.

### What are golden tests useful for?

Golden tests are especially useful for applications that
take a file as input and produce output of some kind. For
example: compilers and config-parsers (well, parsers in general)
are two such applications that can benefit from automated golden
tests. In the case of a config parser, you would be able to
provide many config examples as tests and ensure that your
parser was able to read the files with the expected stdout/stderr
output and exit code.

### Example Output

![example image](example.png)

### Getting Started

As of version 1.1, there are two ways to use goldentests: either as a
standalone binary or as a Rust integration test. If you want to run it as
a binary, continue on. If not, skip ahead to the next section. With that
out of the way, we can install goldentests via:

```sh
$ cargo install goldentests --features binary
```

An example usage looks like this:

```sh
$ goldentests /bin/python path-to-tests '# '
```

This tells goldentests to run `/bin/python` on each file in the `path-to-tests`
directory. You'll likely want to alias this command with your preferred arguments
for easier testing. An example test for us may look like this:

```py
print("Hello, World!")

# args: -b
# expected stdout:
# Hello, World!
```

This file tells goldentests to run the command `/bin/python -b path-to-tests/example.py` and issue
an error if the output of the command is not "Hello, World!".

Note that there are test keywords `args:` and `expected stdout:` embedded in the comments.
This is what the `'# '` parameter was when we invoked goldentests. You can change this parameter
to change the prefix that goldentests looks for when parsing a file. For most languages,
this should be a comment of some kind. E.g. if we were testing Haskell, we would use `-- `
as the test-line prefix.

#### As a rust integration test

The second way to use goldentests is as a Rust library for writing
integration tests. Using this method will have `goldentests` run
each time you call `cargo test`. To get started, plop this into your `Cargo.toml`:

```toml
[dev-dependencies]
goldentests = "1.3"
```

And create an integration test in `tests/goldentests.rs`. The specific name
doesn't matter as long as the test can be picked up by cargo. A typical usage
looks like this:

```rust
use goldentests::{ TestConfig, TestResult };

#[test]
fn run_golden_tests() -> TestResult<()> {
    let config = TestConfig::new("target/debug/my-binary", "my-test-path", "// ");
    config.run_tests()
}
```

This tells goldentests to find all files recursively in `my-test-path` and
run `target/debug/my-binary` on each of them to produce the expected
output. For example, if we're testing a compiler for a C-like language, a test
file for us may look like this:

```c
puts("Hello, World!");

// args: --run
// expected stdout:
// Hello, World!
```

This will run the command `target/debug/my-binary --run my-test-path/example.c` and will issue
an error if the output of the command is not "Hello, World!".

Note that there are test keywords `args:` and `expected stdout:` embedded in the comments.
This is what the `"// "` parameter was in the rust example. You can change this parameter
to change the prefix that goldentests looks for when parsing a file. For most languages,
this should be a comment of some kind. E.g. if we were testing Haskell, we would use `-- `
as the test-line prefix.

It can sometimes be convenient when using golden-tests via the Rust testing setup to have
arguments that are included by default for every program. These can be added by setting
the `base_args` and `base_args_after` fields of the `TestConfig` object. Among other things,
this can be used to easily re-run a set of tests with different arguments, as in the
sketch below.
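For example, here is a minimal sketch of a test that passes extra arguments to
every invocation. The `--check` and `--color never` flags are only placeholders
for whatever arguments your own program accepts:

```rust
use goldentests::{TestConfig, TestResult};

#[test]
fn run_golden_tests_with_base_args() -> TestResult<()> {
    let mut config = TestConfig::new("target/debug/my-binary", "my-test-path", "// ");
    // Passed before each test file path, in addition to any per-test `args:`.
    config.base_args = "--check".to_string();
    // Passed after each test file path, in addition to any per-test `args after:`.
    config.base_args_after = "--color never".to_string();
    config.run_tests()
}
```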
### Advanced Usage

Here is the full set of keywords goldentests looks for in the file:

- `args:`: Anything after this keyword will be used as command-line arguments for the
  program that was specified when creating the `TestConfig`. These arguments will all be placed before the file argument.
- `args after:`: Anything after this keyword will be used as command-line arguments for the
  program that was specified when creating the `TestConfig`. These arguments will all be placed after the file argument.
- `expected stdout:`: This keyword will continue reading characters, appending
  them to the expected stdout output, until it reaches a line that does not start with the test prefix
  ("// " in the example above). If the stdout from running the program differs from the string given here,
  an appropriate error will be issued with a diff. Defaults to `""`.
- `expected stderr:`: The same as `expected stdout:` but for the `stderr` stream. Also
  defaults to `""`.
- `expected exit status: [i32]`: If specified, goldentests will issue an error if the exit status differs
  from what is expected. Defaults to `None` (the exit status is ignored by default).

You can even configure the specific keywords used if you want, as sketched below. For any further
information, check out goldentests' documentation [here](https://docs.rs/goldentests).
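Here is a sketch of what that configuration can look like. The keyword strings are
arbitrary stand-ins; the parameter order follows `TestConfig::with_custom_keywords`
as defined in `src/config.rs` below:

```rust
use goldentests::{TestConfig, TestResult};

#[test]
fn run_golden_tests_with_custom_keywords() -> TestResult<()> {
    let config = TestConfig::with_custom_keywords(
        "target/debug/my-binary",
        "my-test-path",
        "// ",              // test_line_prefix
        "args:",            // test_args_prefix
        "args after:",      // test_args_after_prefix
        "output I want ->", // test_stdout_prefix, customized
        "errors I want ->", // test_stderr_prefix, customized
        "exit with ->",     // test_exit_status_prefix, customized
        false,              // overwrite_tests
    );
    config.run_tests()
}
```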
### Automatically updating tests

Optionally, tests can be automatically updated by passing the `--overwrite`
flag when running goldentests as a standalone program, or by setting the
`overwrite_tests` flag when running as a rust library. Doing this will update
the expected output in each file so that it matches the actual output. Since
this is all automatic, make sure to manually review any changes before using
this flag.
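With the library interface this is just a field on `TestConfig`. A minimal sketch:

```rust
use goldentests::{TestConfig, TestResult};

#[test]
fn update_golden_tests() -> TestResult<()> {
    let mut config = TestConfig::new("target/debug/my-binary", "my-test-path", "// ");
    // Equivalent to the binary's --overwrite flag: each failing test file is
    // rewritten so that its expected output matches the actual output.
    config.overwrite_tests = true;
    config.run_tests()
}
```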
### Features

Given below is a list of each crate feature as well as whether it is enabled by default:

- `binary` (disabled): Build `goldentests` as a standalone binary rather than a Rust testing library.
- `progress-bar` (disabled): Display a progress bar while testing. This is useful when running many
  tests, but because `cargo test` hides test output until it finishes, this feature is only enabled
  by default when `binary` is enabled. If you want to use it with `cargo test`, you can still enable
  the feature and pass the `--nocapture` flag through to the test binary (`cargo test -- --nocapture`).
- `parallel` (enabled): Run tests in parallel.

--------------------------------------------------------------------------------
/example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jfecher/golden-tests/9be5d9b4820ee354fe49d87500a37d111c10725e/example.png

--------------------------------------------------------------------------------
/examples/all_keywords.py:
--------------------------------------------------------------------------------
from __future__ import print_function
import sys

print("error!", file=sys.stderr)
sys.exit(3)

# args: -c 'print("test"); exec(open("examples/all_keywords.py").read())'
# expected exit status: 3
# expected stdout: test

# expected stderr: error!

--------------------------------------------------------------------------------
/examples/line_ending_difference.py:
--------------------------------------------------------------------------------
from __future__ import print_function

print("Test1\r\nTest2\n\n\n\nhi\n")

# Change the number of empty lines here to test error output
# expected stdout:
# Test1
# Test2
#
#
#
# hi

--------------------------------------------------------------------------------
/examples/multiline.py:
--------------------------------------------------------------------------------
for i in range(0, 10):
    print(i)

# expected stdout:
# 0
# 1
# 2
# 3
# 4
# 5
# 6
# 7
# 8
# 9

--------------------------------------------------------------------------------
/examples/tab_difference.py:
--------------------------------------------------------------------------------
 1 | # Make sure differences of a single space/tab can be spotted.
 2 | # Unfortunately, many terminals (including mine) don't support
 3 | # coloring the foreground or background of tab characters. So
 4 | # instead of trying to highlight the character, goldentests will
 5 | # issue a warning for Add lines that contain a tab character.
 6 |
 7 | # Tab used here
 8 | print("Hello,	Tab!");
 9 |
10 | # But space expected here (uncomment line 11 and comment line 14 for a test error):
11 | # # expected stdout:
12 | # Hello, Tab!
13 |
14 | # expected stdout:
15 | # Hello,	Tab!

--------------------------------------------------------------------------------
/rustfmt.toml:
--------------------------------------------------------------------------------
max_width = 120
chain_width = 100
imports_granularity = "crate"

--------------------------------------------------------------------------------
/src/config.rs:
--------------------------------------------------------------------------------
use std::path::PathBuf;

pub struct TestConfig {
    /// The binary path to your program, typically "target/debug/myprogram"
    pub binary_path: PathBuf,

    /// The path to the directory containing your tests, or a single test file.
    ///
    /// If this is a directory, it will be searched recursively for all files.
    pub test_path: PathBuf,

    /// The sequence of characters starting at the beginning of a line that
    /// all test options should be prefixed with. This is typically a comment
    /// in your language. For example, if we had a C-like language we could
    /// have "// " as the test_line_prefix to allow "expected stdout:" and friends
    /// to be read inside comments at the start of a line.
    pub test_line_prefix: String,

    /// The "args:" keyword used while parsing tests. Anything after
    /// `test_line_prefix + test_args_prefix` is read in as shell arguments to
    /// the program, passed before the test file path.
    pub test_args_prefix: String,

    /// The "args after:" keyword used while parsing tests. Anything after
    /// `test_line_prefix + test_args_after_prefix` is read in as shell
    /// arguments to the program, passed after the test file path.
    pub test_args_after_prefix: String,

    /// The "expected stdout:" keyword used while parsing tests. Any line starting
    /// with `test_line_prefix` after a line starting with `test_line_prefix + test_stdout_prefix`
    /// is appended to the expected stdout output. This continues until the first
    /// line that does not start with `test_line_prefix`.
    ///
    /// Example with `test_line_prefix = "// "` and `test_stdout_prefix = "expected stdout:"`
    /// ```rust
    /// // expected stdout:
    /// // first line of stdout
    /// // second line of stdout
    ///
    /// // Normal comment, expected stdout is done being read.
    /// ```
    pub test_stdout_prefix: String,

    /// The "expected stderr:" keyword used while parsing tests. Any line starting
    /// with `test_line_prefix` after a line starting with `test_line_prefix + test_stderr_prefix`
    /// is appended to the expected stderr output. This continues until the first
    /// line that does not start with `test_line_prefix`.
    ///
    /// Example with `test_line_prefix = "-- "` and `test_stderr_prefix = "expected stderr:"`
    /// ```haskell
    /// -- expected stderr:
    /// -- first line of stderr
    /// -- second line of stderr
    ///
    /// -- Normal comment, expected stderr is done being read.
    /// ```
    pub test_stderr_prefix: String,

    /// The "expected exit status:" keyword used while parsing tests. This will expect an
    /// integer after this keyword representing the expected exit status of the given test.
    ///
    /// Example with `test_line_prefix = "// "` and `test_exit_status_prefix = "expected exit status:"`
    /// ```rust
    /// // expected exit status: 0
    /// ```
    pub test_exit_status_prefix: String,

    /// Flag the current output as correct and regenerate the test files. Note that
    /// overwriting may rearrange the order of the goldentests sections within a file.
    pub overwrite_tests: bool,

    /// Arguments to always include in the command-line args for testing the program.
    /// For example, if this is `foo` and the test specifies `args: bar baz` then the
    /// binary will be invoked via `<binary> foo bar baz <file>`
    pub base_args: String,

    /// Arguments to always include in the command-line args for testing the program.
    /// For example, if this is `foo` and the test specifies `args after: bar baz` then the
    /// binary will be invoked via `<binary> <file> foo bar baz`
    pub base_args_after: String,
}

impl TestConfig {
    /// Creates a new TestConfig for the given binary path, test path, and prefix.
    ///
    /// If we were testing a C++-like language that uses `//` as its comment syntax, we
    /// may want our test keywords embedded in comments. Additionally, let's say our
    /// project is called "my-compiler" and our test path is "examples/goldentests".
    /// In that case we can construct a `TestConfig` like so:
    ///
    /// ```rust
    /// use goldentests::TestConfig;
    /// let config = TestConfig::new("target/debug/my-compiler", "examples/goldentests", "// ");
    /// ```
    ///
    /// This will give us the default keywords when parsing our test files which allows
    /// us to write tests such as the following:
    ///
    /// ```cpp
    /// std::cout << "Hello, World!\n";
    /// std::cerr << "Goodbye, World!\n";
    ///
    /// // These are args to your program, so this:
    /// // args: --run
    /// // Gets translated to: target/debug/my-compiler --run testfile
    ///
    /// // The expected exit status is optional, by default it is not checked.
    /// // expected exit status: 0
    ///
    /// // The expected stdout output however is mandatory. If it is omitted, it
    /// // is assumed that stdout should be empty after invoking the program.
    /// // expected stdout:
    /// // Hello, World!
    ///
    /// // The expected stderr output is also mandatory. If it is omitted it is
    /// // likewise assumed stderr should be empty.
    /// // expected stderr:
    /// // Goodbye, World!
    /// ```
    ///
    /// Note that we can still embed normal comments in the program even though our test
    /// line prefix was "// "! Any test line that doesn't start with a keyword like "args:"
    /// or "expected stdout:" is ignored unless it is following an "expected stdout:" or
    /// "expected stderr:", in which case it is appended to the expected output.
    ///
    /// If you want to change these default keywords you can also create a TestConfig
    /// via `TestConfig::with_custom_keywords` which will allow you to specify each.
    #[allow(unused)]
    pub fn new<Binary, Tests>(binary_path: Binary, test_path: Tests, test_line_prefix: &str) -> TestConfig
    where
        Binary: Into<PathBuf>,
        Tests: Into<PathBuf>,
    {
        TestConfig::with_custom_keywords(
            binary_path,
            test_path,
            test_line_prefix,
            "args:",
            "args after:",
            "expected stdout:",
            "expected stderr:",
            "expected exit status:",
            false,
        )
    }

    /// This function is provided in case you want to change the default keywords used when
    /// searching through the test file. This will let you change "expected stdout:"
    /// or any other keyword to "output I want ->" or any other arbitrary string so long as it
    /// does not contain "\n".
    ///
    /// If you don't want to change any of the defaults, you can use `TestConfig::new` to construct
    /// a TestConfig with the default keywords (which are listed in its documentation).
    pub fn with_custom_keywords<Binary, Tests>(
        binary_path: Binary,
        test_path: Tests,
        test_line_prefix: &str,
        test_args_prefix: &str,
        test_args_after_prefix: &str,
        test_stdout_prefix: &str,
        test_stderr_prefix: &str,
        test_exit_status_prefix: &str,
        overwrite_tests: bool,
    ) -> TestConfig
    where
        Binary: Into<PathBuf>,
        Tests: Into<PathBuf>,
    {
        let binary_path = binary_path.into();
        let test_path = test_path.into();

        let test_line_prefix = test_line_prefix.to_string();
        let prefixed = |s| format!("{}{}", test_line_prefix, s);

        TestConfig {
            binary_path,
            test_path,
            test_args_prefix: prefixed(test_args_prefix),
            test_args_after_prefix: prefixed(test_args_after_prefix),
            test_stdout_prefix: prefixed(test_stdout_prefix),
            test_stderr_prefix: prefixed(test_stderr_prefix),
            test_exit_status_prefix: prefixed(test_exit_status_prefix),
            test_line_prefix,
            overwrite_tests,
            base_args: String::new(),
            base_args_after: String::new(),
        }
    }
}

--------------------------------------------------------------------------------
/src/diff_printer.rs:
--------------------------------------------------------------------------------
use colored::{Color, ColoredString, Colorize};
use similar::{Change, ChangeTag, DiffOp, TextDiff};
use std::fmt::{Display, Error, Formatter};

pub struct DiffPrinter<'a>(pub TextDiff<'a, 'a, 'a, str>);

fn print_line_number(index: Option<usize>, f: &mut Formatter, colorizer: Colorizer) -> Result<(), Error> {
    let line_number = index.map_or_else(String::new, |line| (line + 1).to_string());
    let line_number_string = format!("{:>3}| ", line_number);

    write!(f, "{}", colorizer.color(false, &line_number_string))
}

fn fmt_line(f: &mut Formatter, index: Option<usize>, change: Change<&str>) -> Result<(), Error> {
    let colorizer = match change.tag() {
        ChangeTag::Delete => Colorizer::colored(Color::Red),
        ChangeTag::Equal => Colorizer::normal(),
        ChangeTag::Insert => Colorizer::colored(Color::Green),
    };
    print_line_number(index, f, colorizer)?;

    // The final change in a diff may not end with a newline, so only strip one if present.
    let line = change.to_string();
    let line = line.strip_suffix('\n').unwrap_or(&line);

    writeln!(f, "{}", colorizer.color(false, line))
}

#[derive(Copy, Clone)]
struct Colorizer {
    color: Color,
    pass: bool,
}

impl Colorizer {
    const fn colored(color: Color) -> Colorizer {
        Colorizer { color, pass: false }
    }

    const fn normal() -> Colorizer {
        Colorizer {
            color: Color::Black,
            pass: true,
        }
    }

    fn color(&self, background: bool, character: &str) -> ColoredString {
        if self.pass {
            character.normal()
        } else if background {
            character.on_color(self.color)
        } else {
            character.color(self.color)
        }
    }
}

impl Display for DiffPrinter<'_> {
    fn fmt(&self, f: &mut Formatter) -> Result<(), Error> {
        for op in self.0.ops() {
            match op {
                DiffOp::Delete { .. } | DiffOp::Equal { .. } | DiffOp::Insert { .. } => {
                    for change in self.0.iter_changes(op) {
                        fmt_line(f, change.new_index(), change)?;
                    }
                }
                DiffOp::Replace {
                    new_index: start,
                    new_len: len,
                    ..
                } => {
                    let mut iter = self.0.iter_changes(op);
                    for (line, change) in (*start..).zip(iter.by_ref().take(*len)) {
                        fmt_line(f, Some(line), change)?;
                    }

                    for change in iter {
                        fmt_line(f, None, change)?;
                    }
                }
            }
        }
        Ok(())
    }
}

--------------------------------------------------------------------------------
/src/error.rs:
--------------------------------------------------------------------------------
use std::{fmt, path::PathBuf};

use colored::Colorize;

pub type TestResult<T> = Result<T, ()>;

// Inner test errors shouldn't be visible to the end-user,
// they'll all be reported internally after running the tests
pub(crate) enum InnerTestError {
    TestUpdated { path: PathBuf, errors: Vec<String> },
    TestFailed { path: PathBuf, errors: Vec<String> },
    IoError(PathBuf, std::io::Error),
    CommandError(PathBuf, std::process::Command, std::io::Error),
    ErrorParsingExitStatus(PathBuf, /*status*/ String, std::num::ParseIntError),
    ErrorParsingArgs(PathBuf, /*args*/ String),
}

impl fmt::Display for InnerTestError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let s = |path: &PathBuf| path.to_string_lossy().bright_yellow();

        match self {
            InnerTestError::TestFailed { path, errors } => {
                for (i, error) in errors.iter().enumerate() {
                    write!(f, "{}: {}", s(path), error)?;
                    if i + 1 != errors.len() {
                        writeln!(f)?;
                    }
                }
                Ok(())
            }
            InnerTestError::TestUpdated { path, errors } => {
                for (i, error) in errors.iter().enumerate() {
                    write!(f, "{} - UPDATED: {}", s(path), error)?;
                    if i + 1 != errors.len() {
                        writeln!(f)?;
                    }
                }
                Ok(())
            }
            InnerTestError::IoError(path, error) => {
                writeln!(f, "{}: {}", s(path), error)
            }
            InnerTestError::CommandError(path, command, error) => {
                writeln!(f, "{}: Error running `{:?}`: {}", s(path), command, error)
            }
            InnerTestError::ErrorParsingExitStatus(path, status, error) => {
                writeln!(f, "{}: Error parsing exit status '{}': {}", s(path), status, error)
            }
            InnerTestError::ErrorParsingArgs(path, args) => {
                writeln!(f, "{}: Error parsing test args: {}", s(path), args)
            }
        }
    }
}

--------------------------------------------------------------------------------
/src/lib.rs:
--------------------------------------------------------------------------------
//! A testing library utilizing golden tests.
//!
//! ### Why golden tests?
//!
//! Golden tests allow you to specify the output of
//! some command within a file and automatically ensure
//! that that output doesn't change. If it does, goldentests
//! will show an error diff of the expected and actual
//! output. This way, whenever the output of something changes
//! a human can see the change and decide if it should be kept
//! or is a bug and should be reverted.
//!
//! ### What are golden tests useful for?
//!
//! Golden tests are especially useful for applications that
//! take a file as input and produce output of some kind. For
//! example: compilers and config-parsers (well, parsers in general)
//! are two such applications that can benefit from automated golden
//! tests. In the case of a config parser, you would be able to
//! provide many config examples as tests and ensure that your
//! parser was able to read the files with the expected stdout/stderr
//! output and exit code.
//!
//! ### How do I get started?
//!
//! Include a test in your program that looks something like this:
//!
//! ```rust
//! use goldentests::{ TestConfig, TestResult };
//!
//! #[test]
//! fn run_goldentests() -> TestResult<()> {
//!     // Replace "// " with your language's/parser's comment syntax.
//!     // This tells golden tests to embed its keywords in lines beginning with "// "
//!     let config = TestConfig::new("target/debug/my-binary", "directory/with/tests", "// ");
//!     config.run_tests()
//! }
//! ```
//!
//! Now you can start adding tests to `directory/with/tests` and each test should
//! be automatically found and run by goldentests whenever you run `cargo test`.
//! Here's a quick example of a test file that uses most of goldentests' features:
//!
//! ```python
//! import sys
//!
//! print("hello!\nfriend!")
//! print("error!", file=sys.stderr)
//! sys.exit(3)
//!
//! # Assuming 'python' is the command passed to TestConfig::new:
//! # args: -B
//! # expected exit status: 3
//! # expected stdout:
//! # hello!
//! # friend!
//!
//! # expected stderr: error!
//! ```
pub mod config;
mod diff_printer;
pub mod error;
mod runner;

pub use config::TestConfig;
pub use error::TestResult;

--------------------------------------------------------------------------------
/src/main.rs:
--------------------------------------------------------------------------------
mod config;
mod diff_printer;
mod error;
mod runner;

use crate::config::TestConfig;
use clap::Parser;
use std::path::PathBuf;

#[derive(Parser, Debug)]
#[clap(author, version, about, long_about = None)]
struct Args {
    #[clap(help = "The program to run for each test file")]
    binary_path: PathBuf,

    #[clap(help = "The directory to search for test files recursively within, or a single file to test")]
    test_path: PathBuf,

    #[clap(
        help = "Prefix string for test commands. This is usually the same as the comment syntax in the language you are testing. For example, in C this would be '// '"
    )]
    test_prefix: String,

    #[clap(
        long,
        default_value = "args:",
        help = "Prefix string for the command line arguments to be passed to the command, before the program file path."
    )]
    args_prefix: String,

    #[clap(
        long,
        default_value = "args after:",
        help = "Prefix string for the command line arguments to be passed to the command, after the program file path."
    )]
    args_after_prefix: String,

    #[clap(
        long,
        default_value = "expected stdout:",
        help = "Prefix string for the expected stdout output of the program"
    )]
    stdout_prefix: String,

    #[clap(
        long,
        default_value = "expected stderr:",
        help = "Prefix string for the expected stderr output of the program"
    )]
    stderr_prefix: String,

    #[clap(
        long,
        default_value = "expected exit status:",
        help = "Prefix string for the expected exit status of the program"
    )]
    exit_status_prefix: String,

    #[clap(
        long,
        help = "Update the expected output of each test file to match the actual output"
    )]
    overwrite: bool,

    // Defaulting to "" makes these flags optional
    #[clap(
        long,
        default_value = "",
        help = "Arguments to add before the file name when running every test file"
    )]
    base_args: String,

    #[clap(
        long,
        default_value = "",
        help = "Arguments to add after the file name when running every test file"
    )]
    base_args_after: String,
}

fn main() {
    let args = Args::parse();

    // Build the config through the library constructor so that each keyword is
    // prefixed with the test line prefix (e.g. "// " + "args:" = "// args:"),
    // matching how test files are parsed in runner.rs.
    let mut config = TestConfig::with_custom_keywords(
        args.binary_path,
        args.test_path,
        &args.test_prefix,
        &args.args_prefix,
        &args.args_after_prefix,
        &args.stdout_prefix,
        &args.stderr_prefix,
        &args.exit_status_prefix,
        args.overwrite,
    );
    config.base_args = args.base_args;
    config.base_args_after = args.base_args_after;

    config.run_tests().unwrap_or_else(|_| std::process::exit(1));
}

--------------------------------------------------------------------------------
/src/runner.rs:
--------------------------------------------------------------------------------
use crate::{
    config::TestConfig,
    diff_printer::DiffPrinter,
    error::{InnerTestError, TestResult},
};

use colored::Colorize;
use similar::TextDiff;

#[cfg(feature = "parallel")]
use rayon::iter::IntoParallelIterator;
#[cfg(feature = "parallel")]
use rayon::iter::ParallelIterator;

#[cfg(feature = "progress-bar")]
use indicatif::ProgressBar;

use std::{
    fs::File,
    io::{Read, Write},
    path::{Path, PathBuf},
    process::{Command, Output},
};

type InnerTestResult<T> = Result<T, InnerTestError>;

struct Test {
    path: PathBuf,
    command_line_args: String,
    command_line_args_after: String,
    expected_stdout: String,
    expected_stderr: String,
    expected_exit_status: Option<i32>,
    rest: String,
}

#[derive(PartialEq)]
enum TestParseState {
    Neutral,
    ReadingExpectedStdout,
    ReadingExpectedStderr,
}

fn find_tests(test_path: &Path) -> (Vec<PathBuf>, Vec<InnerTestError>) {
    let mut tests = vec![];
    let mut errors = vec![];

    if test_path.is_dir() {
        let read_dir = match std::fs::read_dir(test_path) {
            Ok(dir) => dir,
            Err(err) => return (tests, vec![InnerTestError::IoError(test_path.to_owned(), err)]),
        };

        for entry in read_dir {
            let path = match entry {
                Ok(entry) => entry.path(),
                Err(err) => {
                    errors.push(InnerTestError::IoError(test_path.to_owned(), err));
                    continue;
                }
            };

            if path.is_dir() {
                let (mut more_tests, mut more_errors) = find_tests(&path);
                tests.append(&mut more_tests);
                errors.append(&mut more_errors);
            } else {
                tests.push(path);
            }
        }
    } else {
        tests.push(test_path.into());
    }

    (tests, errors)
}
fn strip_prefix<'a>(s: &'a str, prefix: &str) -> &'a str {
    s.strip_prefix(prefix).unwrap_or(s)
}

fn append_line(s: &mut String, line: &str) {
    *s += line;
    *s += "\n";
}

fn parse_test(test_path: &Path, config: &TestConfig) -> InnerTestResult<Test> {
    let mut command_line_args = String::new();
    let mut command_line_args_after = String::new();
    let mut expected_stdout = String::new();
    let mut expected_stderr = String::new();
    let mut expected_exit_status = None;
    let mut rest = String::new();

    let mut file = File::open(test_path).map_err(|err| InnerTestError::IoError(test_path.to_owned(), err))?;
    let mut contents = String::new();
    file.read_to_string(&mut contents)
        .map_err(|err| InnerTestError::IoError(test_path.to_owned(), err))?;

    let mut state = TestParseState::Neutral;
    for line in contents.lines() {
        if line.starts_with(&config.test_line_prefix) {
            // If we're currently reading stdout or stderr, append the line to the expected output
            if state == TestParseState::ReadingExpectedStdout {
                append_line(&mut expected_stdout, strip_prefix(line, &config.test_line_prefix))
            } else if state == TestParseState::ReadingExpectedStderr {
                append_line(&mut expected_stderr, strip_prefix(line, &config.test_line_prefix));

            // Otherwise, look to see if the line begins with a keyword and if so change state
            // (stdout/stderr) or parse an argument to the keyword (args/exit status).

            // args:
            } else if line.starts_with(&config.test_args_prefix) {
                command_line_args = strip_prefix(line, &config.test_args_prefix).to_string();

            // args after:
            } else if line.starts_with(&config.test_args_after_prefix) {
                command_line_args_after = strip_prefix(line, &config.test_args_after_prefix).to_string();

            // expected stdout:
            } else if line.starts_with(&config.test_stdout_prefix) {
                state = TestParseState::ReadingExpectedStdout;
                // Append the remainder of the line to the expected stdout.
                // Both expected_stdout and expected_stderr are trimmed so it
                // has no effect if the rest of this line is empty
                append_line(&mut expected_stdout, strip_prefix(line, &config.test_stdout_prefix));

            // expected stderr:
            } else if line.starts_with(&config.test_stderr_prefix) {
                state = TestParseState::ReadingExpectedStderr;
                append_line(&mut expected_stderr, strip_prefix(line, &config.test_stderr_prefix));

            // expected exit status:
            } else if line.starts_with(&config.test_exit_status_prefix) {
                let status = strip_prefix(line, &config.test_exit_status_prefix).trim();
                expected_exit_status = Some(status.parse().map_err(|err| {
                    InnerTestError::ErrorParsingExitStatus(test_path.to_owned(), status.to_owned(), err)
                })?);
            } else {
                append_line(&mut rest, line);
            }
        } else {
            // Both expected_stdout and expected_stderr need a blank line at the end,
            // the order here implicitly skips that newline.
            if state == TestParseState::Neutral {
                append_line(&mut rest, line);
            }
            state = TestParseState::Neutral;
        }
    }

    // Remove \r from strings for windows compatibility. This means we
    // also can't test for any string containing "\r" unless this check
    // is improved to be more clever (e.g. only removing at the end of a line).
    let expected_stdout = expected_stdout.replace("\r", "");
    let expected_stderr = expected_stderr.replace("\r", "");

    Ok(Test {
        path: test_path.to_owned(),
        command_line_args,
        command_line_args_after,
        expected_stdout,
        expected_stderr,
        expected_exit_status,
        rest,
    })
}

fn write_expected_output_for_stream(
    file: &mut File,
    prefix: &str,
    marker: &str,
    expected: &[u8],
) -> std::io::Result<()> {
    // Doesn't handle \r correctly!
    // Strip leading and trailing newlines from the output
    let expected_output = String::from_utf8_lossy(expected).replace("\r", "");
    let lines: Vec<&str> = expected_output.trim().split('\n').collect();
    match lines.len() {
        // Don't write if there's nothing to write
        0 => Ok(()),
        1 if lines[0].is_empty() => Ok(()),
        // If the line is short and nice, write that line
        1 if lines[0].len() < 80 => {
            write!(file, "{} ", marker)?;
            file.write_all(expected)?;
            writeln!(file)
        }
        // Otherwise we write it more longform
        _ => {
            writeln!(file, "{}", marker)?;
            for line in lines {
                file.write_all(prefix.as_bytes())?;
                file.write_all(line.as_bytes())?;
                writeln!(file)?;
            }
            writeln!(file)
        }
    }
}

fn overwrite_test(test_path: &PathBuf, config: &TestConfig, output: &Output, test: &Test) -> std::io::Result<()> {
    // Maybe copy the file so we don't remove it if we fail here?
    let mut file = File::create(test_path)?;

    file.write_all(test.rest.trim_end().as_bytes())?;
    writeln!(file)?;
    writeln!(file)?;

    if !test.command_line_args.is_empty() {
        writeln!(file, "{} {}", config.test_args_prefix, test.command_line_args.trim())?;
    }

    if !test.command_line_args_after.is_empty() {
        writeln!(
            file,
            "{} {}",
            config.test_args_after_prefix,
            test.command_line_args_after.trim()
        )?;
    }

    if Some(0) != output.status.code() {
        writeln!(
            file,
            "{} {}",
            config.test_exit_status_prefix,
            output.status.code().unwrap_or(0)
        )?;
    }

    write_expected_output_for_stream(
        &mut file,
        &config.test_line_prefix,
        &config.test_stdout_prefix,
        &output.stdout,
    )?;
    write_expected_output_for_stream(
        &mut file,
        &config.test_line_prefix,
        &config.test_stderr_prefix,
        &output.stderr,
    )
}

/// Diff the given "stream" against the expected contents of the stream.
/// Any differences found are appended to `errors`.
fn check_for_differences_in_stream(name: &str, stream: &[u8], expected: &str, errors: &mut Vec<String>) {
    let output_string = String::from_utf8_lossy(stream).replace("\r", "");
    let output = output_string.trim();
    let expected = expected.trim();

    let differences = TextDiff::from_lines(expected, output);
    if differences.ratio() != 1.0 {
        errors.push(format!(
            "Actual {} differs from expected {}:\n{}",
            name,
            name,
            DiffPrinter(differences)
        ));
    }
}

fn check_exit_status(output: &Output, expected_status: Option<i32>, errors: &mut Vec<String>) {
    if let Some(expected_status) = expected_status {
        if let Some(actual_status) = output.status.code() {
            if expected_status != actual_status {
                errors.push(format!(
                    "Expected an exit status of {} but process returned {}\n",
                    expected_status, actual_status,
                ));
            }
        } else {
            errors.push(format!(
                "Expected an exit status of {} but process was terminated by signal instead\n",
                expected_status
            ));
        }
    }
}

fn check_for_differences(path: &Path, output: &Output, test: &Test) -> InnerTestResult<()> {
    let mut errors = vec![];
    check_exit_status(output, test.expected_exit_status, &mut errors);
    check_for_differences_in_stream("stdout", &output.stdout, &test.expected_stdout, &mut errors);
    check_for_differences_in_stream("stderr", &output.stderr, &test.expected_stderr, &mut errors);

    if errors.is_empty() {
        Ok(())
    } else {
        let path = path.to_owned();
        Err(InnerTestError::TestFailed { path, errors })
    }
}

#[cfg(feature = "parallel")]
fn into_iter<T: IntoParallelIterator>(value: T) -> T::Iter {
    value.into_par_iter()
}

#[cfg(not(feature = "parallel"))]
fn into_iter<T: IntoIterator>(value: T) -> T::IntoIter {
    value.into_iter()
}

impl TestConfig {
    fn test_all(&self, test_sources: Vec<PathBuf>) -> Vec<InnerTestResult<()>> {
        #[cfg(feature = "progress-bar")]
        let progress = ProgressBar::new(test_sources.len() as u64);

        let results = into_iter(test_sources)
            .map(|file| {
                #[cfg(feature = "progress-bar")]
                progress.inc(1);
                let test = parse_test(&file, self)?;

                let mut args = Self::split_args(&self.base_args, &file)?;
                args.extend(Self::split_args(&test.command_line_args, &file)?);

                args.push(test.path.to_string_lossy().to_string());

                args.extend(Self::split_args(&self.base_args_after, &file)?);
                args.extend(Self::split_args(&test.command_line_args_after, &file)?);

                let mut command = Command::new(&self.binary_path);
                command.args(args);
                let output =
                    command.output().map_err(|err| InnerTestError::CommandError(file.clone(), command, err))?;

                let differences = check_for_differences(&test.path, &output, &test);
                if self.overwrite_tests {
                    if let Err(InnerTestError::TestFailed { path, errors }) = differences {
                        overwrite_test(&file, self, &output, &test)
                            .map_err(|err| InnerTestError::IoError(file.to_owned(), err))?;

                        return Err(InnerTestError::TestUpdated { path, errors });
                    }
                }
                differences
            })
            .collect();

        #[cfg(feature = "progress-bar")]
        progress.finish_and_clear();
        results
    }

    /// Splits a string into separate command-line args.
    /// Usually this means separating by spaces.
    fn split_args(s: &str, file: &Path) -> Result<Vec<String>, InnerTestError> {
        shlex::split(s).ok_or_else(|| InnerTestError::ErrorParsingArgs(file.to_path_buf(), s.to_owned()))
    }

    /// Recurse through all the files in self.path, parse them all,
    /// and run the target program with the arguments specified in the file.
    pub fn run_tests(&self) -> TestResult<()> {
        let (tests, path_errors) = find_tests(&self.test_path);
        let outputs = self.test_all(tests);

        for error in path_errors {
            eprintln!("{}", error);
        }

        let total_tests = outputs.len();
        let mut failing_tests = 0;
        let mut can_be_fixed_with_overwrite_tests = 0;
        let mut updated_tests = 0;
        for result in &outputs {
            match result {
                Ok(_) => {}
                Err(InnerTestError::TestUpdated { .. }) => {
                    updated_tests += 1;
                }

                Err(InnerTestError::TestFailed { .. }) => {
                    can_be_fixed_with_overwrite_tests += 1;
                    failing_tests += 1;
                }

                Err(
                    InnerTestError::IoError(_, _)
                    | InnerTestError::CommandError(_, _, _)
                    | InnerTestError::ErrorParsingExitStatus(_, _, _)
                    | InnerTestError::ErrorParsingArgs(_, _),
                ) => {
                    failing_tests += 1;
                }
            }

            if let Err(err) = result {
                eprintln!("{}", err)
            }
        }

        if !self.overwrite_tests {
            println!(
                "ran {} {} tests with {} and {}\n",
                total_tests,
                "golden".bright_yellow(),
                format!("{} passing", total_tests - failing_tests).green(),
                format!("{} failing", failing_tests).red(),
            );
        } else {
            println!(
                "ran {} {} tests with {}, {} and {}\n",
                total_tests,
                "golden".bright_yellow(),
                format!("{} passing", total_tests - failing_tests).green(),
                format!("{} failing", failing_tests).red(),
                format!("{} updated", updated_tests).cyan(),
            );
        }

        if can_be_fixed_with_overwrite_tests > 0 {
            println!("Looks like you have failing tests. Review the output of each and fix any unexpected differences. When finished, you can use the --overwrite flag to automatically write the new output to the {} failing test file(s)", can_be_fixed_with_overwrite_tests);
        }

        if failing_tests != 0 {
            Err(())
        } else {
            Ok(())
        }
    }
}

--------------------------------------------------------------------------------
/tests/tests.rs:
--------------------------------------------------------------------------------
use goldentests::{TestConfig, TestResult};

#[test]
fn run_goldentests_example() -> TestResult<()> {
    let config = TestConfig::new("python", "examples", "# ");
    config.run_tests()
}

--------------------------------------------------------------------------------