├── .gitignore ├── LICENSE ├── Modules ├── I-Performance-testing-principles │ ├── 01-Introduction-to-Performance-Testing.md │ ├── 02-Frontend-vs-backend-performance-testing.md │ ├── 03-Load-Testing.md │ └── 04-High-level-overview-of-the-load-testing-process.md ├── II-k6-Foundations │ ├── 01-Getting-started-with-k6-OSS.md │ ├── 02-The-k6-CLI.md │ ├── 03-Understanding-k6-results.md │ ├── 04-Adding-checks-to-your-script.md │ ├── 05-Adding-think-time-using-sleep.md │ ├── 06-k6-Load-Test-Options.md │ ├── 07-Setting-test-criteria-with-thresholds.md │ ├── 08-k6-results-output-options.md │ └── 09-Recording-a-k6-script.md ├── III-k6-Intermediate │ ├── 01-How-to-debug-k6-load-testing-scripts.md │ ├── 02-Dynamic-correlation-in-k6.md │ ├── 03-Workload-modeling.md │ ├── 04-Adding-test-data.md │ ├── 05-Parallel-requests-in-k6.md │ ├── 06-Organizing-code-in-k6-by-transaction_groups-and-tags.md │ ├── 07-Setup-and-Teardown-functions.md │ ├── 08-Setting-load-profiles-with-executors.md │ ├── 08-Setting-load-profiles-with-executors │ │ ├── Constant-Arrival-Rate-Exercises.md │ │ ├── Constant-VUs-Exercises.md │ │ ├── Externally-Controlled-Exercises.md │ │ ├── Per-VU-Iterations-Exercises.md │ │ ├── Ramping-Arrival-Rate-Exercises.md │ │ ├── Ramping-VUs-Exercises.md │ │ └── Shared-Iterations-Exercises.md │ ├── 09-Workload-modeling-with-scenarios.md │ ├── 10-Using-execution-context-variables.md │ └── 11-Creating-and-using-custom-metrics.md └── XX-Future-Ideas │ ├── Additional-protocols.md │ ├── Advanced-curriculum.md │ ├── Analyzing-results-on-k6-Cloud.md │ ├── Azure-DevOps.md │ ├── Best-practices-for-designing-realistic-k6-scripts.md │ ├── Caching-options.md │ ├── Choosing-a-load-profile-on-k6-Cloud.md │ ├── Choosing-a-load-testing-tool.md │ ├── Circle-CI.md │ ├── Clarifying-testing-criteria.md │ ├── Continuous-load-testing-with-k6-Cloud.md │ ├── Creating-a-script-using-the-Test-Builder.md │ ├── GitHub-Actions.md │ ├── GitLab.md │ ├── How-to-do-chaos-testing-with-k6.md │ ├── 
How-to-make-a-k6-extension.md │ ├── How-to-use-the-k6-operator-for-Kubernetes.md │ ├── How-to-use-xk6-browser.md │ ├── Integrating-k6-with-Grafana-and-Prometheus.md │ ├── Load-tests-as-code-and-shift-left-testing.md │ ├── Modular-scripting.md │ ├── Observability-and-performance-testing.md │ ├── Overview-of-k6-Cloud.md │ ├── Parameters-of-a-load-test.md │ ├── Performance-automation.md │ ├── Performance-test-cases.md │ ├── Performance-testing-methodologies.md │ ├── The-automation-pyramid.md │ ├── The-k6-Cloud-interface.md │ ├── Using-a-proxy-with-k6.md │ ├── Using-k6-OSS-with-k6-Cloud.md │ ├── What-is-continuous-load-testing-and-why-should-you-do-it.md │ ├── k6-OSS-vs-k6-Cloud.md │ ├── k6-Use-Cases.md │ ├── k6-and-Netdata.md │ └── k6-and-New-Relic.md ├── README.md ├── Suggested Formats.md ├── images ├── 52qQi-marketing-strategies-app-and-mobile-page-load-time-statistics-downlo.jpg ├── LoadPartOfPerf.png ├── MvsM.jpg ├── Scenarios.svg ├── WvsA.png ├── backend-component.png ├── bert-relaxing.png ├── browser-extension-icon.png ├── continuous-testing-snowball.png ├── definitions-performance.png ├── definitions-test.png ├── frontend-backend.png ├── frontend-limitations.png ├── frontend-performance.png ├── grafana-cloud-k6.png ├── installation-page.png ├── k6-browser-recorder-01.png ├── k6-browser-recorder-02.png ├── k6-browser-recorder-3.png ├── k6-browser-recorder-4.png ├── k6-browser-recorder-5.png ├── k6-cloud-script-editor-from-recording.png ├── k6-cloud-script-editor.png ├── k6-end-of-summary.png ├── k6-order-of-preference-settings.png ├── k6-output-options.png ├── load_profile-constant.png.png ├── load_profile-no_ramp-up_or_ramp-down 1.png ├── load_profile-no_ramp-up_or_ramp-down.png ├── load_profile-no_ramp-up_or_ramp-down.png.orig ├── load_profile-ramp-up.png.png ├── load_profile-steady_state.png ├── office-hours-executors-k6.png ├── parallel-requests.png ├── scenarios-shakeout.png ├── test-scenario-average.png ├── test-scenario-soak.png ├── 
test-scenario-spike-test.png ├── test-scenario-stress.png ├── test-scenarios-breakpoint.png ├── week-of-testing-day4-youtube.png └── week-of-testing-youtube.png ├── index.html ├── package-lock.json ├── package.json ├── reveal.js ├── slides ├── 01-introduction-to-performance-testing │ └── slides.md ├── 02-frontend-vs-backend-performance-testing │ └── slides.md ├── 03-load-testing │ └── slides.md ├── 04-getting-started-with-k6-oss │ └── slides.md ├── 05-the-k6-cli │ └── slides.md ├── 06-understanding-k6-results │ └── slides.md ├── 07-adding-checks-to-your-script │ └── slides.md ├── 08-adding-think-time │ └── slides.md ├── 09-load-test-options │ └── slides.md ├── 10-setting-test-criteria-with-thresholds │ └── slides.md ├── 11-k6-results-output-options │ └── slides.md ├── end │ └── slides.md └── intro │ └── slides.md └── vite.config.js /.gitignore: -------------------------------------------------------------------------------- 1 | .idea 2 | .obsidian 3 | .trash 4 | templates 5 | 6 | # Logs 7 | logs 8 | *.log 9 | npm-debug.log* 10 | yarn-debug.log* 11 | yarn-error.log* 12 | pnpm-debug.log* 13 | lerna-debug.log* 14 | 15 | node_modules 16 | dist 17 | dist-ssr 18 | *.local 19 | 20 | # Editor directories and files 21 | .vscode/* 22 | !.vscode/extensions.json 23 | .idea 24 | .DS_Store 25 | *.suo 26 | *.ntvs* 27 | *.njsproj 28 | *.sln 29 | *.sw? 30 | -------------------------------------------------------------------------------- /Modules/I-Performance-testing-principles/01-Introduction-to-Performance-Testing.md: -------------------------------------------------------------------------------- 1 | # What is performance testing? 2 | 3 | The primary concern of performance testing is _how well_ a system works. Unlike functional testing, which tests _whether_ a system works, performance testing seeks to measure qualitative aspects of a user's experience of a system, such as its responsiveness and reliability. 4 | 5 | ## Why should we do performance testing? 
6 | 7 | With performance testing, your team can: 8 | 9 | - **Improve user experience.** Identify potential bottlenecks and issues early in the development process. Performance testing provides a complete picture of what the experience of a user accessing your application is like, beyond just the application functionality. 10 | - **Prepare for unexpected demand.** Test beyond expected load to find the breaking points of the application and formulate better procedures for responding to and capitalizing on unprecedented success. 11 | - **Increase confidence in the application.** Lower the overall risk of failure with systematic performance testing. This reduced risk also builds team confidence. Teams can work better knowing their application can withstand unexpected conditions in production. 12 | - **Assess and optimize infrastructure.** Reduce unnecessary infrastructure costs without compromising performance. Simulate scenarios to observe horizontal and vertical scaling, and run experiments to verify the resources that the system under test actually requires. 13 | 14 | If performance testing is so valuable, why don't more teams do it? 15 | 16 | ## Common excuses for not performance testing 17 | 18 | These concerns are often rooted in misconceptions about the necessary cost and complexity of testing performance. 19 | 20 | ### Our application is too small 21 | 22 | The idea that only larger corporations or more complex applications require performance testing stems mostly from the misconception that performance testing needs to simulate hundreds or even thousands of users. In fact, measuring how a system performs with just a handful of users can bring significant benefits. Even simulating a single user could highlight bottlenecks in the application that would not have otherwise been spotted. Costly performance inefficiencies can also exist in small systems. 
23 | 24 | ### It's expensive or time-consuming 25 | 26 | Performance testing *can* be expensive and time-consuming, but teams can pick and choose the type of activities that fall within their budgets for cost and time. The cost of _not_ performance testing is often far greater than the initial investment in some performance-testing practices. 27 | 28 | ### It requires extensive technical knowledge 29 | 30 | Different types of performance testing require different degrees of technical knowledge. On its face, however, performance testing is no more or less complex than other forms of testing. Teams can choose from a spectrum of performance-testing activities according to their appetite for complexity. Accessing a web page while looking at timings from the Network panel of DevTools within a browser is a type of performance testing that adds immediate value for little effort. 31 | 32 | ### We don't have a performance environment 33 | 34 | Performance testing doesn't always mean load testing, and even load testing doesn't always involve stressing an application to its breaking point. Some ways to assess performance don't require dedicated performance-testing environments, for example: 35 | - Unit tests for performance during development 36 | - API tests during system testing 37 | - Synthetic monitoring or low-load tests in production 38 | 39 | ### Observability trumps performance testing 40 | 41 | Having a mature observability platform encourages many to forego performance testing in favor of monitoring application performance in production. However, the efficacy of observability depends on having data to observe, and without the ability to generate data artificially, application performance is often observed only after bottlenecks are already live. 42 | 43 | With performance testing, teams can simulate rich user scenarios *before* potential performance issues are released to production, making observability useful in test environments as well.
Some types of tests, such as disaster recovery, chaos engineering, and reliability testing, also help teams prepare for inevitable failures. 44 | 45 | Both performance testing and observability are essential components to improving the quality of a system. 46 | 47 | ### The cloud is infinite; we can always scale up 48 | 49 | Now that many applications are hosted in the cloud, it can be tempting to think that horizontal or vertical scaling negates the need to test performance. However, this belief can lead to significantly increased costs if efficiency is not taken into account. 50 | 51 | Scalability is only *one* aspect of application performance that should be tested. Even efficiently scaled systems can be slow or prone to failure and outages that render the application unusable. In fact, scaling systems up can exacerbate some types of cascading failures, like retry storms and the thundering-herd problem. 52 | 53 | 54 | ## Test your knowledge 55 | 56 | ### Question 1 57 | 58 | Which of the following are *not* reasons why we should do performance testing? 59 | 60 | A: We want to make using our application a more pleasant experience for our customers. 61 | 62 | B: Who knows what could happen in production? We want to be ready for anything. 63 | 64 | C: We want to replace our observability stack with a more proactive testing suite. 65 | 66 | ### Question 2 67 | 68 | Someone tells you performance testing is too expensive. What's a good way to respond? 69 | 70 | A: More and more professionals are adding performance testing to their CV, so we may be able to hire someone for cheap. 71 | 72 | B: Recording a script and replaying it costs very little, so we can just do that. 73 | 74 | C: Performance defects are potentially more expensive to fix after they are deployed to production. 75 | 76 | D: A and B. 77 | 78 | ### Question 3 79 | 80 | Which of the following statements is true? 
81 | 82 | A: Performance testing is always about generating high user load or traffic against application servers. 83 | 84 | B: Performance testing is an activity that requires specialized expertise to carry out. 85 | 86 | C: Performance testing requires a production-like environment. 87 | 88 | D: Performance testing improves overall team morale by building confidence in what the application can withstand. 89 | 90 | ### Answers 91 | 92 | 1. C. Performance testing and observability complement each other. They do not replace each other, and instead work together to improve confidence in a system. 93 | 2. C. Performance bottlenecks and other issues often cost more to fix when identified in production. Proactive performance testing can more than offset its cost when it identifies and fixes these defects earlier on in development. 94 | 3. D. A is incorrect because performance testing can be done at lower load levels. B is incorrect because anyone can do performance testing, not just specialized performance testers. C is incorrect because performance testing can be done in development, staging, and test environments as well. 95 | 96 | -------------------------------------------------------------------------------- /Modules/I-Performance-testing-principles/02-Frontend-vs-backend-performance-testing.md: -------------------------------------------------------------------------------- 1 | For a holistic view of performance, testers need to test both the front and back ends of an application. Though there is some overlap in the tools and techniques, the approach and focus differ when testing different application parts. 2 | 3 | ## Frontend performance testing 4 | 5 | Frontend performance testing verifies application performance on the interface level, measuring round-trip metrics that consider how and when page elements appear on the screen. It is concerned with the end-user experience of an application, usually involving a browser.
6 | 7 | Frontend performance testing excels at identifying issues on a micro level but does not expose issues in the underlying architecture of a system. 8 | 9 | Because it primarily measures a single user's experience of the system, frontend performance testing tends to be easier to carry out on a small scale. 10 | Frontend performance testing has metrics that are distinct from backend performance testing. Frontend performance tests for things like: 11 | - Whether the pages of the application are optimized to render quickly on a user's screen 12 | - How long it takes a user to interact with the UI elements of the application. 13 | 14 | Some concerns when doing this type of performance testing are its dependency on fully integrated environments and the cost of scaling. You can test frontend performance only once the application code and infrastructure have been integrated with a user interface. Tools to automate frontend testing are also inherently more resource-intensive, so they can be costly to run at scale and are not suitable for high load tests. 15 | 16 | 17 | ### Why isn't frontend performance testing enough? 18 | 19 | Since frontend performance testing already measures end-user experience, why do we even need backend performance testing? 20 | 21 | Frontend testing tools are executed on the client side and are limited in scope. They do not provide enough information about backend components for fine-tuning beyond the user interface. 22 | 23 | This limitation can lead to false confidence in overall application performance when the amount of traffic against an application increases. While the frontend component of response time remains more or less constant, the backend component of response time increases exponentially with the number of concurrent users: 24 | 25 | ![A chart aggregating front and backend response times. 
As concurrency increases, backend response time becomes much longer.](../../images/frontend-backend.png) 26 | 27 | Testing only frontend performance ignores a large part of the application, one more susceptible to increased failures and performance bottlenecks at higher levels of load. 28 | 29 | 30 | ## Backend performance testing 31 | 32 | Backend performance testing targets the underlying application servers to verify the scalability, elasticity, availability, reliability, resiliency, and latency of a system as a whole. 33 | 34 | - *Scalability*: Can the system adjust to steadily increasing levels of demand? 35 | - *Elasticity*: Can the system conserve resources during periods of lower demand? 36 | - *Availability*: What is the uptime of each of the components in the system? 37 | - *Reliability*: Does the system respond consistently in different environmental conditions? 38 | - *Resiliency*: Can the system gracefully withstand unexpected events? 39 | - *Latency*: How quickly does the system process and respond to requests? 40 | 41 | Backend testing is broader in scope than frontend performance testing. API testing can be used to target specific components or integrated components, meaning that application teams have more flexibility and higher chances of finding performance issues earlier. Backend testing is less resource-intensive than frontend performance testing and is thus more suitable for generating high load. 42 | 43 | Two concerns when doing this type of testing are its inability to test the "first mile" of user experience and its breadth of scope. Backend testing involves messaging at the protocol level rather than interacting with page elements. It verifies the foundation of an application rather than the highest layer of it that a user ultimately sees. Depending on the complexity of the application architecture, backend testing may also be more expansive in scope.
44 | 45 | ## Test your knowledge 46 | 47 | ### Question 1 48 | 49 | Which type of testing does k6 excel in? 50 | 51 | A: Backend testing 52 | 53 | B: Manual testing 54 | 55 | C: Accessibility testing 56 | 57 | ### Question 2 58 | 59 | Which of the following is an advantage of backend performance testing? 60 | 61 | A: It provides metrics like Time To Interactive (TTI) that measure when users can first interact with the application. 62 | 63 | B: It simulates users by driving real browsers to test the application. 64 | 65 | C: It can target application components before they're integrated. 66 | 67 | D: A and B. 68 | 69 | ### Question 3 70 | 71 | Which of the following statements is true? 72 | 73 | A: Frontend performance testing results are not affected by bottlenecks in the backend of an application. 74 | 75 | B: Frontend performance testing is always carried out with a single automated user. 76 | 77 | C: Frontend performance testing can verify user experience in ways that backend performance testing cannot. 78 | 79 | ### Answers 80 | 81 | 1. A. k6 cannot do either manual or accessibility testing at this point. 82 | 2. C. A and B are incorrect because they are referring to frontend performance testing. Backend performance testing does not provide frontend metrics like TTI and it does not interact with applications through a browser. 83 | 3. C. A is incorrect because backend performance bottlenecks can also affect frontend performance. B is incorrect because frontend testing scripts can be executed as part of a load test. C is correct because frontend performance testing measures a real user's experience from the browser, which is not something backend testing can measure. 
84 | -------------------------------------------------------------------------------- /Modules/I-Performance-testing-principles/04-High-level-overview-of-the-load-testing-process.md: -------------------------------------------------------------------------------- 1 | # High-level overview of the load testing process 2 | 3 | In the last section, you learned what load testing is, how it differs from performance testing, and what its test scenario types are. In this section, you'll learn about the different phases of the load testing process: 4 | 5 | - Planning for load testing 6 | - Scripting a load test 7 | - Executing load tests 8 | - Analysis of load testing results 9 | 10 | For clarity, these activities are shown as distinct phases here; however, in practice, they often overlap. Like the application release process, the testing process should be continuous and Agile, with each small increment building upon previous work, growing more and more robust over time. 11 | 12 | ![Continuous Testing Snowball](../../images/continuous-testing-snowball.png) 13 | 14 | 15 | Just like application code, a test starts slowly. At the mountain's peak, the test is in its simplest form. You can think of an early-stage test as primarily **risk-based**: testing at this level is a reaction to probable failures. These tests are written specifically to prevent failure in an application's most critical components, and there might not be enough time for anything else. 16 | 17 | As testing matures within the project, it may start to include **regression**. Now that all the high-risk areas are covered, teams have time to make tests a bit more backward-compatible and preventative, and they write tests to see if new code breaks past functionalities. 18 | 19 | The Continuous Testing Snowball gathers speed as the team grows along with the test suite, focusing on **automating** more and more parts of the testing process with every iteration or sprint. A repeatable framework is established. 
In this stage, the snowball is at its fastest speed. 20 | 21 | At the bottom of the slope, continuous testing has matured to such an extent that the team can expand its focus from addressing specific defects to increasing overall **reliability and confidence** in the application's ability to withstand unexpected events. The framework is made even more robust with more exploratory types of testing like Chaos Engineering. 22 | 23 | The focus of Continuous Testing is on organically and iteratively evolving the testing suite in parallel with the application, starting small and making incremental changes until the suite is robust. 24 | 25 | 26 | ## Planning for load testing 27 | 28 | Planning for a load test is the first part of the process. It involves identifying the reasons _why_ to test, finding _what_ to test, and outlining _how_, generally, to test it. 29 | 30 | In this phase, we formulate requirements for load testing: 31 | - Clarify the scope of testing 32 | - Define SLOs 33 | - Identify workload models 34 | - Set up an environment for testing, including monitoring 35 | - Agree on the frequency and schedule of tests 36 | - Prepare test data, if applicable 37 | 38 | Planning for any testing is a team activity, and load testing is no exception. This phase is an opportunity for all stakeholders to get together and understand what testing should look like and what would define a successful round of testing. 39 | 40 | ## Scripting a load test 41 | 42 | Scripting a load test involves translating the test plan into executable test scripts.
In this phase, you might do some of the following: 43 | - Create test scenarios that adequately cover the requirements 44 | - Write test scripts using load testing tools 45 | - Make scripts realistic 46 | - Run shakeout tests to verify that the script works as expected 47 | - Run tests against upstream environments, usually dev or staging 48 | - Share test scripts with the team 49 | - Set up a testing framework that can grow with the test suite 50 | 51 | While tests may be executed during scripting, these runs are usually for debugging or shakeout purposes rather than full load tests. The scripting phase may spill into the test-execution phase as changes are made to existing scripts or new scripts are written to address issues found during test execution. 52 | 53 | ## Executing load tests 54 | 55 | During test execution, the load testing scripts run against their intended targets, often test environments or production. In the test execution phase, you might do some of the following: 56 | - Set up cloud or on-premise infrastructure 57 | - Set up observability tools to monitor application server and load-generator health 58 | - Run shakeout tests to verify that test environments work as expected 59 | - Run baseline tests to understand current system behavior 60 | - Make changes to the code or environment and run tests to compare against the baseline test 61 | - Increase the scope of tests 62 | - Do distributed testing by ramping up or scaling out the load test 63 | 64 | Executing load tests involves more than just running tests, because the tests themselves may not be helpful without proper observability tools in place to monitor the health of the system and the health of the test.
65 | 66 | ## Analysis of load testing results 67 | 68 | During the analysis phase, you may do some of the following: 69 | - Collate data from application servers as well as from load generators 70 | - Aggregate data to get a big-picture understanding of what happened during the test 71 | - Use data visualization and analysis tools to determine how the system behaved under test conditions 72 | - Report findings to stakeholders 73 | - Remediate bottlenecks 74 | - Rerun tests to confirm issues or fixes 75 | 76 | Using the data collected during the test to understand how the system behaves during different test conditions is a big part of what makes load testing valuable. It's important to examine the data objectively and to communicate the results to the team in a comprehensible way. 77 | 78 | ## Continuous load testing 79 | 80 | Continuous load testing is a practice that spans all the testing phases. In continuous testing, you might: 81 | - Add load testing scripts into a version-controlled repository 82 | - Incorporate load tests into a CI/CD pipeline 83 | - Automate reporting 84 | - Set up notifications for failed tests 85 | - Set up a repeatable framework for running load tests, one tied with code changes or release cycles 86 | - Capture test results in a database to view historical trends 87 | 88 | Without ensuring that load testing is done continuously, load testing can become a one-off process. Incorporating it into existing CI/CD pipelines keeps performance front-of-mind for everyone involved. 89 | 90 | As the testing suite grows in maturity and scope, teams should naturally create more efficient frameworks for running tests, managing notifications, analyzing results, and reporting. In this way, testing can start very simply, usually around the most critical or high-risk functionalities of the application, then evolve and improve organically along with the application. 
91 | 92 | ## Test your knowledge 93 | 94 | ### Question 1 95 | 96 | Which of the following statements is true? 97 | 98 | A: Doing the activities is more important than having a clear distinction between phases of the load testing process. 99 | 100 | B: The phases of the load testing process must always be done sequentially, so as not to miss any vital activities. 101 | 102 | C: Planning for a load test is best done by test managers, who are in the best situation to understand what is required from load testing. 103 | 104 | ### Question 2 105 | 106 | When is the best time to start thinking about load testing and observability tools? 107 | 108 | A: Planning 109 | 110 | B: Scripting 111 | 112 | C: Execution 113 | 114 | ### Question 3 115 | 116 | Why is continuous load testing important? 117 | 118 | A: Automating load testing verifies system behavior over time. 119 | 120 | B: Incorporating load testing into CI/CD pipelines means it's not necessary to hire specialized load testers. 121 | 122 | C: Running load tests automatically stops developers from checking in code that is not performant. 123 | 124 | ### Answers 125 | 126 | 1. A. While it can be useful to think of distinct phases in testing to understand the process, in practice, testing should be tightly integrated with other activities in the software development lifecycle. 127 | 2. A. The testing and observability stack selected can have an impact on the type of testing you do and your testing results, so we recommend you consider your options as early as possible. 128 | 3. A. Incorporating load testing into continuous integration pipelines helps you see trends in performance over time. 
129 | 130 | -------------------------------------------------------------------------------- /Modules/II-k6-Foundations/01-Getting-started-with-k6-OSS.md: -------------------------------------------------------------------------------- 1 | # Getting Started with k6 OSS 2 | 3 | There are many ways to start scripting with k6, but we're starting with [k6 OSS](https://github.com/grafana/k6) for a few reasons: 4 | - It is a fully-fledged load testing tool on its own, and it doesn't require a subscription or any payment to use. 5 | - k6 Cloud, the SaaS platform, also uses k6 OSS, so the skills you learn in this section will apply even if you decide to use k6 Cloud later. 6 | - You can add advanced scenarios and features to your k6 OSS scripts. The other methods of script generation that we'll discuss later are limited in functionality. 7 | 8 | Let's get started! 9 | 10 | ## Installation 11 | 12 | First, install k6 by [following the instructions here](https://k6.io/docs/getting-started/installation/) for your operating system. 13 | 14 | Next, pick your favorite IDE or text editor. Many of us use and recommend [VS Code](https://code.visualstudio.com/), but you can also use [Sublime Text](https://www.sublimetext.com/), [Atom](https://atom.io/), or anything else you're already using that can create text files. 15 | 16 | ## Writing your first k6 script 17 | 18 | Time to write the script! 19 | 20 | k6 supports multiple protocols, but for now, let's stick to HTTP. Your first script will do a basic HTTP POST request against a test API that will echo back whatever you send to it. 21 | 22 | The fastest way to create a k6 test is to use the `k6 new [filename]` command introduced in k6 version 0.48.0. This will automatically create a file with the basic boilerplate you need to get you up and running quickly. 23 | 24 | But, as part of your k6 learning, we will also teach you how to create a test manually. 25 | 26 | Create a new file named `test.js`, and open it in your favorite IDE. 
This file is our test script. k6 scripts are always written in JavaScript, even though k6 itself is written in Go. We're going to create the script together, step by step. Copy and paste the code snippets as necessary, so that your script looks like the one here. 27 | 28 | Import the HTTP Client from the built-in module `k6/http`: 29 | 30 | ```js 31 | import http from 'k6/http'; 32 | ``` 33 | 34 | Now, create and export a default function: 35 | 36 | ```js 37 | export default function() { 38 | } 39 | ``` 40 | 41 | Any code in the `default` function is executed by each k6 virtual user when the test runs. 42 | 43 | Add the logic for making the actual HTTP call: 44 | 45 | ```js 46 | import http from 'k6/http'; 47 | 48 | export default function() { 49 | let url = 'https://httpbin.test.k6.io/post'; 50 | let response = http.post(url, 'Hello world!'); 51 | } 52 | ``` 53 | 54 | Here, you're instructing k6 to send an HTTP POST request to the API endpoint `https://httpbin.test.k6.io/post` with the body `Hello world!` 55 | 56 | You could actually run this script already, and k6 would make the HTTP POST request, but how would you know if it worked? Here's how to log the response to the console: 57 | 58 | ```js 59 | import http from 'k6/http'; 60 | 61 | export default function() { 62 | let url = 'https://httpbin.test.k6.io/post'; 63 | let response = http.post(url, 'Hello world!'); 64 | 65 | console.log(response.json().data); 66 | } 67 | ``` 68 | 69 | You'll learn more ways to verify the results of your tests later, but for now, go ahead and run your first test! 70 | 71 | ## Hello World: running your k6 script 72 | 73 | Save your script in your editor. Then, open up your terminal and go to the directory where you saved your k6 script. 
Now, run the test: 74 | 75 | ```plain 76 | k6 run test.js 77 | ``` 78 | 79 | You should get something like this: 80 | 81 | ```plain 82 | $ k6 run test.js 83 | 84 | /\ |‾‾| /‾‾/ /‾‾/ 85 | /\ / \ | |/ / / / 86 | / \/ \ | ( / ‾‾\ 87 | / \ | |\ \ | (‾) | 88 | / __________ \ |__| \__\ \_____/ .io 89 | 90 | execution: local 91 | script: test.js 92 | output: - 93 | 94 | scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop): 95 | * default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s) 96 | 97 | INFO[0001] Hello world! source=console 98 | 99 | running (00m00.7s), 0/1 VUs, 1 complete and 0 interrupted iterations 100 | default ✓ [======================================] 1 VUs 00m00.7s/10m0s 1/1 iters, 1 per VU 101 | 102 | data_received..................: 5.9 kB 9.0 kB/s 103 | data_sent......................: 564 B 860 B/s 104 | http_req_blocked...............: avg=524.18ms min=524.18ms med=524.18ms max=524.18ms p(90)=524.18ms p(95)=524.18ms 105 | http_req_connecting............: avg=123.28ms min=123.28ms med=123.28ms max=123.28ms p(90)=123.28ms p(95)=123.28ms 106 | http_req_duration..............: avg=130.19ms min=130.19ms med=130.19ms max=130.19ms p(90)=130.19ms p(95)=130.19ms 107 | { expected_response:true }...: avg=130.19ms min=130.19ms med=130.19ms max=130.19ms p(90)=130.19ms p(95)=130.19ms 108 | http_req_failed................: 0.00% ✓ 0 ✗ 1 109 | http_req_receiving.............: avg=165µs min=165µs med=165µs max=165µs p(90)=165µs p(95)=165µs 110 | http_req_sending...............: avg=80µs min=80µs med=80µs max=80µs p(90)=80µs p(95)=80µs 111 | http_req_tls_handshaking.......: avg=399.48ms min=399.48ms med=399.48ms max=399.48ms p(90)=399.48ms p(95)=399.48ms 112 | http_req_waiting...............: avg=129.94ms min=129.94ms med=129.94ms max=129.94ms p(90)=129.94ms p(95)=129.94ms 113 | http_reqs......................: 1 1.525116/s 114 | iteration_duration.............: avg=654.72ms min=654.72ms med=654.72ms max=654.72ms
p(90)=654.72ms p(95)=654.72ms 115 | iterations.....................: 1 1.525116/s 116 | 117 | ``` 118 | 119 | That's a lot of metrics! In the next section, we'll go over what each of these lines means. 120 | 121 | ## Other resources 122 | 123 | [![Week of Load testing day 3: Installing k6 and running load test](../../images/week-of-testing-youtube.png)](https://www.youtube.com/embed/y5tteMKZUqk) 124 | 125 | ## Test your knowledge 126 | 127 | ### Question 1 128 | 129 | What's the best way to access the JSON body of an HTTP response? 130 | 131 | A: `response.json()` 132 | 133 | B: `response.body()` 134 | 135 | C: `response.content` 136 | 137 | ### Question 2 138 | 139 | What is the name of the built-in module that contains the HTTP client? 140 | 141 | A: `http` 142 | 143 | B: `k6/http-client` 144 | 145 | C: `k6/http` 146 | 147 | ### Question 3 148 | 149 | Where in a test script should we place HTTP calls to have them executed by a virtual user? 150 | 151 | A: In the global scope 152 | 153 | B: In an exported default function 154 | 155 | C: In a function called `exec()` 156 | 157 | ### Answers 158 | 159 | 1. A. Using `response.json()` will not only get the response body but also parse the JSON. 160 | 2. C. `k6/http` is the module that needs to be imported in a k6 script if you want to use HTTP. The other two options do not exist. 161 | 3. B. Only code placed within an exported function (either the default one or one that is named in the `exec` option [of a scenario](https://k6.io/docs/using-k6/scenarios/#common-options)) will be executed by a virtual user. 162 | -------------------------------------------------------------------------------- /Modules/II-k6-Foundations/04-Adding-checks-to-your-script.md: -------------------------------------------------------------------------------- 1 | Checks are a type of testing criteria that can be used to verify the response returned by a request.
2 | 3 | Up until now, you've been using `console.log()` to print the response body to the terminal. This approach is useful for debugging, but it can quickly get out of hand if the response body is large or if you decide to run the test with more than one VU. Instead, in this section, you'll learn how to use checks. 4 | 5 | ## How to add checks to your script 6 | 7 | Back to the script! 8 | 9 | You already know the expected response of the target server to the request the script is sending: it should send back whatever the script sends to it. In this case, that's `Hello world!` 10 | 11 | So, remove the `console.log()` statement and add a check by copying this code snippet: 12 | 13 | ```js 14 | import http from 'k6/http'; 15 | import { check } from 'k6'; 16 | 17 | export default function() { 18 | let url = 'https://httpbin.test.k6.io/post'; 19 | let response = http.post(url, 'Hello world!'); 20 | check(response, { 21 | 'Application says hello': (r) => r.body.includes('Hello world!') 22 | }); 23 | } 24 | ``` 25 | 26 | Note that you need to import `check` from the k6 library: 27 | 28 | ```js 29 | import { check } from 'k6'; 30 | ``` 31 | 32 | And you need to put the actual check in the default function: 33 | 34 | ```js 35 | check(response, { 36 | 'Application says hello': (r) => r.body.includes('Hello world!') 37 | }); 39 | ``` 40 | 41 | The check you've just added looks for the string `Hello world!` in the body of the response of *every* request. 42 | 43 | ### Running the script 44 | 45 | What does it look like in the end-of-test summary? Save your script and run: 46 | 47 | ```plain 48 | k6 run test.js 49 | ``` 50 | 51 | and you should see output similar to this: 52 | 53 | ```plain 54 | /\ |‾‾| /‾‾/ /‾‾/ 55 | /\ / \ | |/ / / / 56 | / \/ \ | ( / ‾‾\ 57 | / \ | |\ \ | (‾) | 58 | / __________ \ |__| \__\ \_____/ .io 59 | 60 | execution: local 61 | script: test.js 62 | output: - 63 | 64 | scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl.
graceful stop): 65 | * default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s) 66 | 67 | 68 | running (00m00.7s), 0/1 VUs, 1 complete and 0 interrupted iterations 69 | default ✓ [======================================] 1 VUs 00m00.7s/10m0s 1/1 iters, 1 per VU 70 | 71 | ✓ Application says hello 72 | 73 | checks.........................: 100.00% ✓ 1 ✗ 0 74 | data_received..................: 5.9 kB 8.4 kB/s 75 | data_sent......................: 564 B 801 B/s 76 | http_req_blocked...............: avg=582.6ms min=582.6ms med=582.6ms max=582.6ms p(90)=582.6ms p(95)=582.6ms 77 | http_req_connecting............: avg=121.14ms min=121.14ms med=121.14ms max=121.14ms p(90)=121.14ms p(95)=121.14ms 78 | http_req_duration..............: avg=120.62ms min=120.62ms med=120.62ms max=120.62ms p(90)=120.62ms p(95)=120.62ms 79 | { expected_response:true }...: avg=120.62ms min=120.62ms med=120.62ms max=120.62ms p(90)=120.62ms p(95)=120.62ms 80 | http_req_failed................: 0.00% ✓ 0 ✗ 1 81 | http_req_receiving.............: avg=72µs min=72µs med=72µs max=72µs p(90)=72µs p(95)=72µs 82 | http_req_sending...............: avg=292µs min=292µs med=292µs max=292µs p(90)=292µs p(95)=292µs 83 | http_req_tls_handshaking.......: avg=405.16ms min=405.16ms med=405.16ms max=405.16ms p(90)=405.16ms p(95)=405.16ms 84 | http_req_waiting...............: avg=120.26ms min=120.26ms med=120.26ms max=120.26ms p(90)=120.26ms p(95)=120.26ms 85 | http_reqs......................: 1 1.419825/s 86 | iteration_duration.............: avg=703.46ms min=703.46ms med=703.46ms max=703.46ms p(90)=703.46ms p(95)=703.46ms 87 | iterations.....................: 1 1.419825/s 88 | ``` 89 | 90 | The new check is displayed in the lines: 91 | 92 | ```plain 93 | ✓ Application says hello 94 | 95 | checks.........................: 100.00% ✓ 1 ✗ 0 96 | ``` 97 | 98 | and the `✓` means that all the requests executed (just one in this test run) passed the check. This test run had a 100% pass rate for checks. 
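A single `check()` call can also evaluate several conditions at once; each condition gets its own line and pass rate in the summary. Here's a sketch of such a predicate object (the status-code condition assumes this endpoint responds with HTTP 200):

```js
// Each named condition is a function that receives the response object
// and returns true (pass) or false (fail).
const conditions = {
  'status is 200': (r) => r.status === 200,
  'Application says hello': (r) => r.body.includes('Hello world!'),
};

// In the k6 script, pass the object as the second argument:
//   check(response, conditions);
```

Each condition is evaluated independently, so a single response can pass some conditions while failing others.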
99 | 100 | ### Failed checks 101 | 102 | What does it look like when the check fails? 103 | 104 | Modify the script to search for text that shouldn't be found in the response, like so: 105 | 106 | ```js 107 | import http from 'k6/http'; 108 | import { check } from 'k6'; 109 | 110 | export default function() { 111 | let url = 'https://httpbin.test.k6.io/post'; 112 | let response = http.post(url, 'Hello world!'); 113 | check(response, { 114 | 'Application says hello': (r) => r.body.includes('Bonjour!') 115 | }); 116 | } 117 | ``` 118 | 119 | Run that, and you should get the following result: 120 | 121 | ```plain 122 | ✗ Application says hello 123 | ↳ 0% — ✓ 0 / ✗ 1 124 | 125 | checks.........................: 0.00% ✓ 0 ✗ 1 126 | ``` 127 | 128 | This time, the `✗ 1` indicates that one check failed. 129 | 130 | ### Failed checks are not errors 131 | 132 | You may have noticed that in the last example, `http_req_failed`, or the HTTP error rate, was not affected by the failing check. This is because checks do not stop a script from executing successfully, and they do not return a failed exit status. 133 | 134 | > :bulb: To make failing checks stop your test, you can [combine them with thresholds](https://k6.io/docs/using-k6/thresholds/#failing-a-load-test-using-checks). 135 | 136 | ## Other types of checks 137 | 138 | The [checks](https://k6.io/docs/using-k6/checks/) page contains other types of checks you can do, including the HTTP response code and the response body size. You can also include multiple checks for a single response. 139 | 140 | ## Next up 141 | 142 | You're almost ready to scale up your test to multiple users! Before you do so, the next section discusses how to make your test realistic with think time. 143 | 144 | ## Test your knowledge 145 | 146 | ### Question 1 147 | 148 | Which of the following can you use a check to verify? 
149 | 150 | A: Whether the 95th percentile response time of the request was greater than 1s 151 | 152 | B: The size of the response body returned 153 | 154 | C: The error rate of the test 155 | 156 | 157 | ### Question 2 158 | 159 | What part of the test do checks assess? 160 | 161 | A: The response time of a request 162 | 163 | B: The syntax of the request sent to the application 164 | 165 | C: The application server's response 166 | 167 | 168 | ### Question 3 169 | 170 | In the following snippet from the end-of-test summary, how many checks failed? 171 | 172 | ```plain 173 | ✗ Application says hello 174 | ↳ 51% — ✓ 1215 / ✗ 1144 175 | 176 | checks.........................: 51.50% ✓ 1215 ✗ 1144 177 | ``` 178 | 179 | A: 51.50% 180 | 181 | B: 1215 182 | 183 | C: 1144 184 | 185 | ### Answers 186 | 187 | 1. B. The 95th percentile of the response time is an aggregated metric: it relies on measurements from all requests up to that point. This is not something you can create a check for, although you can certainly include this as a [threshold](https://k6.io/docs/using-k6/thresholds/) in your script. The error rate of a test is similarly aggregated. B, the size of the response body, is the correct answer. 188 | 2. C. Checks parse the responses from the application server, not the request sent by k6. 189 | 3. C. The number of checks that failed is displayed in `✗ 1144`. This request's checks passed 51.50% of the time. 190 | -------------------------------------------------------------------------------- /Modules/II-k6-Foundations/05-Adding-think-time-using-sleep.md: -------------------------------------------------------------------------------- 1 | # Adding think time using sleep 2 | 3 | Before you ramp up your load tests, there's one more thing to add: think time. 4 | 5 | _Think time_ is the amount of time that a script pauses during test execution to simulate delays that real users have in the course of using an application. 6 | 7 | ### When should you use think time? 
8 | 9 | In general, using think time to accurately simulate end users' behavior makes a load testing script more realistic. If that realism would help you achieve your test objectives, think time is worth including. 10 | 11 | You should consider adding think time in the following situations: 12 | - Your test follows a user flow, like accessing different parts of the application in a certain order 13 | - You want to simulate actions that take some time to carry out, like reading text on a page or filling out a form 14 | - Your load generator, or the machine you're running k6 from, displays high (> 80%) CPU utilization during test execution. 15 | 16 | The main danger in removing or reducing think time is that it increases how quickly requests are sent, which can, in turn, increase CPU utilization. When CPU usage is too high, the load generator itself is struggling with *sending* the requests, which could lead to inaccurate results such as false negatives. Adding think time is one way to [reduce high CPU usage](https://k6.io/docs/cloud/analyzing-results/performance-insights/#high-load-generator-cpu-usage). 17 | 18 | ### When shouldn't you use think time? 19 | 20 | Using think time slows down how quickly requests are sent, reducing the maximum request rate per VU that you can achieve in your test. 21 | 22 | Think time is unnecessary in the following situations: 23 | - You want to do a [stress test](https://k6.io/docs/test-types/stress-testing/) to find out how many requests per second your application can handle 24 | - The API endpoint you're testing experiences a high number of requests per second in production that occur without delays 25 | - Your load generator can run your test script without crossing the 80% CPU utilization mark. 26 | 27 | The last point ensures that the absence of think time doesn't increase test throughput to the point of affecting the health of your load generator, as discussed in the previous section.
28 | 29 | As you can see, the question of whether or not to use think time depends on your testing goals. When in doubt, use think time. 30 | 31 | ## Sleep 32 | 33 | In k6, you can add think time using [`sleep()`](https://k6.io/docs/javascript-api/k6/sleep-t/). To use it, you'll need to import `sleep` and then call it at the point in the default function where you want test execution to pause: 34 | 35 | ```js 36 | import http from 'k6/http'; 37 | import { check, sleep } from 'k6'; 38 | 39 | export default function() { 40 | let url = 'https://httpbin.test.k6.io/post'; 41 | let response = http.post(url, 'Hello world!'); 42 | check(response, { 43 | 'Application says hello': (r) => r.body.includes('Hello world!') 44 | }); 45 | 46 | sleep(1); 47 | } 48 | ``` 49 | 50 | `sleep(1);` means that the script will pause for 1 second when it is executed. 51 | 52 | Including sleep does not affect the response time (`http_req_duration`); the response time is always reported with sleep removed. Sleep *is*, however, included in the [iteration duration](03-Understanding-k6-results.md#Iteration-duration). 53 | 54 | ### Dynamic think time 55 | 56 | The problem with hard-coding a delay into your script is that it introduces an artificial pattern to your test, which may make the load on your application more predictable than it would be in production. 57 | 58 | **Testing best practice:** Use dynamic think time. 59 | 60 | A dynamic think time is more realistic, and simulates real users more accurately, in turn improving the accuracy and reliability of your test results. 61 | 62 | #### Random sleep 63 | 64 | One way to implement dynamic think time is to use the JavaScript `Math.random()` function: 65 | 66 | ```js 67 | sleep(Math.random() * 5); 68 | ``` 69 | 70 | The line above instructs k6 to sleep for a random number of seconds between 0 and 5, inclusive of 0 but not of 5. The value selected is not guaranteed to be an integer.
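If you want a random delay that never drops below a minimum, you can shift and scale the `Math.random()` range yourself. A small sketch (the helper name `randomSleepSeconds` is made up for illustration and is not part of k6):

```js
// Returns a float in [min, max): shift by min, then scale by the range width.
function randomSleepSeconds(min, max) {
  return min + Math.random() * (max - min);
}

// Inside the default function, you could then write:
//   sleep(randomSleepSeconds(2, 5)); // pause between 2 and 5 seconds
```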
71 | 72 | #### Random sleep between 73 | 74 | If you'd prefer to define your think time in integers, try the `randomIntBetween` function from the k6 library of useful functions, called [jslib](https://jslib.k6.io/). 75 | 76 | First, import the relevant function: 77 | 78 | ```js 79 | import { randomIntBetween } from "https://jslib.k6.io/k6-utils/1.0.0/index.js"; 80 | ``` 81 | 82 | Then, add this within your default function: 83 | 84 | ```js 85 | sleep(randomIntBetween(1,5)); 86 | ``` 87 | 88 | The script will pause for a number of seconds between 1 and 5, inclusive of both 1 and 5. 89 | 90 | ## How much think time should you add? 91 | 92 | The real answer is: it depends. 93 | 94 | Some factors that affect the duration of the think time you add are the goals for your testing, what the production traffic looks like, and the computing resources that you have at your disposal. It's best to model test scripts as closely as possible to what occurs in production environments. 95 | 96 | In the absence of any data on production traffic, however, you can time how long it takes for *you* to go through a user flow and use that as a starting point. 97 | 98 | ## Test your knowledge 99 | 100 | ### Question 1 101 | 102 | You're testing a new "Open a Ticket" page where users are asked to type in their name, email address, and a description of their issue, and their responses are sent to the application team. Should you use think time? 103 | 104 | A: Yes, because it will take time for the application team to respond to the ticket. 105 | 106 | B: Yes, because users take time to type out their issue. 107 | 108 | C: No, because the load generator's CPU utilization is too high. 109 | 110 | ### Question 2 111 | 112 | In the following line, what does the number 3 represent? 
113 | 114 | `sleep(3)` 115 | 116 | A: A think time of 3 milliseconds 117 | 118 | B: The number of iterations that will get a think time 119 | 120 | C: A think time of 3 seconds 121 | 122 | ### Question 3 123 | 124 | A script without think time runs for a single iteration, and the iteration duration is 5 seconds. What would the iteration duration have been if the script had included a sleep of 1 second? 125 | 126 | A: 5 seconds 127 | 128 | B: 6 seconds 129 | 130 | C: 4 seconds 131 | 132 | ### Answers 133 | 134 | 1. B. A is incorrect because think time simulates the time taken for the user to interact with the application, not the time taken for the application team to respond. C is incorrect because adding think time does not increase CPU utilization; in fact, the opposite is true: it reduces CPU utilization by spacing out requests. 135 | 2. C. `sleep()` takes a value in seconds, so `sleep(3)` will pause script execution for 3 seconds. 136 | 3. B. If a script takes 5 seconds to execute without sleep, and a sleep of 1 second is included, then the script will take 6 seconds to execute. -------------------------------------------------------------------------------- /Modules/II-k6-Foundations/06-k6-Load-Test-Options.md: -------------------------------------------------------------------------------- 1 | # k6 Load Test Options 2 | 3 | Up until now, you've been running the same script with a single VU and a single iteration. In this section, you'll learn how to scale that out and run a full-sized load test against your application. 4 | 5 | Test options are configuration values that affect how your test script is executed, such as the number of VUs or iterations, the duration of your test, and more. They are also sometimes called "test parameters". 6 | 7 | k6 comes with some default test options, but there are four different ways to change the test parameters for a script: 8 | 1. 
You can include command-line flags when running a k6 script (such as `k6 run --vus 10 --iterations 30`). 9 | 2. You can define [environment variables](https://k6.io/docs/using-k6/environment-variables/) on the command line that are passed to the script. 10 | 3. You can define them within the test script itself. 11 | 4. You can include a configuration file. 12 | 13 | For now, you'll learn to do the third option: defining test parameters within the script itself. The advantages of this approach are: 14 | - Simplicity: no extra files or commands are required. 15 | - Repeatability: Adding these parameters to the script makes it easier for a colleague to run tests you've written. 16 | - Version control: Changes to the test parameters can be tracked along with the test code. 17 | 18 | To use test options within a script, add the following lines to your script. By convention, it's best to add them after the import statements and before the default function, so that the options are easily read upon opening the script: 19 | 20 | ```js 21 | export let options = { 22 | vus: 10, 23 | iterations: 40, 24 | }; 25 | ``` 26 | 27 | If you set multiple options, make sure each one ends with a comma (`,`). 28 | 29 | ## VUs 30 | 31 | ```js 32 | vus: 10, 33 | ``` 34 | 35 | In this line, you can change the number of virtual users that k6 will run.
36 | 37 | Note that if you only define VUs and no other test options, you may get the following warning: 38 | 39 | ```plain 40 | /\ |‾‾| /‾‾/ /‾‾/ 41 | /\ / \ | |/ / / / 42 | / \/ \ | ( / ‾‾\ 43 | / \ | |\ \ | (‾) | 44 | / __________ \ |__| \__\ \_____/ .io 45 | 46 | WARN[0000] the `vus=10` option will be ignored, it only works in conjunction with `iterations`, `duration`, or `stages` 47 | execution: local 48 | script: test.js 49 | output: - 50 | ``` 51 | 52 | If you set the number of VUs, you need to additionally specify how long those VUs should run, using one of the following options: 53 | - iterations 54 | - duration 55 | - stages 56 | 57 | ## Iterations 58 | 59 | ```js 60 | vus: 10, 61 | iterations: 40, 62 | ``` 63 | 64 | Setting the number of iterations in test options defines the total for *all* users. In the example above, the test will run for a total of 40 iterations, with each of the 10 users executing the script exactly 4 times. 65 | 66 | ## Duration 67 | 68 | ```js 69 | vus: 10, 70 | duration: '2m' 71 | ``` 72 | 73 | Setting the duration instructs k6 to repeat (iterate) the script for each of the specified number of users until the duration is reached. 74 | 75 | Duration can be set using `h` for hours, `m` for minutes, and `s` for seconds, like these examples: 76 | - `duration: '1h30m'` 77 | - `duration: '30s'` 78 | - `duration: '5m30s'` 79 | 80 | If you set duration but don't specify a number of VUs, k6 will use the default of 1 VU. 81 | 82 | If you set the duration in conjunction with setting the number of iterations, whichever limit is reached first applies. For example, given the following options: 83 | 84 | ```js 85 | vus: 10, 86 | duration: '5m', 87 | iterations: 40, 88 | ``` 89 | 90 | k6 will execute the test for 40 iterations or 5 minutes, *whichever ends earlier*. If it takes 1 minute to finish 40 total iterations, the test will end after 1 minute. If it takes 10 minutes to finish 40 total iterations, the test will end after 5 minutes.
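To see how these limits interact, here is the combined options object from the example above as it would appear in a script (the values are illustrative):

```js
// k6 stops at whichever limit is reached first: 40 total iterations
// shared across the 10 VUs, or 5 minutes of elapsed time.
export let options = {
  vus: 10,
  duration: '5m',
  iterations: 40,
};
```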
91 | 92 | ## Stages 93 | 94 | Defining either iterations or duration causes k6 to execute your test script using a [simple load profile](../XX-Future-Ideas/Parameters-of-a-load-test.md#Simple-load-profile): VUs are started, sustained for a certain time or number of iterations, and then ended. 95 | 96 | ![A simple load profile](../../images/load_profile-no_ramp-up_or_ramp-down.png) 97 | 98 | _Simple load profile_ 99 | 100 | What if you want to add a [ramp-up or ramp-down](../XX-Future-Ideas/Parameters-of-a-load-test.md#ramp-up-and-ramp-down-periods), so that the profile looks more like this? 101 | 102 | ![Constant load profile, with ramps](../../images/load_profile-constant.png.png) 103 | 104 | _Constant load profile, with ramps_ 105 | 106 | In that case, you may want to use [stages](https://k6.io/docs/using-k6/options/#stages). 107 | 108 | ```js 109 | export let options = { 110 | stages: [ 111 | { duration: '30m', target: 100 }, 112 | { duration: '1h', target: 100 }, 113 | { duration: '5m', target: 0 }, 114 | ], 115 | }; 116 | ``` 117 | 118 | The stages option lets you define different steps or phases for your load test, each of which can be configured with a number of VUs and a duration. The example above consists of three steps (but you can add more if you'd like). 119 | 120 | 1. The first step is a gradual ramp-up from 0 VUs to 100 VUs. 121 | 2. The second step defines the [steady state](../XX-Future-Ideas/Parameters-of-a-load-test.md#Steady-state). The load is held constant at 100 VUs for 1 hour. 122 | 3. Then, the third step is a gradual ramp-down from 100 VUs back to 0, at which point the test ends. 123 | 124 | Stages are the most versatile way to define test parameters for a single scenario. They give you flexibility in shaping the load of your test to match the situation in production that you're trying to simulate.
125 | 126 | ## The full script so far 127 | 128 | If you're using stages, here's what your script should look like so far: 129 | 130 | ```js 131 | import http from 'k6/http'; 132 | import { check, sleep } from 'k6'; 133 | 134 | export let options = { 135 | stages: [ 136 | { duration: '30m', target: 100 }, 137 | { duration: '1h', target: 100 }, 138 | { duration: '5m', target: 0 }, 139 | ], 140 | }; 141 | 142 | export default function() { 143 | let url = 'https://httpbin.test.k6.io/post'; 144 | let response = http.post(url, 'Hello world!'); 145 | check(response, { 146 | 'Application says hello': (r) => r.body.includes('Hello world!') 147 | }); 148 | 149 | sleep(Math.random() * 5); 150 | } 151 | ``` 152 | 153 | ## Test your knowledge 154 | 155 | ### Question 1 156 | 157 | You've been instructed to create a script that sends the same HTTP request exactly 100 times. Which of the following test options is the best way to accomplish this task? 158 | 159 | A: Iterations 160 | 161 | B: Stages 162 | 163 | C: Duration 164 | 165 | 166 | ### Question 2 167 | 168 | Given the test options as specified below, how long will the test be executed? 169 | 170 | ```js 171 | export let options = { 172 | vus: 10, 173 | iterations: 3, 174 | duration: '1h', 175 | }; 176 | ``` 177 | 178 | A: 10 hours 179 | 180 | B: As long as it takes to finish 3 iterations or 1h, whichever is shorter 181 | 182 | C: 1 hour plus as long as it takes to finish 3 iterations 183 | 184 | 185 | ### Question 3 186 | 187 | Which of the following test options will yield a stepped load pattern that adds 100 users within 10 minutes, holds that load steady for 30 minutes, and then continues that pattern until 300 VUs have been running for 30 minutes? 
188 | 189 | A: 190 | ```js 191 | export let options = { 192 | stages: [ 193 | { duration: '10m', target: 100 }, 194 | { duration: '30m', target: 100 }, 195 | { duration: '10m', target: 200 }, 196 | { duration: '30m', target: 200 }, 197 | { duration: '10m', target: 300 }, 198 | { duration: '30m', target: 300 }, 199 | ], 200 | }; 201 | ``` 202 | 203 | B: 204 | 205 | ```js 206 | export let options = { 207 | stages: [ 208 | { duration: '30m', target: 300 }, 209 | ], 210 | }; 211 | ``` 212 | 213 | C: 214 | 215 | ```js 216 | export let options = { 217 | vus: 300, 218 | duration: '30m', 219 | }; 220 | ``` 221 | 222 | 223 | ### Answers 224 | 225 | 1. A. Setting the number of iterations to 100 would be the best way to accomplish this task. Stages for [some executors](https://k6.io/docs/using-k6/scenarios/executors/ramping-arrival-rate) do allow you to do this as well, but not for all of them. Duration only changes how long the test will run for, not specifically how many times it iterates. 226 | 2. B. In the case of contradicting parameters, k6 will run for the shorter amount of time. In this case, 3 iterations will likely take less time than 1 hour, so the test will finish in less time than an hour. 227 | 3. A. C is incorrect because it starts all 300 VUs at once, with no ramp-up at all, and B is incorrect because it ramps up to 300 VUs at a steady, even rate. A is the only correct option because it's the only one that alternates ramp-ups with steady-state periods (no VU increases), producing a stepped load pattern. 228 | -------------------------------------------------------------------------------- /Modules/III-k6-Intermediate/03-Workload-modeling.md: -------------------------------------------------------------------------------- 1 | # Workload modeling 2 | 3 | It's not enough to know _what_ to test (which pages or endpoints to hit)—you should also think about _how_ to test. How many virtual users should you simulate? 
Will those users pause their execution to simulate "think times" of real users? Are the users new or returning? The answers to these questions can affect your test results. 4 | 5 | The process of **workload modeling** involves determining *how* to apply load against the system, and it's essential for a successful load test. 6 | 7 | Your workload model is heavily influenced by the situations or scenarios you'd like to test. The closer your load test gets to simulating those circumstances, the more *realistic* it is. Realism could mean simulating peak traffic in production, but it could also mean simulating smaller and more targeted traffic against a particular component of your system. It all depends on the situation you're trying to recreate and test. 8 | 9 | If your load-testing script isn't realistic enough, you may not achieve the expected test throughput, or you may not test the same components of a system that real users hit in production. Unrealistic test scripts and scenarios can lead to inconsistent and inaccurate results. More dangerously, they can create a false sense of confidence in what a system can withstand. 10 | 11 | ## Challenges in workload modeling 12 | 13 | Making scripts and scenarios realistic increases the value that a load test can provide. However, test realism is not an easy task. Increasing the realism of a load test can often increase the amount of time and effort required to create and maintain your test suite. There are also many factors that make human behavior hard to simulate: 14 | 15 | - **Computers are faster than humans**. Automated simulations can be executed at inhuman speeds. A machine does not have to stop to think about what to do next. 16 | - **Human behavior is unpredictable**. Sometimes, humans don't do the most logical or reasonable thing. Historical data can help identify exactly how your end users behave and inform your load-test-script behavior. 17 | - **User flows can be complex**. 
As systems grow in scope, the number of user flows that a load test needs to simulate to be realistic increases as well. A load test may need to cover multiple end-to-end flows, each of which may require different test parameters. 18 | - **Distributed systems come with multiple points of failure**. Software with event-driven or microservices-based architectures has many modular components, each of which may need to be tested and monitored. 19 | - **Many systems have multiple traffic sources.** Users' geographical locations and internet speeds affect the load they apply on the system. 20 | 21 | So, how can we make automated tests realistic despite these obstacles? 22 | 23 | ## Elements of a workload model 24 | 25 | When creating a workload model, consider these variables. 26 | 27 | ### Test parameters 28 | 29 | Test parameters are values that affect how your load-testing script is executed. Parameters include: 30 | - Duration 31 | - Number of VUs 32 | - Number of iterations 33 | - Load profile, including stages during the test 34 | 35 | Each of these parameters influences the amount and type of load that is simulated. 36 | 37 | Check out [k6 Load Test Options](../II-k6-Foundations/06-k6-Load-Test-Options.md) for more information on these parameters, or [Setting load profiles with executors](08-Setting-load-profiles-with-executors.md) for instructions on how to implement these in k6. 38 | 39 | ### Think time and pacing 40 | 41 | Think time and pacing are both types of delays in the script that simulate the pauses a real user is likely to take while accessing an application. Think time, often called sleep, can be added before or after actions within the script, while pacing is typically added between iterations. 42 | 43 | Longer delays mean fewer requests, given a fixed duration, and shorter delays mean more requests. The number of requests and how quickly they are sent may change how your system responds.
We also recommend using dynamic delays if you'd like to make the generated load look a little less regular and uniform. 44 | 45 | See [Adding think time using sleep](../II-k6-Foundations/05-Adding-think-time-using-sleep.md) for how to implement delays in k6. 46 | 47 | ### Adding static resources 48 | 49 | In web applications, static resources refer to images, client-side scripts, fonts, and other files embedded onto a page. If you want your script to access that page, you have to decide whether you want the script to also download those resources. 50 | 51 | Downloading static resources makes the script more realistic if you want to simulate end-user behavior, because web browsers automatically download them. However, if you want to download only the HTML of the page (for example, perhaps because the images are served by a CDN (Content Delivery Network) that you don't want to test), it may be more prudent *not* to download the static resources. 52 | 53 | ### Parallel requests 54 | 55 | Parallel requests are requests that are sent concurrently. Modern browsers request a certain number of static resources at the same time, so if your script requests them sequentially, that can change the load applied on your system. 56 | 57 | Refer to [Parallel requests in k6](05-Parallel-requests-in-k6.md) for instructions on using batching to implement parallel requests. 58 | 59 | ### Cache and cookie behavior 60 | 61 | When users visit a website, some resources may be saved in a cache so that subsequent requests don't require those resources to be downloaded anew. Cookies are small bits of information about previous user activities (such as the last time they visited a site) that are saved for functional, analytical, or marketing purposes. 62 | 63 | Caching and cookie behavior both affect the overall load that a script generates, and should be tailored to the test objectives.
First-time visitors to a site won't have resources cached locally, but repeat visitors may be retrieving resources from the cache. 64 | 65 | In k6, you can set cache options [using headers](https://k6.io/docs/using-k6/http-requests/#making-http-requests) and [manage cookies in a few ways](https://k6.io/docs/examples/cookies-example/). 66 | 67 | ### Test data 68 | 69 | Requesting the same resources over and over again in your script can lead to some problems. It can trigger caching on the server side, cause security errors if your script logs in with the same user repeatedly, and limit the scope of your tests because other resources are never requested. 70 | 71 | ## Test your knowledge 72 | 73 | ### Question 1 74 | 75 | Which of the following issues could have been caused by incomplete or inaccurate workload modeling? 76 | 77 | A: The load testing tool selected cannot generate the load required and scripts must be rewritten in another tool. 78 | 79 | B: Performance-related production incidents occur despite previous successful load tests. 80 | 81 | C: Development effort is wasted on features that users do not want. 82 | 83 | ### Question 2 84 | 85 | Which of the following facts about a hypothetical application are not relevant when building a workload model? 86 | 87 | A: Most end users live in Australia. 88 | 89 | B: Most end users are single mothers. 90 | 91 | C: Most end users access the application on their mobiles, using 3G/4G networks. 92 | 93 | ### Question 3 94 | 95 | Which of the following changes would increase the load on an application server? 96 | 97 | A: Decreasing think time 98 | 99 | B: Enabling caching 100 | 101 | C: Disabling downloading of static resources 102 | 103 | ### Answers 104 | 105 | 1. B. Production incidents despite successful load tests are a classic symptom that the conditions during a load test did not match the conditions in production. This may be due to many factors, one of which is workload modeling. 106 | 2. B.
Users' geographical locations are relevant because Content Delivery Networks (CDNs) do not have servers in every country, and their absence may affect overall user response time. Users' network speeds may similarly affect response times. B is the correct answer because whether or not a user is a single mother is relevant for business demographics, but not for response times. 107 | 3. A. Decreasing think time will cause a script to be executed more rapidly, and in some cases may contribute to increased resource utilization on both load generators and application servers. Enabling caching and disabling static resources have the opposite effect. 108 | -------------------------------------------------------------------------------- /Modules/III-k6-Intermediate/04-Adding-test-data.md: -------------------------------------------------------------------------------- 1 | # Adding test data 2 | In this section, you'll learn the best ways to add test data to your load testing script, making it more dynamic and realistic. 3 | 4 | ## Why add test data? 5 | 6 | Test data is information designed to be used by a test script during execution. The information makes the test more realistic by using different values for certain parameters. Repeatedly sending the same values for things like usernames and passwords may cause caching to occur. 7 | 8 | A cache represents content that is saved with the goal of increasing application performance. Caching can occur on the server side or on the client side. A server-side cache saves commonly requested resources (such as those on the homepage) so that when those resources are requested, the server can return the resources immediately instead of having to fetch them from a downstream server. 9 | 10 | A client-side cache might save the same resources so that a user's browser knows it doesn't have to make a request for those resources unless they have been changed.
This implementation reduces the number of network requests that need to be sent and is especially important for apps that are predominantly accessed through a mobile device. 11 | 12 | Caching can significantly affect load-testing results. There are situations where caching should be enabled (such as when attempting to simulate the behavior of return customers), but there are also situations where caching should be disabled (such as when simulating brand-new users). Either way, you should decide on your caching strategy intentionally, and script accordingly. 13 | 14 | Adding test data can help prevent server-side caching. Common test data includes: 15 | - Usernames and passwords, for logging into an application and performing authenticated actions 16 | - Names, addresses, emails, and other personal information for signing up for accounts or filling out contact forms 17 | - Product names for retrieving product pages 18 | - Keywords for searching a catalog 19 | - PDFs to test uploading 20 | 21 | ## Array 22 | 23 | The simplest way to add test data is with an array. In [Dynamic correlation in k6](02-Dynamic-correlation-in-k6.md), you defined an array like this: 24 | 25 | ```js 26 | let usernameArr = ['admin', 'test_user']; 27 | let passwordArr = ['123', '1234']; 28 | ``` 29 | 30 | After defining arrays, you can generate a random number to randomly pick a value from the array: 31 | 32 | ```js 33 | // Get random username and password from array 34 | let rand = Math.floor(Math.random() * usernameArr.length); 35 | let username = usernameArr[rand]; 36 | let password = passwordArr[rand]; 37 | console.log('username: ' + username, ' / password: ' + password); 38 | ``` 39 | 40 | An array is best used for very short lists of text, as in this example, or for debugging a test script. 41 | 42 | ## CSV Files 43 | 44 | CSV files are lists of information that are made up of _comma-separated values_ and are saved separately from the script. 
45 | 46 | Below is an example CSV file named `users.csv` containing usernames and passwords: 47 | ```plain 48 | username,password 49 | admin,123 50 | test_user,1234 51 | ... 52 | ``` 53 | 54 | In k6, the CSV files can then be added to the script like this: 55 | 56 | ```js 57 | import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js'; 58 | 59 | const csvData = papaparse.parse(open('users.csv'), { header: true }).data; 60 | 61 | export default function () { 62 | let rand = Math.floor(Math.random() * csvData.length); 63 | console.log('username: ', csvData[rand].username, ' / password: ', csvData[rand].password); 64 | } 65 | ``` 66 | 67 | The code imports a library called `papaparse` that lets you interact with CSV files. CSV files are best used when generating new usernames and accounts or when you'd like to be able to open the test data file in a spreadsheet application. 68 | 69 | > k6 does not allow you to place code that reads from a local filesystem within the default function. This restriction enforces the best practice of placing file reads in the init context (outside the default function). You can read more about that in [Test life cycle](https://k6.io/docs/using-k6/test-life-cycle/). 70 | 71 | ## JSON files 72 | 73 | You can also store your test data in JSON files. 
74 | 75 | Below is a JSON file called `users.json`: 76 | 77 | ```json 78 | { 79 | "users": [ 80 | { "username": "admin", "password": "123" }, 81 | { "username": "test_user", "password": "1234" } 82 | ] 83 | } 84 | ``` 85 | 86 | You can then use it in your k6 script like this: 87 | 88 | ```js 89 | const jsonData = JSON.parse(open('./users.json')).users; 90 | 91 | export default function () { 92 | let rand = Math.floor(Math.random() * jsonData.length); 93 | console.log('username: ', jsonData[rand].username, ' / password: ', jsonData[rand].password); 94 | } 95 | ``` 96 | 97 | JSON files are best used when the information to be used for the test is already exported in JSON format, or when the data is more hierarchical than a CSV file can handle. 98 | 99 | ## Shared Array 100 | 101 | A Shared Array combines some elements of the previous three approaches (the simple array, CSV files, and JSON files) while addressing a common issue with test data during test execution: high resource utilization. 102 | 103 | In the previous approaches, each VU loads its own copy of the data file into memory on the load generator. When the file is very large, this can unnecessarily use up resources on the load generator, making test results less accurate. A `SharedArray`, by contrast, keeps a single read-only copy of the data that all VUs share. 104 | 105 | To prevent this, use a `SharedArray`: 106 | 107 | ```js 108 | import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js'; 109 | import { SharedArray } from "k6/data"; 110 | 111 | const sharedData = new SharedArray("Shared Logins", function() { 112 | let data = papaparse.parse(open('users.csv'), { header: true }).data; 113 | return data; 114 | }); 115 | 116 | export default function () { 117 | let rand = Math.floor(Math.random() * sharedData.length); 118 | console.log('username: ', sharedData[rand].username, ' / password: ', sharedData[rand].password); 119 | } 120 | ``` 121 | 122 | Note that the SharedArray must be combined with one of the other approaches (in this case, a CSV file).
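The same pattern works with the other data sources too. As a sketch, here is the `SharedArray` backed by the `users.json` file shown earlier (assuming that file sits next to the script):

```js
import { SharedArray } from 'k6/data';

const sharedData = new SharedArray('Shared Logins', function () {
  // This function runs only once; every VU then shares a single read-only copy
  return JSON.parse(open('./users.json')).users;
});

export default function () {
  let rand = Math.floor(Math.random() * sharedData.length);
  console.log('username: ', sharedData[rand].username, ' / password: ', sharedData[rand].password);
}
```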
123 | 124 | Using a SharedArray is the most efficient way to add a list of data within a k6 test. 125 | 126 | ## Other test data 127 | 128 | Test data can also refer to files that need to be uploaded as part of the scenario under test. For example, an image might need to be uploaded to simulate a user uploading their profile photo. For more information on scripting these types of data uploads, [check the documentation here.](https://k6.io/docs/examples/data-uploads/) 129 | 130 | ## Test your knowledge 131 | 132 | ### Question 1 133 | 134 | In which of the following situations would it be advisable to add test data to your load testing scripts? 135 | 136 | A: Your application locks out a user account after three logins in a short amount of time. 137 | 138 | B: You want to see how the application behaves when the same user refreshes a page repeatedly. 139 | 140 | C: Both A and B. 141 | 142 | ### Question 2 143 | 144 | You have a CSV file with 100 MB of personal information that you'd like to use as test data. Which of the following approaches is the best one to use? 145 | 146 | A: SharedArray 147 | 148 | B: Simple array 149 | 150 | C: JSON files because the CSV is better converted into JSON 151 | 152 | ### Question 3 153 | 154 | All the examples on this page include a `Math.random()` function to randomly select an element from the data file. In which situations might you want to remove this randomization? 155 | 156 | A: When you want to prevent server-side caching. 157 | 158 | B: When you want to guarantee that each element of test data has been sequentially utilized by the script. 159 | 160 | C: When you want to make your tests as realistic as possible. 161 | 162 | ### Answers 163 | 164 | 1. A. You could resolve the situation described in A by adding test data of different logins that the script could use. B is incorrect, because it is in fact a good example of a use case where *not* including test data may be the better option. 165 | 2. A. 
Very large data files can have an impact on load testing results when they are copied and transferred repeatedly to every load generator. The SharedArray is a better way to handle these, although it may also be worthwhile to consider storing test data in a database. 166 | 3. B. Randomly selecting test data can prevent caching and make the test more realistic. However, random selection can also make it more difficult to determine which test data the test utilized, because the data file is not parsed sequentially. 167 | -------------------------------------------------------------------------------- /Modules/III-k6-Intermediate/05-Parallel-requests-in-k6.md: -------------------------------------------------------------------------------- 1 | # Parallel requests in k6 2 | Parallel requests are requests that are sent at the same time. Parallel requests are sometimes called "concurrent" requests. 3 | 4 | This screenshot shows the Network panel in Chrome DevTools while a site is loaded. Each bar shows when resources embedded in the page were requested throughout the first 4000 ms. 5 | 6 | ![](../../images/parallel-requests.png) 7 | 8 | The orange arrow points to three parallel requests. This indicates that the client (in this case, the browser) sent three different HTTP requests at the same time. 9 | 10 | Parallel requests influence load testing results because they increase test throughput. Increased test throughput affects load testing in two ways: 11 | - First, sending requests in parallel takes less time than sending them sequentially, so the load generator may experience higher resource utilization over a shorter amount of time. 12 | - Second, parallel requests also get to the application server faster, which could increase resource utilization on the server side as well. 13 | 14 | If parallel requests potentially increase utilization on the load generator running the script _and_ the application server, why would we use them?
15 | 16 | ## When should you use parallel requests? 17 | 18 | Parallel requests should be used when they are also used in production. In these situations, parallel requests make a test more realistic. Modern browsers have a degree of parallelism by default, so a load test for a website or web app should typically account for parallel requests. 19 | 20 | In contrast, API calls may _not_ typically be triggered simultaneously, so it may be more realistic to send requests sequentially. 21 | 22 | When in doubt about whether you should use parallel requests, use your browser's DevTools or a proxy sniffer while you access your application to see how requests are sent in production. 23 | 24 | ## Batching in k6 25 | 26 | By default, each k6 VU sends the requests in the script sequentially. To change this behavior, you can use batching. 27 | 28 | Batching signals to k6 that the requests within a batch must be sent simultaneously, and you can use it like this: 29 | 30 | ```js 31 | import { check } from 'k6'; 32 | import http from 'k6/http'; 33 | 34 | const domain = 'https://test.k6.io'; 35 | 36 | export default function () { 37 | let responses = http.batch([ 38 | ['GET', domain + '/'], 39 | ['GET', domain + '/static/css/site.css'], 40 | ['GET', domain + '/static/js/prisms.js'], 41 | ['GET', domain + '/static/favicon.ico'] 42 | ]); 43 | check(responses[0], { 44 | 'Homepage successfully loaded': (r) => r.body.includes("Collection of simple web-pages suitable for load testing"), 45 | }); 46 | } 47 | ``` 48 | 49 | The script above sends four requests in parallel: the homepage itself and three resources embedded in it. The responses are saved as an array in the variable `responses`, and the check verifies that the first element of that array, `responses[0]` (the homepage), contains text proving that the script successfully retrieved the HTML body. 50 | 51 | You can batch requests even if they are not all HTTP GET requests.
For example, you could use HTTP GET and HTTP POST requests in the same batch. You can find [more information here](https://k6.io/docs/javascript-api/k6-http/batch-requests/). 52 | 53 | ## Test your knowledge 54 | 55 | ### Question 1 56 | 57 | Which of the following statements is true? 58 | 59 | A: Whether or not to use parallel requests depends on your test scenario. 60 | 61 | B: Using parallel requests is always recommended as a performance testing best practice. 62 | 63 | C: Parallel requests should always be used for testing websites. 64 | 65 | ### Question 2 66 | 67 | Which of the following might be a side effect of including parallel requests in your script? 68 | 69 | A: Your load generator sends fewer requests when some are batched. 70 | 71 | B: Your load test's executed requests per second (rps) will increase. 72 | 73 | C: The application server caches batched requests and performance is improved. 74 | 75 | ### Question 3 76 | 77 | When might you _not_ want to use parallel requests? 78 | 79 | A: When you want to increase the number of requests your test is sending 80 | 81 | B: When you have requests using multiple types of HTTP methods 82 | 83 | C: When you're testing API endpoints 84 | 85 | ### Answers 86 | 87 | 1. A. Parallel requests are sometimes, but not always, useful. For example, they are useful if you're trying to simulate a user accessing a web app on a browser, but not so useful if you're trying to mimic sequential requests to an API endpoint. 88 | 2. B. All other things being equal, increasing the parallelism of your requests will increase the throughput (rps) of your test. 89 | 3. A. The key here is your objective, not *what* you're testing. If you want to increase your test throughput (the number of requests sent by k6), parallel requests *would* be a valid way to do that. Whether you're using multiple HTTP methods or testing API endpoints is beside the point.
90 | -------------------------------------------------------------------------------- /Modules/III-k6-Intermediate/08-Setting-load-profiles-with-executors/Constant-VUs-Exercises.md: -------------------------------------------------------------------------------- 1 | # Constant VUs Executor 2 | 3 | As noted in [Setting load profiles with executors](../08-Setting-load-profiles-with-executors.md#Constant-VUs), the _Constant VUs_ executor has a primary focus on the number of _virtual users (VUs)_ running for a specified timeframe. 4 | 5 | ## Exercises 6 | 7 | For our exercises, we're going to start by using a very basic script that simply performs an HTTP request and then waits one second before completing the test iteration. We're providing some console output as things change. 8 | 9 | ### Creating our script 10 | 11 | Let's begin by implementing our test script. Create a file named _test.js_ with the following content: 12 | 13 | ```js 14 | import http from 'k6/http'; 15 | import { sleep } from 'k6'; 16 | 17 | export const options = { 18 | scenarios: { 19 | k6_workshop: { 20 | executor: 'constant-vus', 21 | duration: '30s', 22 | }, 23 | }, 24 | }; 25 | 26 | export default function () { 27 | console.log(`[VU: ${__VU}, iteration: ${__ITER}] Starting iteration...`); 28 | http.get('https://test.k6.io/contacts.php'); 29 | sleep(1); 30 | } 31 | ``` 32 | 33 | > :point_up: If you've been working through these workshop exercises in order, you may have noticed that this time, our initial script includes a `duration`. This option is **required** for the executor. Attempting to run the script without this option will result in a configuration error. 34 | 35 | ### Initial test run 36 | 37 | We're starting with the bare minimum to use the executor, which requires both the `executor` and the `duration` for the test.
Now that we've defined our basic script, we'll go ahead and run k6: 38 | 39 | ```bash 40 | k6 run test.js 41 | ``` 42 | 43 | Looking at the output from the running script, you should see that a single virtual user (VU) is continually performing test iterations. The test will be complete once the configured 30-second duration has been reached. The number of iterations performed depends entirely on how long each iteration of the `default function ()` code block takes to run. 44 | 45 | ```bash 46 | INFO[0026] [VU: 1, iteration: 24] Starting iteration... source=console 47 | INFO[0027] [VU: 1, iteration: 25] Starting iteration... source=console 48 | INFO[0028] [VU: 1, iteration: 26] Starting iteration... source=console 49 | INFO[0029] [VU: 1, iteration: 27] Starting iteration... source=console 50 | 51 | running (0m30.5s), 0/1 VUs, 28 complete and 0 interrupted iterations 52 | k6_workshop ✓ [======================================] 1 VUs 30s 53 | ``` 54 | 55 | ### Change the concurrency 56 | 57 | Once again, our test has only run a single virtual user, or VU. We can update the `vus` option to increase the number of requests being performed simultaneously. Let's change the `vus` option to simulate 10 users by updating the `options` section of our test script: 58 | 59 | ```js 60 | export const options = { 61 | scenarios: { 62 | k6_workshop: { 63 | executor: 'constant-vus', 64 | duration: '30s', 65 | vus: 10, 66 | }, 67 | }, 68 | }; 69 | ``` 70 | 71 | Run the script once more: 72 | 73 | ```bash 74 | k6 run test.js 75 | ``` 76 | 77 | Now, you should see much more activity, as we now have 10 VUs performing iterations over and over until the configured duration has been reached. 78 | 79 | ```bash 80 | INFO[0029] [VU: 4, iteration: 27] Starting iteration... source=console 81 | INFO[0029] [VU: 8, iteration: 27] Starting iteration... source=console 82 | INFO[0029] [VU: 3, iteration: 27] Starting iteration... source=console 83 | INFO[0029] [VU: 9, iteration: 27] Starting iteration...
source=console 84 | INFO[0029] [VU: 6, iteration: 27] Starting iteration... source=console 85 | 86 | running (0m30.5s), 00/10 VUs, 280 complete and 0 interrupted iterations 87 | k6_workshop ✓ [======================================] 10 VUs 30s 88 | ``` 89 | 90 | > :point_up: The iteration counter is 0-based, meaning a count of 27 is _actually_ 28 iterations. 91 | 92 | From the results, you may see some interleaving of requests as well as some VUs potentially performing more iterations than others. This will be dependent upon response times for individual requests. 93 | 94 | ### Wrapping up 95 | 96 | With this exercise, you should see how to run a very basic test and how you can control the amount of concurrency for a specified duration. Additionally, you learned that the number of iterations performed is completely dependent upon the time required to complete each iteration of the `default function ()` block. 97 | -------------------------------------------------------------------------------- /Modules/III-k6-Intermediate/08-Setting-load-profiles-with-executors/Externally-Controlled-Exercises.md: -------------------------------------------------------------------------------- 1 | # Externally Controlled Executor 2 | 3 | As noted in [Setting load profiles with executors](../08-Setting-load-profiles-with-executors.md#Externally-Controlled), this particular executor delegates the control of VUs and the running state of tests to external processes. Feel free to use Bash, Python, or some automation component; the source of these processes is of no consequence to the executor. 4 | 5 | The focus of the executor will be to set up the test scenario and provide constraints on the overall duration and an allowable number of virtual users. From this point, a running test will be in somewhat of a _holding pattern_ waiting for further instructions. 
6 | 7 | Interaction with the running test utilizes either the [REST APIs](https://k6.io/docs/misc/k6-rest-api/) exposed by the k6 process, or by using the `k6` command line interface ([CLI](https://k6.io/blog/how-to-control-a-live-k6-test/)). 8 | 9 | ## Exercises 10 | For our exercises, we're going to start by using a very basic script that simply performs an HTTP request and then waits three seconds before completing the test iteration. We're providing some console output as things change. 11 | 12 | The configured `options` will be the absolute minimum required for the `externally-controlled` executor. 13 | 14 | ### Create our test script 15 | Create a file named _test.js_ with the following content: 16 | ```js 17 | import http from 'k6/http'; 18 | import { sleep } from 'k6'; 19 | 20 | export const options = { 21 | scenarios: { 22 | k6_workshop: { 23 | executor: 'externally-controlled', 24 | duration: '5m', 25 | }, 26 | }, 27 | }; 28 | 29 | export default function () { 30 | console.log(`[VU: ${__VU}, iteration: ${__ITER}] Starting iteration...`); 31 | http.get('https://test.k6.io/contacts.php'); 32 | sleep(3); 33 | } 34 | ``` 35 | 36 | ### Initial test run 37 | Open a _terminal_ window within the same directory as the `test.js` script. We're now going to execute our script using the k6 binary: 38 | ```bash 39 | k6 run test.js 40 | ``` 41 | k6 should now start. You should see a timer counting up as the test is running, but nothing much is happening. We're not seeing our expected console message included in our script! We'll stay in this holding pattern until the timer reaches the configured `duration` from the script options. 42 | 43 | ### Scaling up VUs 44 | No tests were actually performed due to there being `0` virtual users (VUs) started by the script. k6 waits for the external process to _scale up_ VUs. To scale up, we're going to use the 'k6' command line. 45 | 46 | Let's open another _terminal_ window. This time, however, the directory should not matter. 
If your script timed out in the meantime, start it back up again using `k6 run test.js`. 47 | ```bash 48 | k6 scale --vus 2 --max 10 49 | ``` 50 | > :point_up: If you don't specify the `maxVUs` in your script options, any request to scale up will fail unless you provide a `--max` with the scale request! 51 | 52 | Now that VUs have been scaled up, you should see console output showing that each virtual user is executing the test code. 53 | 54 | ### Controlling the execution engine 55 | Continuing with the currently running test from the previous step, let's play with other options available to control the overall test execution. 56 | 57 | Using an idle _terminal_ window (one not currently running the k6 test), we'll issue the following commands: 58 | ```bash 59 | # Halt the currently running test; console output should stop 60 | k6 pause 61 | 62 | # Go ahead and scale up the desired number of VUs 63 | k6 scale --vus 5 64 | 65 | # Resume the test; output should confirm the test is running with additional VUs 66 | k6 resume 67 | ``` 68 | 69 | ### Inspecting state 70 | At any time, you can use the `k6` command to inquire as to the state of a running instance: 71 | 72 | ```bash 73 | $ k6 status 74 | status: 7 75 | paused: "false" 76 | vus: "5" 77 | vus-max: "10" 78 | stopped: false 79 | running: true 80 | tainted: false 81 | ``` 82 | 83 | ### Gather a metrics snapshot 84 | While your script is running, whether paused or active, you can poll the current metrics from the running script. This will dump the current metrics in YAML format to your console, which can be _piped_ to an output file. 85 | 86 | ```bash 87 | $ k6 stats 88 | ... 89 | - name: http_req_duration 90 | type: 91 | type: trend 92 | valid: true 93 | contains: 94 | type: time 95 | valid: true 96 | tainted: "" 97 | sample: 98 | avg: 60.06574999999998 99 | max: 71.202 100 | med: 59.515 101 | min: 55.203 102 | p(90): 63.559000000000005 103 | p(95): 65.21095 104 | ...
105 | ``` 106 | 107 | ### Ending your test 108 | If you wish to complete a test before the `duration` timeframe has been met, you will have to use the `Ctrl+C` keyboard command in the _terminal_ running your test or use the REST API: 109 | ```bash 110 | curl -X PATCH \ 111 | http://localhost:6565/v1/status \ 112 | -H 'Content-Type: application/json' \ 113 | -d '{ 114 | "data": { 115 | "attributes": { 116 | "stopped": true 117 | }, 118 | "id": "default", 119 | "type": "status" 120 | } 121 | }' 122 | ``` 123 | 124 | > The `k6` CLI does not provide an equivalent `k6 stop` command. Simply use `Ctrl+C` to end a test. 125 | 126 | ### Script options 127 | Our initial script provides the bare minimum to begin your test. 128 | 129 | Consider specifying your `maxVUs` . As noted previously, the `maxVUs` is not required. However, any attempt to scale will be met with an error unless a `--max` is specified with the initial scaling request. The `maxVUs` value may be overridden using the `--max` argument if deemed necessary. 130 | 131 | The final script option is the `vus` setting. When provided, this number of VUs will begin processing once the script is started thereby eliminating the need for an initial scale-up. 132 | 133 | Let's update the _options_ declaration in our _test.js_ script to include these additional settings: 134 | ```js 135 | export const options = { 136 | scenarios: { 137 | k6_workshop: { 138 | executor: 'externally-controlled', 139 | duration: '5m', 140 | maxVUs: 10, 141 | vus: 2, 142 | }, 143 | }, 144 | }; 145 | ``` 146 | 147 | Running our script now will immediately show that our tests are being executed by 2 virtual users: 148 | ```bash 149 | $ k6 run test.js 150 | 151 | /\ |‾‾| /‾‾/ /‾‾/ 152 | /\ / \ | |/ / / / 153 | / \/ \ | ( / ‾‾\ 154 | / \ | |\ \ | (‾) | 155 | / __________ \ |__| \__\ \_____/ .io 156 | 157 | execution: local 158 | script: scripts/test.js 159 | output: - 160 | 161 | scenarios: (100.00%) 1 scenario, 10 max VUs, 5m0s max duration (incl. 
graceful stop): 162 | * k6_workshop: Externally controlled execution with 2 VUs, 10 max VUs, 5m0s duration 163 | 164 | INFO[0000] [VU: 4, iteration: 0] Starting iteration... source=console 165 | INFO[0000] [VU: 1, iteration: 0] Starting iteration... source=console 166 | INFO[0003] [VU: 1, iteration: 1] Starting iteration... source=console 167 | INFO[0003] [VU: 4, iteration: 1] Starting iteration... source=console 168 | INFO[0006] [VU: 1, iteration: 2] Starting iteration... source=console 169 | INFO[0006] [VU: 4, iteration: 2] Starting iteration... source=console 170 | ``` 171 | 172 | ### Wrapping up 173 | So that's it! Hopefully, you see the additional power in being able to control your k6 load test from an external source! 174 | -------------------------------------------------------------------------------- /Modules/III-k6-Intermediate/08-Setting-load-profiles-with-executors/Per-VU-Iterations-Exercises.md: -------------------------------------------------------------------------------- 1 | # Per VU Iterations Executor 2 | 3 | As noted in [Setting load profiles with executors](../08-Setting-load-profiles-with-executors.md#Per-VU-Iterations), _Per VU Iterations_ is an executor with a focus on _iterations_ performed by a _virtual user (VU)_. 4 | 5 | ## Exercises 6 | 7 | For our exercises, we're going to start by using a very basic script that simply performs an HTTP request and then waits one second before completing the test iteration. We're providing some console output as things change. 8 | 9 | ### Creating our script 10 | 11 | Let's begin by implementing our test script. 
Create a file named _test.js_ with the following content: 12 | 13 | ```js 14 | import http from 'k6/http'; 15 | import { sleep } from 'k6'; 16 | 17 | export const options = { 18 | scenarios: { 19 | k6_workshop: { 20 | executor: 'per-vu-iterations', 21 | }, 22 | }, 23 | }; 24 | 25 | export default function () { 26 | console.log(`[VU: ${__VU}, iteration: ${__ITER}] Starting iteration...`); 27 | http.get('https://test.k6.io/contacts.php'); 28 | sleep(1); 29 | } 30 | ``` 31 | 32 | ### Initial test run 33 | 34 | We're starting with the bare minimum to use the executor, which only requires that we specify the `executor` itself. Now that we've defined our basic script, we'll go ahead and run k6: 35 | 36 | ```bash 37 | k6 run test.js 38 | ``` 39 | 40 | Looking at our results, you should see confirmation that we ran a single test iteration from a single virtual user. 41 | 42 | ```bash 43 | INFO[0000] [VU: 1, iteration: 0] Starting iteration... source=console 44 | 45 | running (00m01.3s), 0/1 VUs, 1 complete and 0 interrupted iterations 46 | k6_workshop ✓ [======================================] 1 VUs 00m01.3s/10m0s 1/1 iters, 1 per VU 47 | ``` 48 | 49 | ### Increase the virtual users 50 | 51 | Because we didn't specify the number of virtual users for our test, k6 defaulted to a single user. As implied by the name, the number of VUs is a significant aspect of this executor. Let's increase the number of virtual users using the `vus` option so that we simulate 10 users. We'll update the `options` section of our test script: 52 | 53 | ```js 54 | export const options = { 55 | scenarios: { 56 | k6_workshop: { 57 | executor: 'per-vu-iterations', 58 | vus: 10, 59 | }, 60 | }, 61 | }; 62 | ``` 63 | 64 | Run the script again: 65 | 66 | ```bash 67 | k6 run test.js 68 | ``` 69 | 70 | Taking a look at the output, you'll now see that our test ran 10 iterations. In other words, _1 iteration per VU_. 71 | 72 | ```bash 73 | INFO[0000] [VU: 7, iteration: 0] Starting iteration...
source=console 74 | INFO[0000] [VU: 2, iteration: 0] Starting iteration... source=console 75 | INFO[0000] [VU: 1, iteration: 0] Starting iteration... source=console 76 | INFO[0000] [VU: 5, iteration: 0] Starting iteration... source=console 77 | INFO[0000] [VU: 3, iteration: 0] Starting iteration... source=console 78 | INFO[0000] [VU: 4, iteration: 0] Starting iteration... source=console 79 | INFO[0000] [VU: 8, iteration: 0] Starting iteration... source=console 80 | INFO[0000] [VU: 9, iteration: 0] Starting iteration... source=console 81 | INFO[0000] [VU: 6, iteration: 0] Starting iteration... source=console 82 | INFO[0000] [VU: 10, iteration: 0] Starting iteration... source=console 83 | 84 | running (00m01.7s), 00/10 VUs, 10 complete and 0 interrupted iterations 85 | k6_workshop ✓ [======================================] 10 VUs 00m01.7s/10m0s 10/10 iters, 1 per VU 86 | ``` 87 | 88 | ### Change the iterations 89 | 90 | In the previous example, we had not specified how many iterations we desired, so k6 used the default value of 1 for each of the 10 VUs, therefore resulting in 10 iterations overall. Expanding on this, let's increase the `iterations` option to 20. Because the iterations are _per VU_, we should expect 200 (`10 VUs * 20 iterations`) total iterations for the test run. 91 | 92 | ```js 93 | export const options = { 94 | scenarios: { 95 | k6_workshop: { 96 | executor: 'per-vu-iterations', 97 | vus: 10, 98 | iterations: 20, 99 | }, 100 | }, 101 | }; 102 | ``` 103 | 104 | Once again, we'll execute the script with k6: 105 | 106 | ```bash 107 | k6 run test.js 108 | ``` 109 | 110 | As expected, our test ran 200 iterations in total. Let's look a little more closely at these results: 111 | 112 | ```bash 113 | INFO[0022] [VU: 1, iteration: 19] Starting iteration... source=console 114 | INFO[0022] [VU: 8, iteration: 19] Starting iteration... source=console 115 | INFO[0022] [VU: 9, iteration: 18] Starting iteration... 
source=console 116 | INFO[0022] [VU: 2, iteration: 18] Starting iteration... source=console 117 | INFO[0022] [VU: 3, iteration: 19] Starting iteration... source=console 118 | INFO[0022] [VU: 5, iteration: 19] Starting iteration... source=console 119 | INFO[0022] [VU: 6, iteration: 19] Starting iteration... source=console 120 | INFO[0022] [VU: 10, iteration: 19] Starting iteration... source=console 121 | INFO[0022] [VU: 4, iteration: 19] Starting iteration... source=console 122 | INFO[0022] [VU: 7, iteration: 18] Starting iteration... source=console 123 | INFO[0023] [VU: 9, iteration: 19] Starting iteration... source=console 124 | INFO[0023] [VU: 2, iteration: 19] Starting iteration... source=console 125 | INFO[0024] [VU: 7, iteration: 19] Starting iteration... source=console 126 | 127 | running (00m25.1s), 00/10 VUs, 200 complete and 0 interrupted iterations 128 | k6_workshop ✓ [======================================] 10 VUs 00m25.1s/10m0s 200/200 iters, 20 per VU 129 | 130 | ``` 131 | 132 | > :point_up: The iteration counter is 0-based, meaning a count of 19 is _actually_ 20 iterations. 133 | 134 | From the output, you should note that _VU #1_ completed early when compared to _VU #7_. In fact, _VU #7_ still had to complete 2 iterations **after** _VU #1_ was already finished. This can be considered _fair-share scheduling_; each VU performs the same amount of work. 135 | 136 | It's similar to your early days in school: once you finished your exam, you had to wait idly until the rest of the class finished or the school bell rang. Speaking of ringing the school bell... 137 | 138 | ### Setting a time limit 139 | 140 | So far, with our example, we've had ample time for our script to finish the desired iterations. Because we hadn't specified the `maxDuration`, k6 uses the default value of 10 minutes.
141 | 142 | We'll update our test script to include a `maxDuration` of 10 seconds: 143 | 144 | ```js 145 | export const options = { 146 | scenarios: { 147 | k6_workshop: { 148 | executor: 'per-vu-iterations', 149 | vus: 10, 150 | iterations: 20, 151 | maxDuration: '10s', 152 | }, 153 | }, 154 | }; 155 | ``` 156 | 157 | > :point_up: Durations are configured as string values composed of a positive integer and a suffix representing the time unit. For example, "s" for seconds, "m" for minutes. 158 | 159 | As before, run the script with `k6 run test.js`. 160 | 161 | Letting the script run, we see that it was not able to complete all iterations before reaching the time limit. _Pencils down, class!_ 162 | 163 | ```bash 164 | INFO[0010] [VU: 9, iteration: 9] Starting iteration... source=console 165 | INFO[0010] [VU: 6, iteration: 9] Starting iteration... source=console 166 | INFO[0010] [VU: 10, iteration: 9] Starting iteration... source=console 167 | 168 | running (11.0s), 00/10 VUs, 95 complete and 0 interrupted iterations 169 | k6_workshop ✓ [======================================] 10 VUs 10s 095/200 iters, 20 per VU 170 | ``` 171 | Our script was only able to complete 95 iterations within the allowable 10-second timeframe. Our earlier run showed that the full 200 iterations typically complete in about 25 seconds. 172 | 173 | We could use that information as a baseline to establish a _Service Level Agreement (SLA)_ for the service being tested; we'll account for some fluctuation by setting the `maxDuration` to 30 seconds. If tests are not able to complete in that timeframe, it's possible the service performance has degraded enough to warrant investigation. 174 | 175 | ### Wrapping up 176 | 177 | With this exercise, you should see how to run a very basic test and how you can control the number of iterations and virtual users, and even set time limits. Additionally, you see that the distribution of tests amongst VUs is _fairly scheduled_ when using _Per VU Iterations_. 
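To make the _fair-share_ behavior concrete, here's a toy model in plain JavaScript (not k6 code; the per-iteration timings are assumed purely for illustration):

```js
// Under per-VU scheduling, every VU is assigned exactly the same number of
// iterations up front. A fast VU finishes early and then sits idle; it
// cannot take over work from slower VUs.
function perVuFinishTimes(secondsPerIteration, iterationsPerVu) {
  return secondsPerIteration.map((s) => s * iterationsPerVu);
}

// Assumed speeds: one VU takes 1s per iteration, another takes 1.25s.
console.log(perVuFinishTimes([1, 1.25], 20)); // → [ 20, 25 ]
```

The faster VU is done at 20 seconds, but the test keeps running until the slower VU finishes its own 20 iterations at 25 seconds, just like _VU #1_ and _VU #7_ above.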
178 | -------------------------------------------------------------------------------- /Modules/III-k6-Intermediate/08-Setting-load-profiles-with-executors/Ramping-VUs-Exercises.md: -------------------------------------------------------------------------------- 1 | # Ramping VUs Executor 2 | 3 | As noted in [Setting load profiles with executors](../08-Setting-load-profiles-with-executors.md#Ramping-VUs), the _Ramping VUs_ executor primarily focuses on the number of _virtual users (VUs)_ within _stages_. 4 | 5 | ## Exercises 6 | 7 | For our exercises, we're going to start by using a very basic script that simply performs an HTTP request and then waits one second before completing the test iteration. We're providing some console output as things change. 8 | 9 | ### Creating our script 10 | 11 | Let's begin by implementing our test script. Create a file named _test.js_ with the following content: 12 | 13 | ```js 14 | import http from 'k6/http'; 15 | import { sleep } from 'k6'; 16 | 17 | export const options = { 18 | scenarios: { 19 | k6_workshop: { 20 | executor: 'ramping-vus', 21 | stages: [ 22 | { target: 10, duration: "30s" }, 23 | ], 24 | }, 25 | }, 26 | }; 27 | 28 | export default function () { 29 | console.log(`[VU: ${__VU}, iteration: ${__ITER}] Starting iteration...`); 30 | http.get('https://test.k6.io/contacts.php'); 31 | sleep(1); 32 | } 33 | ``` 34 | 35 | Our _"basic"_ script is a little more complicated than some of our previous examples. This is due to the need to define `stages` in addition to the `executor` itself. At least one _stage_ is required, and each stage must include both a `target` and a `duration`. 36 | 37 | ### Initial test run 38 | 39 | Let's go ahead and run our script using the k6 CLI: 40 | 41 | ```bash 42 | k6 run test.js 43 | ``` 44 | 45 | Looking at the output from the script, you should see that several virtual users (VUs) performed test iterations over a 30-second period. 
46 | 47 | ```bash 48 | INFO[0029] [VU: 2, iteration: 18] Starting iteration... source=console 49 | INFO[0029] [VU: 7, iteration: 2] Starting iteration... source=console 50 | INFO[0029] [VU: 4, iteration: 21] Starting iteration... source=console 51 | INFO[0029] [VU: 6, iteration: 15] Starting iteration... source=console 52 | INFO[0030] [VU: 5, iteration: 12] Starting iteration... source=console 53 | 54 | running (0m31.0s), 00/10 VUs, 141 complete and 0 interrupted iterations 55 | k6_workshop ✓ [======================================] 00/10 VUs 30s 56 | ``` 57 | 58 | > :point_up: The iteration counter is 0-based, meaning a count of 12 is _actually_ 13 iterations. 59 | 60 | A closer inspection of the iteration counts may reveal something odd. The counts seem to be all over the board: _VU #7_ only performed 3 iterations, while _VU #4_ performed 22. As with the [Constant VUs](../08-Setting-load-profiles-with-executors.md#Constant-VUs) executor, each virtual user executes the `default` function continuously, so how can there be such a disparity? 61 | 62 | This disparity in iteration counts is due to the _ramping_ aspect of the executor. k6 will linearly scale the number of running VUs up or down to achieve the `target` number of VUs defined within the stage. The `duration` determines how long the scaling will take. 63 | 64 | Because we did not specify a `startVUs` option, k6 used the default value of 1. Therefore, our test started with a single VU, then _ramped up_ to 10 VUs over a period of 30 seconds. For this reason, we may then infer that _VU #7_ became active much later in the stage than _VU #4_. 65 | 66 | ```text 67 | VUs 68 | 69 | 10 | ....... 70 | | ......./ 71 | 5 | ......./ 72 | | ......./ 73 | 1 |......./ 74 | +---------------------------------------+ 30s 75 | S T A G E # 1 76 | ``` 77 | 78 | ### Invert the slope 79 | 80 | Let's invert the slope of our test by changing our `startVUs` and the `target` for the stage. 
We'll update the `options` in our test script as follows: 81 | 82 | ```js 83 | export const options = { 84 | scenarios: { 85 | k6_workshop: { 86 | executor: 'ramping-vus', 87 | stages: [ 88 | { target: 1, duration: "30s" }, 89 | ], 90 | startVUs: 10, 91 | }, 92 | }, 93 | }; 94 | ``` 95 | Run the script again with `k6 run test.js`: ```bash 96 | INFO[0026] [VU: 3, iteration: 24] Starting iteration... source=console 97 | INFO[0027] [VU: 6, iteration: 25] Starting iteration... source=console 98 | INFO[0027] [VU: 7, iteration: 25] Starting iteration... source=console 99 | INFO[0028] [VU: 6, iteration: 26] Starting iteration... source=console 100 | INFO[0028] [VU: 7, iteration: 26] Starting iteration... source=console 101 | INFO[0029] [VU: 6, iteration: 27] Starting iteration... source=console 102 | INFO[0029] [VU: 7, iteration: 27] Starting iteration... source=console 103 | 104 | running (0m30.5s), 00/10 VUs, 168 complete and 0 interrupted iterations 105 | k6_workshop ✓ [======================================] 00/10 VUs 30s 106 | ``` 107 | 108 | This time, you can see that the number of running VUs diminishes over the duration, so fewer and fewer VUs reach the higher iteration counts. 109 | 110 | ```text 111 | VUs 112 | 113 | 10 |....... 114 | | \....... 115 | 5 | \....... 116 | | \....... 117 | 1 | \....... 118 | +---------------------------------------+ 30s 119 | S T A G E # 1 120 | ``` 121 | 122 | When _ramping down_ VUs as we did in this last exercise, k6 provides some flexibility when reclaiming resources. By default, VUs targeted for removal are given a 30-second grace period during which to complete any iteration they may be working on. This grace period may be adjusted using the `gracefulRampDown` option to fine-tune the value if necessary. 123 | 124 | ### Simulate spikes in active users 125 | 126 | The real power of _ramping_ is the ability to simulate activity spikes. This can be done by defining multiple stages. 127 | 128 | ```text 129 | VUs 130 | 131 | 10 | . 132 | | / \ 133 | 5 | / \ 134 | | / \............ 
135 | 1 |............./ 136 | +------------+---+---+------------+ 30s 137 | #1 #2 #3 #4 138 | ``` 139 | 140 | From our lovely _ASCII art_ diagram, we'll modify our script to run 4 stages to simulate a spike in virtual users. Let's alter our `startVUs`, first stage, and add new stages to our simulation: 141 | 142 | ```js 143 | export const options = { 144 | scenarios: { 145 | k6_workshop: { 146 | executor: 'ramping-vus', 147 | stages: [ 148 | { target: 1, duration: "12s" }, 149 | { target: 10, duration: "3s" }, 150 | { target: 3, duration: "3s" }, 151 | { target: 3, duration: "12s" }, 152 | ], 153 | startVUs: 1, 154 | }, 155 | }, 156 | }; 157 | ``` 158 | 159 | Run the script once more: 160 | 161 | ```bash 162 | k6 run test.js 163 | ``` 164 | 165 | Now, you should notice that the test starts slowly, then ramps up quickly as the spike occurs, then ramps down to a moderate pace until the end of the test. 166 | 167 | ```bash 168 | INFO[0028] [VU: 3, iteration: 26] Starting iteration... source=console 169 | INFO[0028] [VU: 2, iteration: 14] Starting iteration... source=console 170 | INFO[0028] [VU: 7, iteration: 15] Starting iteration... source=console 171 | INFO[0029] [VU: 3, iteration: 27] Starting iteration... source=console 172 | INFO[0029] [VU: 2, iteration: 15] Starting iteration... source=console 173 | INFO[0029] [VU: 7, iteration: 16] Starting iteration... source=console 174 | 175 | running (0m30.7s), 00/10 VUs, 80 complete and 0 interrupted iterations 176 | k6_workshop ✓ [======================================] 00/10 VUs 30s 177 | ``` 178 | 179 | ### Wrapping up 180 | 181 | With this exercise, you should be able to see the power of being able to ramp up and down the number of VUs in order to model your test activity. 
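As a parting sketch, `stages` also let you model the classic ramp-up, steady-state, ramp-down profile. The targets and durations below are illustrative; adjust them to your own workload model:

```js
export const options = {
  scenarios: {
    k6_workshop: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { target: 10, duration: '30s' }, // ramp up from 0 to 10 VUs
        { target: 10, duration: '2m' },  // hold steady at 10 VUs
        { target: 0, duration: '30s' },  // ramp back down to 0
      ],
      gracefulRampDown: '10s', // shorten the default 30s grace period
    },
  },
};
```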
182 | -------------------------------------------------------------------------------- /Modules/III-k6-Intermediate/08-Setting-load-profiles-with-executors/Shared-Iterations-Exercises.md: -------------------------------------------------------------------------------- 1 | # Shared Iterations Executor 2 | 3 | As noted in [Setting load profiles with executors](../08-Setting-load-profiles-with-executors.md#Shared-Iterations), _Shared Iterations_ is the most basic of the executors. As can be inferred from the name, the primary focus will be the number of _iterations_ for your test; this is the number of times your test function will be run. 4 | 5 | ## Exercises 6 | For our exercises, we're going to start by using a very basic script which simply performs an HTTP request then waits one second before completing the test iteration. We're providing some console output as things change. 7 | 8 | ### Creating our script 9 | Let's begin by implementing our test script. Create a file named _test.js_ with the following content: 10 | ```js 11 | import http from 'k6/http'; 12 | import { sleep } from 'k6'; 13 | 14 | export const options = { 15 | scenarios: { 16 | k6_workshop: { 17 | executor: 'shared-iterations', 18 | }, 19 | }, 20 | }; 21 | 22 | export default function () { 23 | console.log(`[VU: ${__VU}, iteration: ${__ITER}] Starting iteration...`); 24 | http.get('https://test.k6.io/contacts.php'); 25 | sleep(1); 26 | } 27 | ``` 28 | 29 | ### Initial test run 30 | We're starting with the bare-minimum to use the executor, which only requires that we specify the `executor` itself. Now that we've defined our basic script, we'll go ahead and run k6: 31 | 32 | ```bash 33 | k6 run test.js 34 | ``` 35 | 36 | Looking at our results, you should see confirmation that we truly ran a single test iteration. 37 | 38 | ```bash 39 | INFO[0000] [VU: 1, iteration: 0] Starting iteration... 
source=console 40 | 41 | running (00m02.3s), 0/1 VUs, 1 complete and 0 interrupted iterations 42 | k6_workshop ✓ [======================================] 1 VUs 00m02.3s/10m0s 1/1 shared iters 43 | ``` 44 | 45 | ### Change the iterations 46 | A test of one request doesn't tell us much, so let's fix that. Let's bump up the number of `iterations` our test will perform by modifying the `options` for the test. 47 | ```js 48 | export const options = { 49 | scenarios: { 50 | k6_workshop: { 51 | executor: 'shared-iterations', 52 | iterations: 200, 53 | }, 54 | }, 55 | }; 56 | ``` 57 | Once again, we'll execute the script with k6: 58 | ```bash 59 | k6 run test.js 60 | ``` 61 | 62 | Wow...this is taking some time. Is something wrong? No. Let's go ahead and interrupt this test by using `Ctrl+C` if it hasn't already finished. 63 | 64 | We're artificially adding time to our tests, e.g. `sleep(1)`, for illustrative purposes. Because we did not specify the number of `vus`, or virtual users (VUs), k6 is running as a single user performing the iterations one after another. At 200 iterations with the artificial delay, our test would take over 3 minutes! 65 | 66 | ### Add a time limit 67 | When running a script locally, it's easy to see if something is taking longer than expected and terminate the process. In an automated pipeline, the system may gladly wait for the process to complete without regard to the necessity of waiting or the cost it may present. 68 | 69 | For this, the executor provides a `maxDuration` option to set a time limit on the test. If none is specified, the executor will use a 10-minute timeout by default, which may be too much or too little. A best practice is to specify the `maxDuration` based upon your best guess or upon previous testing. 
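A quick back-of-the-envelope estimate can supply that starting guess. Here's a sketch in plain JavaScript (not k6 code), assuming each iteration costs the scripted 1-second sleep plus roughly 0.1 seconds of response time:

```js
// Estimate how long an iteration-based test will run.
// secondsPerIteration is an assumption: 1s sleep + ~0.1s response time.
function estimateSeconds(iterations, secondsPerIteration, vus) {
  return Math.round((iterations * secondsPerIteration) / vus);
}

console.log(estimateSeconds(200, 1.1, 1)); // → 220
```

Roughly 220 seconds for a single VU, which matches the "over 3 minutes" observation above. Next, we'll deliberately set a much smaller limit to see what happens when the test runs out of time.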
70 | 71 | We'll update our test script to include a `maxDuration` of 30 seconds: 72 | ```js 73 | export const options = { 74 | scenarios: { 75 | k6_workshop: { 76 | executor: 'shared-iterations', 77 | iterations: 200, 78 | maxDuration: '30s', 79 | }, 80 | }, 81 | }; 82 | ``` 83 | > :point_up: Durations are configured as string values comprised of a positive integer and a suffix representing the time unit. For example, "s" for seconds, "m" for minutes. 84 | 85 | As before, run the script with `k6 run test.js`. 86 | 87 | Allowing the script to run, we see that our script was not able to complete all iterations due to reaching the time limit. 88 | ```bash 89 | INFO[0029] [VU: 1, iteration: 25] Starting iteration... source=console 90 | INFO[0030] [VU: 1, iteration: 26] Starting iteration... source=console 91 | 92 | running (0m30.9s), 0/1 VUs, 27 complete and 0 interrupted iterations 93 | k6_workshop ✗ [====>---------------------------------] 1 VUs 30.9s/30s 027/200 shared iters 94 | ``` 95 | Our script was only able to complete 27 iterations within the allowable 30-second timeframe. 96 | 97 | ### Change the concurrency 98 | So far, our test has only utilized a single virtual user, or VU. We can update the `vus` option to increase the number of requests being performed simultaneously. Let's change the `vus` to simulate 10 users. Once again, we'll update the `options` section of our test script: 99 | ```js 100 | export const options = { 101 | scenarios: { 102 | k6_workshop: { 103 | executor: 'shared-iterations', 104 | iterations: 200, 105 | maxDuration: '30s', 106 | vus: 10, 107 | }, 108 | }, 109 | }; 110 | ``` 111 | Run the script again: 112 | ```bash 113 | k6 run test.js 114 | ``` 115 | The test should now perform the same number of iterations, but in much less time due to the number of concurrent requests. 116 | ```bash 117 | INFO[0020] [VU: 7, iteration: 17] Starting iteration... source=console 118 | ... 119 | INFO[0020] [VU: 2, iteration: 17] Starting iteration... 
source=console 120 | ... 121 | INFO[0020] [VU: 6, iteration: 19] Starting iteration... source=console 122 | INFO[0020] [VU: 1, iteration: 18] Starting iteration... source=console 123 | INFO[0021] [VU: 3, iteration: 19] Starting iteration... source=console 124 | INFO[0021] [VU: 5, iteration: 20] Starting iteration... source=console 125 | INFO[0021] [VU: 4, iteration: 20] Starting iteration... source=console 126 | INFO[0021] [VU: 8, iteration: 20] Starting iteration... source=console 127 | INFO[0021] [VU: 9, iteration: 20] Starting iteration... source=console 128 | INFO[0021] [VU: 10, iteration: 20] Starting iteration... source=console 129 | 130 | running (0m22.6s), 00/10 VUs, 200 complete and 0 interrupted iterations 131 | k6_workshop ✓ [======================================] 10 VUs 22.6s/30s 200/200 shared iters 132 | ``` 133 | Let's look at these results a bit more closely. The above results (trimmed for brevity) show the final report for each of the running VUs. 134 | 135 | > :point_up: The iteration counter is 0-based, meaning a count of 20 is _actually_ 21 iterations. 136 | 137 | The behavior of the `shared-iterations` executor may become apparent: _some VUs performed more tests than others_. This is the "shared" aspect in the executor name. As each VU completes a test iteration, it immediately reserves another while there are iterations remaining. If a VU continually gets quick responses from the service being tested, it's possible, as in this case, that the VU gets more than its _fair share_. 138 | 139 | ### Wrapping up 140 | With this exercise, you should see how to run a very basic test and how you can control the number of iterations and the concurrency, and even set time limits. Additionally, you see that the distribution of tests amongst VUs may not be _fair_. 
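The uneven split can be illustrated with a toy simulation in plain JavaScript (not k6 code; the per-iteration timings are assumed): each simulated VU claims the next iteration from the shared pool the moment it becomes free.

```js
// Simulate VUs pulling iterations from a shared pool: whichever VU frees up
// earliest reserves the next iteration, so faster VUs do more of the work.
function simulateSharedIterations(totalIterations, secondsPerIteration) {
  const counts = secondsPerIteration.map(() => 0);
  const freeAt = secondsPerIteration.map(() => 0); // time each VU is next free
  for (let i = 0; i < totalIterations; i++) {
    let vu = 0;
    for (let j = 1; j < freeAt.length; j++) {
      if (freeAt[j] < freeAt[vu]) vu = j; // pick the earliest-free VU
    }
    counts[vu]++;
    freeAt[vu] += secondsPerIteration[vu];
  }
  return counts;
}

// A 1s-per-iteration VU and a 2s-per-iteration VU sharing 30 iterations:
console.log(simulateSharedIterations(30, [1, 2])); // → [ 20, 10 ]
```

The faster VU ends up with twice the iterations; nothing guarantees a _fair_ split, only that the pool is drained as quickly as possible.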
141 | -------------------------------------------------------------------------------- /Modules/III-k6-Intermediate/09-Workload-modeling-with-scenarios.md: -------------------------------------------------------------------------------- 1 | # Workload modeling with scenarios 2 | 3 | In [Workload modeling](03-Workload-modeling.md), we mentioned that one of the challenges is that user flows can be complex. A single test may need to cover multiple end-to-end flows simultaneously. For example, in a storefront application, we may have several members browsing products while many others (hopefully!) are adding items to their cart. To help organize these various flows, the k6 team has created [scenarios](https://k6.io/docs/using-k6/scenarios/). 4 | 5 | With _scenarios_, script authors may create distinct flows having different execution patterns and environment settings. Scenarios can be executed in parallel or staggered using time offsets with the `startTime` option. 6 | 7 | By breaking down each of your flows into a different scenario, your overarching script can be more streamlined with logic that is easier to follow. From our storefront example, one scenario would be users populating a shopping cart and placing orders. Another scenario would represent other customers who may be searching for the availability of certain products. Each workflow could be complex; keeping them separate makes script maintenance easier. 8 | 9 | ## Defining scenarios 10 | 11 | Each _scenario_ includes the same [elements of a workload model](03-Workload-modeling.md#Elements-of-a-workload-model) in addition to the `executor`, `startTime`, and optionally `env` variables and `tags` exclusive to the _scenario_. Check the complete list of scenario options in the [documentation](https://k6.io/docs/using-k6/scenarios/). 12 | 13 | Save the following script as `playground.js`. 
14 | 15 | ```javascript 16 | import { sleep } from 'k6'; 17 | 18 | export const options = { 19 | scenarios: { 20 | // Scenario 1: Users browsing through our product catalog 21 | users_browsing_products: { 22 | executor: 'shared-iterations', // name of the executor to use 23 | env: { WORKFLOW: 'browsing' }, // environment variable for the workflow 24 | 25 | // executor-specific configuration 26 | vus: 3, 27 | iterations: 10, 28 | maxDuration: '10s', 29 | }, 30 | // Scenario 2: Users adding items to their cart and checking out 31 | users_buying_products: { 32 | executor: 'per-vu-iterations', // name of the executor to use 33 | exec: 'usersBuyingProducts', // js function for the purchasing workflow 34 | env: { WORKFLOW: 'buying' }, // environment variable for the workflow 35 | startTime: '2s', // delay start of purchasing workflow 36 | 37 | // executor-specific configuration 38 | vus: 3, 39 | iterations: 1, 40 | maxDuration: '10s', 41 | }, 42 | }, 43 | }; 44 | 45 | export default function () { 46 | // Model your workflow for searching through products 47 | console.log(`[VU: ${__VU}, iteration: ${__ITER}, workflow: ${__ENV.WORKFLOW}] Just looking!!!`); 48 | sleep(1); 49 | } 50 | 51 | export function usersBuyingProducts() { 52 | // Model your workflow for adding items to cart and checking out 53 | console.log(`[VU: ${__VU}, iteration: ${__ITER}, workflow: ${__ENV.WORKFLOW}] CHA-CHING!!!`); 54 | sleep(1); 55 | } 56 | ``` 57 | 58 | Running the script as `k6 run playground.js` will produce results similar to the following: 59 | 60 | ```plain 61 | $ k6 run playground.js 62 | 63 | /\ |‾‾| /‾‾/ /‾‾/ 64 | /\ / \ | |/ / / / 65 | / \/ \ | ( / ‾‾\ 66 | / \ | |\ \ | (‾) | 67 | / __________ \ |__| \__\ \_____/ .io 68 | 69 | execution: local 70 | script: playground.js 71 | output: - 72 | 73 | scenarios: (100.00%) 2 scenarios, 6 max VUs, 42s max duration (incl. 
graceful stop): 74 | * users_browsing_products: 10 iterations shared among 3 VUs (maxDuration: 10s, gracefulStop: 30s) 75 | * users_buying_products: 1 iterations for each of 3 VUs (maxDuration: 10s, exec: usersBuyingProducts, startTime: 2s, gracefulStop: 30s) 76 | 77 | INFO[0000] [VU: 4, iteration: 0, workflow: browsing] Just looking!!! source=console 78 | INFO[0000] [VU: 1, iteration: 0, workflow: browsing] Just looking!!! source=console 79 | ... 80 | INFO[0002] [VU: 1, iteration: 2, workflow: browsing] Just looking!!! source=console 81 | INFO[0002] [VU: 5, iteration: 0, workflow: buying] CHA-CHING!!! source=console 82 | INFO[0003] [VU: 4, iteration: 3, workflow: browsing] Just looking!!! source=console 83 | 84 | running (04.0s), 0/6 VUs, 13 complete and 0 interrupted iterations 85 | users_browsing_products ✓ [======================================] 3 VUs 04.0s/10s 10/10 shared iters 86 | users_buying_products ✓ [======================================] 3 VUs 01.0s/10s 3/3 iters, 1 per VU 87 | ``` 88 | 89 | Viewing the results, you should notice that users are browsing products and then, after an initial delay, other users begin making purchases. 90 | 91 | ## Configuration options 92 | 93 | Our example used a minimal number of configuration options for the _scenarios_, but the available options include the following: 94 | 95 | > Remember that additional options may be required depending upon the `executor` specified. See [setting load profiles with executors](08-Setting-load-profiles-with-executors.md) for more about _executors_. 
96 | 97 | | Option | Description | Default | 98 | |----------------|---------------------------------------------------------------------------------------------------|--------------| 99 | | **`executor`** | Specific [executor](https://k6.io/docs/using-k6/scenarios/executors/) to "shape" traffic patterns | - (required) | 100 | | `env` | Environment variables specific to the scenario | `{}` | 101 | | `exec` | Name of the exported JS function to execute for the scenario | `"default"` | 102 | | `gracefulStop` | Time period to allow an iteration to complete before being forcefully terminated | `"30s"` | 103 | | `startTime` | Time offset within the test to begin the scenario | `"0s"` | 104 | | `tags` | Tags specific to the scenario | `{}` | 105 | 106 | Looking more closely at our example, we created two scenarios: `users_browsing_products` and `users_buying_products`. 107 | 108 | The idea for the `users_browsing_products` scenario is to create some _background activity_ for our test. We use the `shared-iterations` executor to simulate three users browsing the product catalog. We're limiting this activity to 10 iterations and ensuring the scenario lasts no longer than 10 seconds. The `env` option sets the `WORKFLOW` environment variable to `browsing`, which can be used within our JavaScript to provide some execution context. Because we are _not_ specifying the `exec` option, our scenario will execute whichever JavaScript function is the `default`. 109 | 110 | `users_buying_products` is our second scenario, simulating three different users making a purchase. This scenario uses the `per-vu-iterations` executor to control the activity. The scenario specifies `exec: 'usersBuyingProducts'` to call our purchasing logic for each scenario iteration. As with the previous scenario, we inject the `WORKFLOW` environment variable to provide the script with some context for the scenario. 
This time, however, we specify `startTime: '2s'` to define a delay before the scenario begins; this gives the `users_browsing_products` scenario a "head start" to create our background activity before users begin making purchases. 111 | 112 | ## Test your knowledge 113 | 114 | ### Question 1 115 | _True_ or _False_, you can think of scenarios as layers of activity within a more extensive test? 116 | 117 | ### Question 2 118 | _True_ or _False_, you must specify the `exec` option for each scenario? 119 | 120 | ### Question 3 121 | _True_ or _False_, the `exec` option determines traffic _shape_ for a scenario? 122 | 123 | ### Answers 124 | 1. True. Each scenario acts independently yet can be affected by other scenarios. From our example, we defined user browsing activity in the _background_, then had users making purchases in the _foreground_. 125 | 2. False. While explicitly configuring the `exec` option would be advisable, it is _not_ required. Multiple scenarios configured without the `exec` option result in each executing the same `default` function, which may be confusing. 126 | 3. False. The `executor` ultimately defines the pattern of the load profile for your scenario. The `exec` denotes which JavaScript function executes for each iteration of your scenario. 127 | -------------------------------------------------------------------------------- /Modules/III-k6-Intermediate/11-Creating-and-using-custom-metrics.md: -------------------------------------------------------------------------------- 1 | # Creating and using custom metrics 2 | 3 | In [Understanding k6 results](../II-k6-Foundations/03-Understanding-k6-results.md), you learned how to interpret the metrics that k6 reports by default. But what if you want to measure something else that k6 doesn't keep track of? 4 | 5 | You can create your own custom metrics within your script. 
Once you've defined what you want to measure, k6 collects the measurements during the test and reports your custom metrics along with the built-in ones in the end-of-test summary report. 6 | 7 | ## Types of metrics 8 | 9 | There are [four types](https://k6.io/docs/using-k6/metrics/#metric-types) of [custom metrics](https://k6.io/docs/using-k6/metrics/#custom-metrics) that you can create, and each one measures a different kind of data. 10 | 11 | ### Counter 12 | 13 | A counter is an integer that can be incremented whenever a certain event happens. Counters are good for keeping track of occurrences like: 14 | - the number of times a specific error occurred 15 | - user accounts that have access to Admin functions 16 | - how often a request is redirected 17 | - visits of a specific page, in case your script accesses randomly determined pages 18 | 19 | ### Gauge 20 | 21 | A gauge metric stores three numbers: the minimum value, the maximum value, and the _last_ value. Gauges are useful to watch in real-time for tracking things that are relevant only while the test is running. You could use gauges to measure: 22 | - the size of response bodies returned 23 | - a custom version of the response time (for example, the response time of three ungrouped URLs) 24 | - the number of unprocessed applications, in a test involving scenarios that create applications and process them simultaneously 25 | 26 | ### Rate 27 | 28 | A rate metric tracks a boolean value over the total number of measurements taken. At the end of the test, the rate expresses the percentage of all the measurements that were `true` and the percentage of them that were `false`. 
Rate metrics can be used to measure: 29 | - what percentage of the VUs performed the checkout process for a product 30 | - the proportion of pages where an ad appeared 31 | - the percentage of user accounts with insufficient permissions 32 | 33 | ### Trend 34 | 35 | A trend metric measures minimum, maximum, average, and percentile statistics. Unlike gauges, trends keep _all_ values and report the distribution of all the values at the end of the test. Trends are similar to how `http_req_duration` (response time) is measured in k6, and they can be used to track: 36 | - custom timers, such as one where the think time or sleep is added to the response time 37 | - the number of retries executed before the correct response is returned (such as when simulating polling behavior) 38 | - the number of active user accounts reported to an admin user account 39 | 40 | ## Example: Custom timers 41 | 42 | A common reason to use custom metrics is to set up a timer for specific actions that you specify. For example, k6 automatically removes sleep times in the calculation for response time, but it may sometimes be necessary to keep track of the total time that a VU spends on a part of the script, including the sleep time. 
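Such a sleep-inclusive timer is a natural fit for a Trend metric. Conceptually, a Trend aggregates its samples much like this toy helper (plain JavaScript, not the k6 API; k6's exact percentile method may differ):

```js
// A Trend keeps every recorded sample and reports distribution statistics.
// This uses a simple nearest-rank percentile for illustration.
function trendStats(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const pct = (p) => sorted[Math.ceil((p / 100) * sorted.length) - 1];
  return {
    min: sorted[0],
    max: sorted[sorted.length - 1],
    avg: sorted.reduce((sum, v) => sum + v, 0) / sorted.length,
    p90: pct(90),
  };
}

// Five assumed transaction durations, in milliseconds:
console.log(trendStats([1200, 3400, 2100, 4800, 2600]));
// → { min: 1200, max: 4800, avg: 2820, p90: 4800 }
```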
43 | 44 | Here's a short script you can use to create a custom timer: 45 | 46 | ```js 47 | import http from 'k6/http'; 48 | import { sleep, check } from 'k6'; 49 | import { Trend } from 'k6/metrics'; 50 | 51 | const transactionDuration = new Trend('transaction_duration'); 52 | 53 | export default function () { 54 | 55 | // This is a transaction 56 | let timeStart = Date.now(); 57 | let res = http.get('https://test.k6.io', {tags: { name: '01_Home' }}); 58 | check(res, { 59 | 'is status 200': (r) => r.status === 200, 60 | }); 61 | sleep(Math.random() * 5); 62 | let timeEnd = Date.now(); 63 | transactionDuration.add(timeEnd - timeStart); 64 | console.log('The transaction took', timeEnd - timeStart, 'ms to complete.'); 65 | 66 | // This is another transaction 67 | res = http.get('https://test.k6.io/about', {tags: { name: '02_About' }}); 68 | } 69 | ``` 70 | 71 | First, import the metric type required. 72 | 73 | ```js 74 | import { Trend } from 'k6/metrics'; 75 | ``` 76 | 77 | In this case, the Trend metric seems most appropriate, so that k6 will report full statistics on your new timer. 78 | 79 | Second, declare your timer. 80 | 81 | ```js 82 | const transactionDuration = new Trend('transaction_duration'); 83 | ``` 84 | 85 | Third, determine where to start and stop the timer. 86 | 87 | ```js 88 | let timeStart = Date.now(); 89 | let timeEnd = Date.now(); 90 | ``` 91 | 92 | Place these lines strategically in your script so that the execution of all code between `timeStart` and `timeEnd` is included in the elapsed time you want to measure. 93 | 94 | Finally, add the elapsed time to your timer. 
95 | 96 | ```js 97 | transactionDuration.add(timeEnd - timeStart); 98 | ``` 99 | 100 | When you run the script, you'll see something like this: 101 | 102 | ```plain 103 | checks.........................: 100.00% ✓ 1 ✗ 0 104 | data_received..................: 17 kB 5.3 kB/s 105 | data_sent......................: 621 B 194 B/s 106 | http_req_blocked...............: avg=320.48ms min=15µs med=320.48ms max=640.95ms p(90)=576.86ms p(95)=608.9ms 107 | http_req_connecting............: avg=59.06ms min=0s med=59.06ms max=118.12ms p(90)=106.31ms p(95)=112.21ms 108 | http_req_duration..............: avg=221.24ms min=128.58ms med=221.24ms max=313.89ms p(90)=295.36ms p(95)=304.63ms 109 | { expected_response:true }...: avg=128.58ms min=128.58ms med=128.58ms max=128.58ms p(90)=128.58ms p(95)=128.58ms 110 | http_req_failed................: 50.00% ✓ 1 ✗ 1 111 | http_req_receiving.............: avg=71µs min=55µs med=71µs max=87µs p(90)=83.8µs p(95)=85.4µs 112 | http_req_sending...............: avg=546µs min=12µs med=546µs max=1.08ms p(90)=973.2µs p(95)=1.02ms 113 | http_req_tls_handshaking.......: avg=258.3ms min=0s med=258.3ms max=516.6ms p(90)=464.94ms p(95)=490.77ms 114 | http_req_waiting...............: avg=220.62ms min=127.42ms med=220.62ms max=313.83ms p(90)=295.19ms p(95)=304.51ms 115 | http_reqs......................: 2 0.623378/s 116 | iteration_duration.............: avg=3.2s min=3.2s med=3.2s max=3.2s p(90)=3.2s p(95)=3.2s 117 | iterations.....................: 1 0.311689/s 118 | transaction_duration...........: avg=2893 min=2893 med=2893 max=2893 p(90)=2893 p(95)=2893 119 | vus............................: 1 min=1 max=1 120 | vus_max........................: 1 min=1 max=1 121 | ``` 122 | 123 | Your custom metric, `transaction_duration`, is reported along with the other built-in k6 metrics. 124 | 125 | ## Test your knowledge 126 | 127 | ### Question 1 128 | 129 | In which of the following situations might it be appropriate to use a custom metric in your k6 script? 
130 | 131 | - [ ] A: When you want to know the response time of a certain URL 132 | 133 | - [ ] B: When you are asked to report how many requests were executed in total during a test 134 | 135 | - [ ] C: When you want to measure something that the built-in metrics don't cover 136 | 137 | ### Question 2 138 | 139 | Which type of custom metric would be best for tracking how many opened forms had a certain checkbox ticked? 140 | 141 | - [ ] A: Trend 142 | 143 | - [ ] B: Counter 144 | 145 | - [ ] C: Gauge 146 | 147 | ### Question 3 148 | 149 | How can you get a report on custom metrics after a test? 150 | 151 | - [ ] A: All custom metrics are displayed in the end-of-test summary report. 152 | 153 | - [ ] B: You have to log in to k6 Cloud to see the results. 154 | 155 | - [ ] C: Only rate and trend metric results are shown in the end-of-test summary report. 156 | 157 | ### Answers 158 | 159 | 1. **C.** Both A and B describe metrics that already exist by default. Custom metrics are for creating and measuring things beyond what comes with k6 out of the box. 160 | 161 | 2. **B.** A counter would be best for tracking the number of occurrences of an event during the test. 162 | 163 | 3. **A.** Custom metrics are displayed in the end-of-test summary report, although they are *also* available in k6 Cloud if you use it. C is false. 164 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Additional-protocols.md: -------------------------------------------------------------------------------- 1 | There are multiple other ways to test with k6 beyond the HTTP protocol. However, with the exception of WebSockets and gRPC, these additional use cases are not included in the default k6 binary and have to be added to a bespoke binary created with the xk6 tool.
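As a sketch of what that looks like in practice (this assumes you have Go and the `xk6` tool installed, and uses the community `xk6-kafka` extension's module path as an illustrative example):

```shell
# Build a bespoke k6 binary that bundles an extension.
# The module path below is an example; substitute the extension you need.
xk6 build --with github.com/mostafa/xk6-kafka@latest

# xk6 writes the custom binary to ./k6 in the current directory;
# run your tests with that binary instead of the stock k6 install.
```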
2 | 3 | Some additional clients available as xk6 extensions are: 4 | 5 | - [Kafka]() 6 | - [SQL]() 7 | - [Prometheus Remote Write]() -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Advanced-curriculum.md: -------------------------------------------------------------------------------- 1 | # Advanced curriculum 2 | 3 | These are more advanced topics that we could consider creating modules around for further study. 4 | 5 | ### Performance testing principles 6 | 7 | #### Theory 8 | - [Performance testing methodologies](Performance-testing-methodologies.md) 9 | - [Performance automation](Performance-automation.md) 10 | - [The automation pyramid](The-automation-pyramid.md) 11 | - [Performance test cases](Performance-test-cases.md) 12 | 13 | #### Planning 14 | - [Clarifying testing criteria](Clarifying-testing-criteria.md) 15 | - What to test: Determining scope 16 | - Where to test: environments 17 | - Monitoring, instrumentation, and observability 18 | - Who does the testing?
19 | - The architecture of a load testing stack: diagram and explanation 20 | 21 | #### Writing load testing scripts 22 | - [Choosing a load testing tool](Choosing-a-load-testing-tool.md) 23 | - [Load tests as code and shift-left testing](Load-tests-as-code-and-shift-left-testing.md) 24 | 25 | #### Executing load tests 26 | - [Parameters of a load test](Parameters-of-a-load-test.md) 27 | - On-premise vs cloud execution 28 | - Analyzing load testing results and reporting 29 | - Important performance testing metrics 30 | - Continuous load testing 31 | - [What is continuous load testing and why should you do it](What-is-continuous-load-testing-and-why-should-you-do-it.md) 32 | - CI/CD tools 33 | 34 | ### k6 OSS 35 | 36 | #### Testing framework 37 | - Handling cookies 38 | - Source control (git) 39 | - Connection reuse options 40 | - [Caching options](Caching-options.md) 41 | - [Additional protocols](Additional-protocols.md) 42 | - [Modular scripting](Modular-scripting.md) - Framework for a complex test (multiple scripts calling each other) 43 | - Mocks and stubs 44 | - Creating custom summary output 45 | - [Using a proxy with k6](Using-a-proxy-with-k6.md) 46 | 47 | #### Extending k6 48 | - xk6 extensions 49 | - [How to use xk6-browser](How-to-use-xk6-browser.md) 50 | - [How to use the k6 operator for Kubernetes](How-to-use-the-k6-operator-for-Kubernetes.md) 51 | - [How to do chaos testing with k6](How-to-do-chaos-testing-with-k6.md) 52 | - [How to make a k6 extension](How-to-make-a-k6-extension.md) 53 | - Goja debugger 54 | - k6 jslib 55 | - Custom summary output 56 | - Custom end-of-test summary (`handleSummary`) 57 | 58 | #### Continuous testing 59 | - CI/CD tools 60 | - [GitHub Actions](GitHub-Actions.md) 61 | - [GitLab](GitLab.md) 62 | - [Azure DevOps](Azure-DevOps.md) 63 | - [Circle CI](Circle-CI.md) 64 | 65 | #### Observability 66 | - [Observability and performance testing](Observability-and-performance-testing.md) 67 | - [Integrating k6 with Grafana and 
Prometheus](Integrating-k6-with-Grafana-and-Prometheus.md) 68 | - [k6 and Netdata](k6-and-Netdata.md) 69 | - [k6 and New Relic](k6-and-New-Relic.md) 70 | 71 | #### [k6 Use Cases](k6-Use-Cases.md) 72 | 73 | ### k6 Cloud 74 | 75 | #### Foundations 76 | - [k6 OSS vs k6 Cloud](k6-OSS-vs-k6-Cloud.md) differences and use cases 77 | - [Using k6 OSS with k6 Cloud](Using-k6-OSS-with-k6-Cloud.md) 78 | - [Overview of k6 Cloud](Overview-of-k6-Cloud.md) ==Mark== 79 | - [The k6 Cloud interface](The-k6-Cloud-interface.md) ==Mark== 80 | - [Creating a script using the Test Builder](Creating-a-script-using-the-Test-Builder.md) 81 | - Choosing different availability zones 82 | - [Choosing a load profile on k6 Cloud](Choosing-a-load-profile-on-k6-Cloud.md) 83 | - [Analyzing results on k6 Cloud](Analyzing-results-on-k6-Cloud.md) ==Mark== 84 | - [Continuous load testing with k6 Cloud](Continuous-load-testing-with-k6-Cloud.md) 85 | 86 | #### Intermediate 87 | - Custom dashboards 88 | - Performance insights 89 | - Threshold list 90 | - Performance over time 91 | - Baseline tests 92 | - Test comparison 93 | - Test scheduling 94 | - Shareable links 95 | - PDF export 96 | ### End-to-end testing example 97 | 98 | Apply as much of the above as possible to a single example, and show what we've done. 99 | 100 | - QuickPizza? 
101 | 102 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Analyzing-results-on-k6-Cloud.md: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Azure-DevOps.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Best-practices-for-designing-realistic-k6-scripts.md: -------------------------------------------------------------------------------- 1 | # {{title}} 2 | 3 | 4 | 5 | ## Test your knowledge 6 | 7 | ### Question 1 8 | 9 | 10 | 11 | A: 12 | 13 | B: 14 | 15 | C: 16 | 17 | ### Question 2 18 | 19 | 20 | 21 | A: 22 | 23 | B: 24 | 25 | C: 26 | 27 | ### Question 3 28 | 29 | 30 | 31 | A: 32 | 33 | B: 34 | 35 | C: 36 | 37 | ### Answers 38 | 39 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Caching-options.md: -------------------------------------------------------------------------------- 1 | # Caching options 2 | 3 | 4 | 5 | ## Test your knowledge 6 | 7 | ### Question 1 8 | 9 | 10 | 11 | A: 12 | 13 | B: 14 | 15 | C: 16 | 17 | ### Question 2 18 | 19 | 20 | 21 | A: 22 | 23 | B: 24 | 25 | C: 26 | 27 | ### Question 3 28 | 29 | 30 | 31 | A: 32 | 33 | B: 34 | 35 | C: 36 | 37 | ### Answers 38 | 39 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Choosing-a-load-profile-on-k6-Cloud.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Choosing-a-load-testing-tool.md: -------------------------------------------------------------------------------- 1 | 2 | 
3 | ### JMeter 4 | 5 | 6 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Circle-CI.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Continuous-load-testing-with-k6-Cloud.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Creating-a-script-using-the-Test-Builder.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/GitHub-Actions.md: -------------------------------------------------------------------------------- 1 | # GitHub Actions 2 | 3 | 4 | 5 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/GitLab.md: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/How-to-do-chaos-testing-with-k6.md: -------------------------------------------------------------------------------- 1 | # How to do chaos testing with k6 2 | 3 | 4 | 5 | 6 | 7 | 8 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/How-to-make-a-k6-extension.md: -------------------------------------------------------------------------------- 1 | # How to make a k6 extension 2 | 3 | 4 | 5 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/How-to-use-the-k6-operator-for-Kubernetes.md: -------------------------------------------------------------------------------- 1 | # How to use the 
k6 operator for Kubernetes 2 | 3 | 4 | 5 | 6 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/How-to-use-xk6-browser.md: -------------------------------------------------------------------------------- 1 | ## Frontend testing 2 | 3 | 4 | 5 | 6 | ## How to use xk6-browser 7 | 8 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Integrating-k6-with-Grafana-and-Prometheus.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | # Integrating k6 with Grafana and Prometheus 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | ## Prometheus Remote Write 13 | 14 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Load-tests-as-code-and-shift-left-testing.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Modular-scripting.md: -------------------------------------------------------------------------------- 1 | # {{title}} 2 | 3 | 4 | 5 | ## Test your knowledge 6 | 7 | ### Question 1 8 | 9 | 10 | 11 | A: 12 | 13 | B: 14 | 15 | C: 16 | 17 | ### Question 2 18 | 19 | 20 | 21 | A: 22 | 23 | B: 24 | 25 | C: 26 | 27 | ### Question 3 28 | 29 | 30 | 31 | A: 32 | 33 | B: 34 | 35 | C: 36 | 37 | ### Answers 38 | 39 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Observability-and-performance-testing.md: -------------------------------------------------------------------------------- 1 | 2 | ### What does observability have to do with performance testing? 
3 | 4 | 5 | ### Trends in observability 6 | 7 | 8 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Overview-of-k6-Cloud.md: -------------------------------------------------------------------------------- 1 | # Overview of k6 Cloud 2 | 3 | 4 | 5 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Parameters-of-a-load-test.md: -------------------------------------------------------------------------------- 1 | A test parameter describes the situation under which traffic is generated during the load test. Below are some common test parameters: 2 | 3 | - Transaction distribution 4 | - User flows 5 | - Ramp-up and ramp-down periods 6 | - Steady state 7 | - Duration 8 | - Concurrency 9 | - Throughput 10 | - Load profile 11 | 12 | We'll discuss each one in more detail below. 13 | 14 | ## Transaction distribution 15 | 16 | A [transaction](Performance-Testing-Terminology.md#Transaction) is a grouping of requests or steps that are triggered by a single user action. For example, a login transaction might consist of the following steps: 17 | - Type in username 18 | - Type in password 19 | - Click LOG IN button. 20 | 21 | The same transaction might consist of the following underlying requests: 22 | - HTTP POST to the authentication server with the login credentials 23 | - A redirect to the authenticated user's account page 24 | - HTTP GET for the HTML and embedded resources for the user's account page 25 | 26 | In load testing, transactions are used to organize scripts in a user-oriented way, where one transaction corresponds to one user action. 27 | 28 | A load test typically consists of multiple transactions, depending on the situation that you want to simulate. During test execution, you can set how often each transaction is executed, usually using the number of transactions per second or percentages. 
For example, consider a theoretical transaction distribution for an ecommerce site: 29 | 30 | - Home page: 100% 31 | - View Product List: 50% 32 | - View Product Page: 25% 33 | - Add to Cart: 15% 34 | - Checkout: 10% 35 | - Contact Us Page: 3% 36 | - (exit) 47% 37 | 38 | In this example, all of the users accessing a site go to the home page, but only 10% of them end up purchasing a product. Each line above (except for the 47% that exited the application after viewing the home page) is a transaction. 39 | 40 | The transaction distribution for a load test describes: 41 | - which transactions are included in the test 42 | - how often the included transactions are executed 43 | 44 | In k6, the best way to represent transactions is by using [groups](https://k6.io/docs/using-k6/tags-and-groups#groups). Groups combine multiple requests together and calculate a combined response time for those requests as well. 45 | 46 | ## User flows 47 | 48 | A user flow is the description of the path through an application that you're trying to test. Here are some examples of user flows you could create based on the list of transactions above: 49 | 50 | - Just browsing: Home Page > View Product List > View Product Page > View Product List > View Product Page 51 | - Clicking product link in email: View Product Page > Add to Cart > Checkout 52 | - Returning to checkout: Home Page > View Cart > Checkout 53 | 54 | User flows are sequences of transactions that represent actions a user might take as they use your application. In k6, you can [create multiple user flows as part of scenarios and run them in the same script](https://k6.io/docs/using-k6/scenarios/). 55 | 56 | ## Ramp-up and ramp-down periods 57 | 58 | Ramp-up is the amount of time it takes for a load test to go from 0 users to the desired number of users, and is found at the beginning of a test. Ramp-down is the time it takes for a load test to go from the desired number of users back down to 0, and is found at the end of a test. 
Ramp-up and ramp-down periods simulate real traffic against an application. In many production environments, users do not start accessing a site simultaneously, and instead gradually trickle into the application, and away from it, over some time. 59 | 60 | ![](load_profile-ramp-up.png.png) 61 | _Load test with ramp-up period highlighted_ 62 | 63 | In the load test above, the highlighted portions show the ramp-up period, which is 5 minutes. During the first 5 minutes of the test, the script gradually increases the number of virtual users from 0 to 20. 64 | 65 | The same test also has a ramp-down period, but it is significantly shorter, at 1 minute. Ramp-up periods and ramp-down periods do not have to be the same length, and it is valid to have one, none, or both. 66 | 67 | ```ad-tip 68 | Ramp-up and ramp-down periods are best used for applications that do not have a hard start or hard end to production traffic. In some situations, such as when simulating traffic for a sale that begins at a specific time, it may be more realistic to remove ramp-up and/or ramp-down periods. 69 | ``` 70 | 71 | ## Steady state 72 | 73 | The steady state is any period of time during the load test where the number of virtual users is constant. The steady state does not include ramp-up or ramp-down periods. The unchanging load during a steady state makes it an ideal time period for analyzing how well an application responds to the conditions created by a load test. 74 | 75 | ![Steady state in a load test](images/load_profile-steady_state.png) 76 | _Steady state during a load test_ 77 | 78 | ## Duration 79 | 80 | The duration of a load test is how long it takes to complete, from beginning to end. It includes the ramp-up, ramp-down, and steady state portions of a load test. 81 | 82 | ## Concurrency 83 | 84 | User concurrency is the number of users that are accessing an application at a given point in time. In load tests, this is measured in terms of the total virtual users. 
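In k6, the ramp-up, steady state, and ramp-down periods described above are typically expressed with the `stages` option. Here is a minimal sketch matching the profile in the screenshots (the durations and VU counts are illustrative, not recommendations):

```js
// Load profile: 5-minute ramp-up, steady state, 1-minute ramp-down
export const options = {
  stages: [
    { duration: '5m', target: 20 },  // ramp-up: 0 -> 20 VUs over 5 minutes
    { duration: '29m', target: 20 }, // steady state: hold 20 VUs
    { duration: '1m', target: 0 },   // ramp-down: 20 -> 0 VUs over 1 minute
  ],
};
```

k6 interpolates the VU count linearly within each stage, so reaching `target` gradually is what produces the ramp shape.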
85 | 86 | ## Throughput 87 | 88 | Throughput is a general term for the rate at which something is produced or processed. In the context of load testing, it is often used to refer to the rate at which requests are sent to the application server. A common measure of throughput is requests per second (rps), which is also what k6 measures. The higher the throughput of a test, the more concurrent requests the application server needs to process at any given time. 89 | 90 | ## Load profile 91 | 92 | A load profile describes the shape of the traffic generated over a certain amount of time. The load profile you decide on for your test is determined by the scenario you're trying to recreate. 93 | 94 | Load profiles are typically described by plotting the number of virtual users over time. 95 | 96 | ### Simple load profile 97 | 98 | ![Simple load profile for a load test](images/load_profile-no_ramp-up_or_ramp-down.png) 99 | _Simple load profile for a load test_ 100 | 101 | The simplest load profile for a load test involves the script executing a given number of virtual users and iterating on the script over a period of time. 102 | 103 | For example, in the screenshot above, the script started 20 VUs. When the end of the script is executed, each VU goes back to the beginning of the script and executes it again. The test ran in this way for 35 minutes, after which k6 terminated all the users. 104 | 105 | ```ad-note 106 | In k6, the `default` function is the part that is repeated throughout the test. You can nest other functions within this function or choose to iterate a different function instead by using [the exec option within a scenario](https://k6.io/docs/using-k6/scenarios/#common-options). 107 | ``` 108 | 109 | ### Constant load profile with ramps 110 | 111 | Sometimes, the load profile above may be too simple. If you are running thousands of VUs, it may not be realistic to start all of those VUs at the same time. 
Doing so may cause your application to respond in strange ways. Similarly, a hard stop for all VUs at the end of the test may not be realistic either. 112 | 113 | Below is another common load profile that includes a gradual ramp-up period at the beginning, a steady state in the middle, and a ramp-down at the end. 114 | 115 | ![](load_profile-constant.png.png) 116 | _Constant load profile with ramp-up and ramp-down_ 117 | 118 | You can also have more complex load profiles than those shown here. For example, you might want to design a stepped load profile where the number of VUs increases every 30 minutes. You can also define a load profile for every [scenario](https://k6.io/docs/using-k6/scenarios/). 119 | 120 | ## Test your knowledge 121 | 122 | ### Question 1 123 | 124 | What is the purpose of a ramp-up or ramp-down? 125 | 126 | A: It makes the increase or decrease in VUs more gradual, improving how realistic the test script is. 127 | B: It gives the operations team time to prepare for the high load generated by the load test. 128 | C: It is a shakeout period that can be used to spot any issues with the load test script or execution. 129 | D: All of the above. 130 | 131 | ### Question 2 132 | 133 | Which of the following is an example of concurrency? 134 | 135 | A: 300 requests/second 136 | B: Number of account credentials in test data used during the test 137 | C: 300 VUs 138 | D: A or C 139 | 140 | ### Question 3 141 | 142 | Which test parameter best describes a load test exhibiting a gradually increasing load, and then a sudden spike in VUs in the middle of the test? 
143 | 144 | A: Concurrency 145 | B: Transaction 146 | C: Load profile -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Performance-automation.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | - Synthetics and Organics 4 | - Why/when to automate for performance 5 | - Types of performance automated tests 6 | - Types of automations in performance 7 | - Non-automated performance tests 8 | 9 | 10 | ### Performance testing types 11 | 12 | //Define here content from introduction// 13 | 14 | The two main classifications of the mechanisms that trigger these scenarios are Organics and Synthetics. 15 | 16 | 17 | 18 | **_Organic_**_: This category is for the triggering mechanisms done by real-life people. These include developers debugging, QA manual tests, UAT, beta users, production users, and living organisms interacting with the tested system. A critical requirement is to have performance monitors in the points of interest while the organic use occurs._ 19 | 20 | 21 | 22 | **_Synthetic_**_: This category refers to any automated or computer process that triggers the action(s) of interest. Automations mimic real user interaction continuously, single-threaded, or in volumes (load testing). They generally include performance measurement mechanisms for the simulated actions (mainly response times). Also, the term commonly refers to the recurrent automated triggering of the processes to constantly measure and monitor their status._ 23 | 24 | 25 | 26 | As this workshop focuses on the k6 tool, we will concentrate on synthetics, although from the Grafana perspective, organic monitoring and instrumentation may be valuable material to produce. 27 | 28 | 29 | 30 | ### Frontend performance testing 31 | 32 | Frontend performance testing is concerned with the end-user experience of an application, usually involving a browser.
It verifies application performance on the interface level, measuring round-trip metrics that take into account how and when page elements appear on the screen. 33 | 34 | Frontend performance testing provides insights that backend performance testing does not, such as whether pages of the application are optimized to render quickly on a user's screen or how long it takes for a user to be able to interact with the UI elements of the application. Because it primarily measures a single user's experience of the system, frontend performance testing tends to be easier to carry out on a small scale. 35 | 36 | Some disadvantages of this type of performance testing are its dependency on fully integrated environments and cost of scaling. Frontend performance testing can only be done once the application code and infrastructure have been integrated with a user interface, so it begins later in a cycle than does backend performance testing. Tools to automate the frontend are also inherently more resource-intensive, so they can be costly to run at scale and are not suitable for high load tests. 37 | 38 | Frontend performance testing excels at identifying issues on a micro level, but does not expose issues in the underlying architecture of a system. 39 | 40 | ### Backend performance testing 41 | 42 | Backend performance testing targets the underlying application servers to verify the scalability, elasticity, availability, reliability, and responsiveness of a system as a whole. 43 | 44 | - *Scalability*: Can the system adjust to steadily increasing levels of demand? 45 | - *Elasticity*: Can the system conserve resources during periods of lower demand? 46 | - *Availability*: How resilient is the system against outages? 47 | - *Reliability*: Does the system respond consistently in different environmental conditions? 48 | - *Responsiveness*: How quickly does the system process and respond to requests? 
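These qualities are what backend load tests assert against. In k6, for example, responsiveness and reliability targets are commonly encoded as `thresholds` on the built-in metrics; a minimal sketch (the specific limits are illustrative, not recommendations):

```js
export const options = {
  thresholds: {
    http_req_duration: ['p(95)<500'], // responsiveness: 95% of requests under 500ms
    http_req_failed: ['rate<0.01'],   // reliability: fewer than 1% failed requests
  },
};
```

If a threshold is crossed, k6 marks the test as failed, which is what makes these qualities verifiable in an automated pipeline.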
49 | 50 | Backend testing is broader in scope than frontend performance testing, and it can be carried out at multiple stages in the development cycle. API testing can be used to target specific components or integrated components, allowing application teams more flexibility and higher chances of finding performance issues earlier. Backend testing is less resource-intensive than frontend performance testing, and is thus more suitable to generating high amounts of load. 51 | 52 | Some disadvantages of this type of testing are its complexity and its inability to test "the first mile" of user experience. Backend testing involves sending messages at the protocol level, so it requires some knowledge of the syntax and content of those messages, as well as of networking principles. Backend performance testing verifies the foundation of an application rather than the highest layer of it that a user ultimately sees. 53 | 54 | Backend performance testing excels at identifying issues in the application code and infrastructure, but does not expose issues in the application interface. 55 | 56 | 57 | 58 | 59 | ## Test your knowledge 60 | 61 | ### Question 1 62 | 63 | Which type of performance testing does k6 excel in? 64 | 65 | A: Backend testing 66 | B: Frontend testing 67 | C: Usability testing 68 | 69 | ### Question 2 70 | 71 | Which of the following is an advantage of backend performance testing? 72 | 73 | A: It provides metrics like Time To Interactive (TTI) that measure when users can first interact with the application. 74 | B: It simulates users by driving real browsers to test the application. 75 | C: It can target application components before they're integrated. 76 | D: A and B. 77 | 78 | ### Question 3 79 | 80 | Which of the following statements is true? 81 | 82 | A: Performance testing is generating high user load or traffic against application servers. 83 | B: Performance testing is an activity that requires specialized expertise to carry out. 
84 | C: Performance testing requires a production-like environment. 85 | D: Observability and performance testing are complementary approaches to improving application quality. -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Performance-test-cases.md: -------------------------------------------------------------------------------- 1 | - What is a performance test case 2 | - Types of performance test cases 3 | - Differences with other test cases -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/The-automation-pyramid.md: -------------------------------------------------------------------------------- 1 | - What is the pyramid? 2 | - Past and present (Monolith, ice-cream cone, modern) 3 | - Tiers 4 | - Unit 5 | - Service 6 | - Front end 7 | - How tools map on tiers 8 | - How perf tests map on tiers 9 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/The-k6-Cloud-interface.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Using-a-proxy-with-k6.md: -------------------------------------------------------------------------------- 1 | # Using a proxy with k6 2 | 3 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/Using-k6-OSS-with-k6-Cloud.md: -------------------------------------------------------------------------------- 1 | k6 Cloud is an optional service and load testing platform that provides the best experience for running load tests with k6. In this section, you will learn how to integrate k6 OSS with k6 Cloud.
2 | 3 | ## Using k6 OSS with k6 Cloud 4 | 5 | [![Week of Load testing day 4: Streaming results to the cloud and stepped load profiles on k6](../../images/week-of-testing-day4-youtube.png)](https://www.youtube.com/embed/jDmMmc75RRM) 6 | 7 | ### Sign up for k6 Cloud 8 | 9 | k6 Cloud is free (no credit card required) for the first 50 cloud tests, so that's what we're going to use. You will not need a paid account for any of these exercises. 10 | 11 | To start, [register for an account here](https://app.k6.io/account/register). 12 | 13 | Once you're logged in, go to [the API token page](https://app.k6.io/account/api-token) and copy the token there. 14 | 15 | ### Allow k6 OSS to access k6 Cloud 16 | 17 | On your local machine, where you have k6 OSS installed, enter the following command, replacing `<YOUR_API_TOKEN>` with the token you just copied: 18 | 19 | ```plain 20 | k6 login cloud --token <YOUR_API_TOKEN> 21 | ``` 22 | 23 | This will tell your local installation of k6 OSS to use your k6 Cloud account, unlocking additional features. When you use `k6 run test.js`, you'll still be starting tests locally, not on k6 Cloud, but there are other execution options available to you. 24 | 25 | ## Send output to cloud 26 | 27 | The first way to integrate k6 OSS with k6 Cloud is to run your test locally, but send your results to k6 Cloud. The main advantage of this is having all your test runs in one place, even shakeout or debugging test runs. 28 | 29 | To do this, use the following command: 30 | 31 | ```plain 32 | k6 run test.js -o cloud 33 | ``` 34 | 35 | The `-o` is the same flag we used to output results to different files, but this time, k6 is instead sending the results to k6 Cloud. You'll then see the test running on k6 Cloud - more on how to view those results in k6 Cloud in the next section! 36 | 37 | ## Run cloud test 38 | 39 | Another option available once you've linked your k6 Cloud account with k6 OSS is remotely starting an execution on k6 Cloud.
40 | 41 | Try this: 42 | 43 | ```plain 44 | k6 cloud test.js 45 | ``` 46 | 47 | You'll see output like this: 48 | 49 | ```plain 50 | nic@sopirulino k6-scripts % k6 cloud test.js 51 | 52 | /\ |‾‾| /‾‾/ /‾‾/ 53 | /\ / \ | |/ / / / 54 | / \/ \ | ( / ‾‾\ 55 | / \ | |\ \ | (‾) | 56 | / __________ \ |__| \__\ \_____/ .io 57 | 58 | execution: cloud 59 | script: test.js 60 | output: https://app.k6.io/runs/1219824 61 | 62 | scenarios: (100.00%) 1 scenario, 100 max VUs, 4m30s max duration (incl. graceful stop): 63 | * default: Up to 100 looping VUs for 4m0s over 3 stages (gracefulRampDown: 30s, gracefulStop: 30s) 64 | 65 | 66 | Run [--------------------------------------] Initializing 67 | ``` 68 | 69 | Notice the line `execution: cloud`, indicating that while the test was triggered via k6 OSS, the test is actually being executed on k6 Cloud. You'll also see a progress bar to let you know what's happening, along with a [test status code](https://k6.io/docs/cloud/cloud-faq/general-questions/#test-status-codes). 70 | 71 | In this situation, k6 will: 72 | - send test artifacts (your test script and test data, if any) to k6 Cloud 73 | - provision instances of k6 in our AWS account 74 | - copy over your test artifacts to each AWS instance 75 | - start the test, sending results back to k6 Cloud 76 | 77 | ### `-o cloud` or `k6 cloud`? 78 | 79 | While both commands utilize an integration between k6 OSS and k6 Cloud, they do fundamentally different things. 80 | 81 | `k6 run test.js -o cloud` instructs k6 to run **locally**, but send test results **remotely** to k6 Cloud. 82 | 83 | `k6 cloud test.js` instructs k6 to run and store results **remotely** to k6 Cloud. 84 | 85 | These commands let you easily switch between execution modes depending on the type of test you're doing. 86 | 87 | - Use `k6 run test.js` while you're writing or debugging a script. 
88 | - Use `k6 run test.js -o cloud` when you're ready for shakeout tests, or longer tests that you want to run on your own infrastructure. 89 | - Use `k6 cloud test.js` when you want to utilize k6 Cloud infrastructure to run your test, especially if you'd like to ramp up to full-scale tests. 90 | 91 | ## Test your knowledge 92 | 93 | ### Question 1 94 | 95 | Which of the following do you need to be able to link your k6 Cloud account for use in k6 OSS? 96 | 97 | A: The receipt number for your purchase of a k6 Cloud plan 98 | 99 | B: Your k6 Cloud account number 100 | 101 | C: Your API token 102 | 103 | ### Question 2 104 | 105 | What can you do for free on k6 Cloud? 106 | 107 | A: Nothing; using k6 Cloud requires a payment plan 108 | 109 | B: Run a test on 50 cloud load generators 110 | 111 | C: Run 50 tests on cloud infrastructure 112 | 113 | ### Question 3 114 | 115 | Which of the following commands would NOT send k6 test results to k6 Cloud? 116 | 117 | A: `k6 run script.js` 118 | 119 | B: `k6 cloud script.js` 120 | 121 | C: `k6 run script.js -o cloud` 122 | 123 | ### Answers 124 | 125 | 1. C. 126 | 2. C. 127 | 3. A. -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/What-is-continuous-load-testing-and-why-should-you-do-it.md: -------------------------------------------------------------------------------- 1 | 2 | # What is continuous load testing and why should you do it? 3 | 4 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/k6-OSS-vs-k6-Cloud.md: -------------------------------------------------------------------------------- 1 | So you're ready to create your first k6 load testing script! Before you do that, let's talk about your options. 
2 | 3 | 4 | _Video overview of what k6 is and the differences between k6 OSS and k6 Cloud_ 5 | 6 | ## k6 Open Source Software (OSS) vs k6 Cloud 7 | 8 | There are two ways for you to start creating your test script from scratch: k6 OSS and k6 Cloud. k6 OSS is a free, open-source tool for generating load. k6 Cloud is a Software as a Service (SaaS) platform that uses the same tool, k6 OSS, but also adds features such as on-demand cloud infrastructure, real-time dashboards, and integrations. You can view a more detailed comparison of k6 OSS and k6 Cloud [on this page](https://k6.io/oss-vs-cloud/). 9 | 10 | k6 OSS is better for you in these situations: 11 | - You want to try out k6 and learn how to use it. 12 | - You want to generate load locally, or within a private network. 13 | - You don't need distributed execution, or you're willing to set up the [k6 operator](https://github.com/grafana/k6-operator) to run distributed tests. 14 | - You don't mind scripting in JavaScript. 15 | 16 | k6 Cloud is better for you in these situations: 17 | - You want the most friction-free experience in running a load test (no coding required). 18 | - You need distributed execution, either due to the number of users or the need to generate load from various geographical regions. 19 | - You want real-time dashboards showing load testing results as the test is running. 20 | - You want to share k6 results with your team. 21 | 22 | In this workshop, you'll learn how to use both k6 OSS and k6 Cloud. You will get started with your first k6 script on k6 OSS, but you'll also learn how to incorporate k6 Cloud later on.
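As a preview, here is a minimal sketch of the kind of script you'll be writing, using k6's standard `k6/http` and `k6` modules. The target URL `https://test.k6.io` is k6's public demo site; substitute any endpoint you have permission to load test. Note that k6 scripts are executed by the k6 runtime (`k6 run test.js`), not by Node.js.

```javascript
// test.js: a minimal k6 script (run it with `k6 run test.js`)
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 1,          // one virtual user
  duration: '30s', // run iterations for 30 seconds
};

export default function () {
  // Each virtual user loops over this function for the test duration.
  const res = http.get('https://test.k6.io'); // demo site; swap in your own target
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```

The same file works with every execution mode covered in this workshop: `k6 run test.js` locally, `k6 run test.js -o cloud` to stream results to k6 Cloud, or `k6 cloud test.js` to run on cloud infrastructure.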
23 | 24 | 25 | 26 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/k6-Use-Cases.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | ### Microservices testing 4 | 5 | ### Benchmarking WordPress hosting providers 6 | 7 | 8 | 9 | ### Emergency services 10 | 11 | 12 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/k6-and-Netdata.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | -------------------------------------------------------------------------------- /Modules/XX-Future-Ideas/k6-and-New-Relic.md: -------------------------------------------------------------------------------- 1 | # k6 and New Relic 2 | 3 | 4 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Welcome to k6-Learn 2 | 3 | This repo contains resources for: 4 | - creating slide presentations 5 | - giving a workshop 6 | - speaking about k6 7 | - learning about k6 8 | 9 | ## Other getting-started resources 10 | 11 | You can also use the following resources to learn or teach k6. You can copy and modify these projects to suit your use case and audience. They are intended to be a jumping-off point, so you can remove content that wouldn't appeal to your audience or add new content to show off your own style. 12 | 13 | - [k6 starter slide deck](https://docs.google.com/presentation/d/1gviRg7RTzT0Y2_5WPBADyn5xpa96PIqWivGAThNW6pM/edit?usp=sharing) 14 | - [k6 OSS Workshop](https://github.com/grafana/k6-oss-workshop) 15 | 16 | ## What if I want something more hands-on? 17 | 18 | Consider running a workshop for k6. Below is an outline of what that workshop could look like, as well as modules you could use for each topic.
Feel free to take these and include the parts most relevant to you! 19 | 20 | We have also created built-in slides for you, which you are free to edit. To customize the slides, please fork this repo and make edits according to how you want the workshop to be structured. 21 | 22 | The slides are created using [reveal.js](https://revealjs.com/) and are all found in the [slides](./slides/) folder. 23 | 24 | ### Running the slides 25 | 26 | In your terminal, run the following command: 27 | 28 | ``` 29 | npm install 30 | npm run slides 31 | ``` 32 | 33 | ### I: Performance testing principles 34 | 35 | - [Introduction to Performance Testing](Modules/I-Performance-testing-principles/01-Introduction-to-Performance-Testing.md) 36 | - [Frontend vs. backend performance testing](Modules/I-Performance-testing-principles/02-Frontend-vs-backend-performance-testing.md) 37 | - [Load testing](Modules/I-Performance-testing-principles/03-Load-Testing.md) 38 | - [High-level overview of the load testing process](Modules/I-Performance-testing-principles/04-High-level-overview-of-the-load-testing-process.md) 39 | 40 | ### II: k6 Foundations 41 | 42 | - [Getting started with k6 OSS](Modules/II-k6-Foundations/01-Getting-started-with-k6-OSS.md) 43 | - [The k6 CLI](Modules/II-k6-Foundations/02-The-k6-CLI.md) 44 | - [Understanding k6 results](Modules/II-k6-Foundations/03-Understanding-k6-results.md) 45 | - [Adding checks to your script](Modules/II-k6-Foundations/04-Adding-checks-to-your-script.md) 46 | - [Adding think time using sleep](Modules/II-k6-Foundations/05-Adding-think-time-using-sleep.md) 47 | - [k6 Load Test Options](Modules/II-k6-Foundations/06-k6-Load-Test-Options.md) 48 | - [Setting test criteria with thresholds](Modules/II-k6-Foundations/07-Setting-test-criteria-with-thresholds.md) 49 | - [k6 results output options](Modules/II-k6-Foundations/08-k6-results-output-options.md) 50 | - [Recording a k6 script](Modules/II-k6-Foundations/09-Recording-a-k6-script.md) 51 | 52 | ### III: k6 
Intermediate 53 | 54 | - [How to debug k6 load testing scripts](Modules/III-k6-Intermediate/01-How-to-debug-k6-load-testing-scripts.md) 55 | - [Dynamic correlation in k6](Modules/III-k6-Intermediate/02-Dynamic-correlation-in-k6.md) 56 | - [Workload modeling](Modules/III-k6-Intermediate/03-Workload-modeling.md) 57 | - [Adding test data](Modules/III-k6-Intermediate/04-Adding-test-data.md) 58 | - [Parallel requests in k6](Modules/III-k6-Intermediate/05-Parallel-requests-in-k6.md) 59 | - [Organizing code in k6 by transaction - groups and tags](Modules/III-k6-Intermediate/06-Organizing-code-in-k6-by-transaction_groups-and-tags.md) 60 | - [Setup and Teardown functions](Modules/III-k6-Intermediate/07-Setup-and-Teardown-functions.md) 61 | - [Setting load profiles with executors](Modules/III-k6-Intermediate/08-Setting-load-profiles-with-executors.md) 62 | - [Workload modeling with scenarios](Modules/III-k6-Intermediate/09-Workload-modeling-with-scenarios.md) 63 | - [Using execution context variables](Modules/III-k6-Intermediate/10-Using-execution-context-variables.md) 64 | - [Creating and using custom metrics](Modules/III-k6-Intermediate/11-Creating-and-using-custom-metrics.md) 65 | 66 | ## Contributors 67 | 68 | k6-learn would not be possible without these amazing contributors! 🌟 69 | 70 | - [Imma Valls](https://github.com/immavalls) 71 | - [Krzysztof Widera](https://github.com/kwidera) 72 | - [Leandro Melendez](https://github.com/srperf) 73 | - [Marie Cruz](https://github.com/mdcruz) 74 | - [Matt Dodson](https://github.com/MattDodsonEnglish) 75 | - [Nicole van der Hoeven](https://github.com/nicolevanderhoeven) 76 | - [Paul Balogh](https://github.com/javaducky) 77 | 78 | ## How to contribute? 79 | 80 | 1. If an issue does not already exist, start by creating one at https://github.com/grafana/k6-learn/issues. 81 | 2. [Fork](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks) this [repository](https://github.com/grafana/k6-learn).
82 | 3. Make the changes in your forked repository. Note that new branches, changes, and pushes will only affect your repository. 83 | 4. Once ready, create a [Pull Request from your fork](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork), mentioning the issue it solves (step 1). Keep in mind you might need to [sync it](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork). 84 | 5. Once you have created the Pull Request, your changes will be reviewed and either accepted, or you will be asked to make modifications. 85 | For more information about forking, visit: https://docs.github.com/en/get-started/quickstart/fork-a-repo 86 | -------------------------------------------------------------------------------- /Suggested Formats.md: -------------------------------------------------------------------------------- 1 | # Suggested Formats 2 | 3 | ## Workshops 4 | 5 | ### Beginner workshop (up to 120 mins) 6 | 7 | - See the [workshop intro](./modules/intro/slides.md) for a sample format 8 | 9 | ### One day 10 | 11 | - [Introduction to Performance Testing](Modules/I-Performance-testing-principles/01-Introduction-to-Performance-Testing.md) 12 | - [Load Testing](Modules/I-Performance-testing-principles/03-Load-Testing.md) vs performance testing 13 | - [High-level overview of the load testing process](Modules/I-Performance-testing-principles/04-High-level-overview-of-the-load-testing-process.md) 14 | - [k6 OSS vs k6 Cloud](Modules/XX-Future-Ideas/k6-OSS-vs-k6-Cloud.md) 15 | - [Getting started with k6 OSS](Modules/II-k6-Foundations/01-Getting-started-with-k6-OSS.md) 16 | - [Understanding k6 results](Modules/II-k6-Foundations/03-Understanding-k6-results.md) 17 | - [Adding checks to your script](Modules/II-k6-Foundations/04-Adding-checks-to-your-script.md) 18 | - [Adding think time using
sleep](Modules/II-k6-Foundations/05-Adding-think-time-using-sleep.md) 19 | - [k6 Load Test Options](Modules/II-k6-Foundations/06-k6-Load-Test-Options.md) 20 | - [Setting test criteria with thresholds](Modules/II-k6-Foundations/07-Setting-test-criteria-with-thresholds.md) 21 | - [k6 results output options](Modules/II-k6-Foundations/08-k6-results-output-options.md) 22 | - [Best practices for designing realistic k6 scripts](Modules/XX-Future-Ideas/Best-practices-for-designing-realistic-k6-scripts.md) 23 | - [Setting load profiles with executors](Modules/III-k6-Intermediate/08-Setting-load-profiles-with-executors.md) 24 | - [Workload modeling with scenarios](Modules/III-k6-Intermediate/09-Workload-modeling-with-scenarios.md) 25 | - [Integrating k6 with Grafana and Prometheus](Modules/XX-Future-Ideas/Integrating-k6-with-Grafana-and-Prometheus.md) 26 | 27 | ### Three-day 28 | 29 | #### Day 1 30 | 31 | 32 | 33 | #### Day 2 34 | 35 | #### Day 3 36 | 37 | ## Presentations 38 | 39 | ### 30 minutes 40 | 41 | - [Getting started with k6 OSS](Modules/II-k6-Foundations/01-Getting-started-with-k6-OSS.md) 42 | - [Adding checks to your script](Modules/II-k6-Foundations/04-Adding-checks-to-your-script.md) 43 | - [Adding think time using sleep](Modules/II-k6-Foundations/05-Adding-think-time-using-sleep.md) 44 | - [k6 Load Test Options](Modules/II-k6-Foundations/06-k6-Load-Test-Options.md) 45 | - [Setting test criteria with thresholds](Modules/II-k6-Foundations/07-Setting-test-criteria-with-thresholds.md) 46 | 47 | ### 1 hour 48 | 49 | - [Introduction to Performance Testing](Modules/I-Performance-testing-principles/01-Introduction-to-Performance-Testing.md) 50 | - [High-level overview of the load testing process](Modules/I-Performance-testing-principles/04-High-level-overview-of-the-load-testing-process.md) 51 | - [Getting started with k6 OSS](Modules/II-k6-Foundations/01-Getting-started-with-k6-OSS.md) 52 | - [Understanding k6 
results](Modules/II-k6-Foundations/03-Understanding-k6-results.md) 53 | - [Adding checks to your script](Modules/II-k6-Foundations/04-Adding-checks-to-your-script.md) 54 | - [Adding think time using sleep](Modules/II-k6-Foundations/05-Adding-think-time-using-sleep.md) 55 | - [k6 Load Test Options](Modules/II-k6-Foundations/06-k6-Load-Test-Options.md) 56 | - [Setting test criteria with thresholds](Modules/II-k6-Foundations/07-Setting-test-criteria-with-thresholds.md) 57 | - [k6 results output options](Modules/II-k6-Foundations/08-k6-results-output-options.md) 58 | - [Integrating k6 with Grafana and Prometheus](Modules/XX-Future-Ideas/Integrating-k6-with-Grafana-and-Prometheus.md) 59 | 60 | ## Video course 61 | 62 | ### Theory 63 | - [Introduction to Performance Testing](Modules/I-Performance-testing-principles/01-Introduction-to-Performance-Testing.md) 64 | - [Load testing](Modules/I-Performance-testing-principles/03-Load-Testing.md) 65 | - [Types of load tests](Modules/Types-of-load-tests.md) 66 | - [High-level overview of the load testing process](Modules/I-Performance-testing-principles/04-High-level-overview-of-the-load-testing-process.md) 67 | - [Workload modeling](Modules/III-k6-Intermediate/03-Workload-modeling.md) 68 | 69 | ### k6 Foundations 70 | - [Getting started with k6 OSS](Modules/II-k6-Foundations/01-Getting-started-with-k6-OSS.md) 71 | - [The k6 CLI](Modules/II-k6-Foundations/02-The-k6-CLI.md) 72 | - [Understanding k6 results](Modules/II-k6-Foundations/03-Understanding-k6-results.md) 73 | - [Adding checks to your script](Modules/II-k6-Foundations/04-Adding-checks-to-your-script.md) 74 | - [Adding think time using sleep](Modules/II-k6-Foundations/05-Adding-think-time-using-sleep.md) 75 | - [k6 Load Test Options](Modules/II-k6-Foundations/06-k6-Load-Test-Options.md) 76 | - [Setting test criteria with thresholds](Modules/II-k6-Foundations/07-Setting-test-criteria-with-thresholds.md) 77 | - [k6 results output 
options](Modules/II-k6-Foundations/08-k6-results-output-options.md) 78 | - [Recording a k6 script](Modules/II-k6-Foundations/09-Recording-a-k6-script.md) 79 | 80 | ### k6 Intermediate 81 | 82 | - [How to debug k6 load testing scripts](Modules/III-k6-Intermediate/01-How-to-debug-k6-load-testing-scripts.md) 83 | - [Dynamic correlation in k6](Modules/III-k6-Intermediate/02-Dynamic-correlation-in-k6.md) 84 | - [Adding test data](Modules/III-k6-Intermediate/04-Adding-test-data.md) 85 | - [Parallel requests in k6](Modules/III-k6-Intermediate/05-Parallel-requests-in-k6.md) 86 | - [Organizing code in k6 by transaction - groups and tags](Modules/III-k6-Intermediate/06-Organizing-code-in-k6-by-transaction_groups-and-tags.md) 87 | - [Setup and Teardown functions](Modules/III-k6-Intermediate/07-Setup-and-Teardown-functions.md) 88 | - [Best practices for designing realistic k6 scripts](Modules/XX-Future-Ideas/Best-practices-for-designing-realistic-k6-scripts.md) 89 | - [Setting load profiles with executors](Modules/III-k6-Intermediate/08-Setting-load-profiles-with-executors.md) 90 | - [Workload modeling with scenarios](Modules/III-k6-Intermediate/09-Workload-modeling-with-scenarios.md) 91 | - [Creating and using custom metrics](Modules/III-k6-Intermediate/11-Creating-and-using-custom-metrics.md) 92 | - [Environment variables](Modules/Environment-variables.md) 93 | - [Using execution context variables](Modules/III-k6-Intermediate/10-Using-execution-context-variables.md) 94 | 95 | -------------------------------------------------------------------------------- /images/52qQi-marketing-strategies-app-and-mobile-page-load-time-statistics-downlo.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/52qQi-marketing-strategies-app-and-mobile-page-load-time-statistics-downlo.jpg --------------------------------------------------------------------------------
/images/LoadPartOfPerf.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/LoadPartOfPerf.png -------------------------------------------------------------------------------- /images/MvsM.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/MvsM.jpg -------------------------------------------------------------------------------- /images/WvsA.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/WvsA.png -------------------------------------------------------------------------------- /images/backend-component.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/backend-component.png -------------------------------------------------------------------------------- /images/bert-relaxing.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/bert-relaxing.png -------------------------------------------------------------------------------- /images/browser-extension-icon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/browser-extension-icon.png -------------------------------------------------------------------------------- /images/continuous-testing-snowball.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/continuous-testing-snowball.png -------------------------------------------------------------------------------- /images/definitions-performance.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/definitions-performance.png -------------------------------------------------------------------------------- /images/definitions-test.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/definitions-test.png -------------------------------------------------------------------------------- /images/frontend-backend.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/frontend-backend.png -------------------------------------------------------------------------------- /images/frontend-limitations.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/frontend-limitations.png -------------------------------------------------------------------------------- /images/frontend-performance.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/frontend-performance.png -------------------------------------------------------------------------------- /images/grafana-cloud-k6.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/grafana-cloud-k6.png -------------------------------------------------------------------------------- /images/installation-page.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/installation-page.png -------------------------------------------------------------------------------- /images/k6-browser-recorder-01.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/k6-browser-recorder-01.png -------------------------------------------------------------------------------- /images/k6-browser-recorder-02.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/k6-browser-recorder-02.png -------------------------------------------------------------------------------- /images/k6-browser-recorder-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/k6-browser-recorder-3.png -------------------------------------------------------------------------------- /images/k6-browser-recorder-4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/k6-browser-recorder-4.png -------------------------------------------------------------------------------- /images/k6-browser-recorder-5.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/k6-browser-recorder-5.png -------------------------------------------------------------------------------- /images/k6-cloud-script-editor-from-recording.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/k6-cloud-script-editor-from-recording.png -------------------------------------------------------------------------------- /images/k6-cloud-script-editor.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/k6-cloud-script-editor.png -------------------------------------------------------------------------------- /images/k6-end-of-summary.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/k6-end-of-summary.png -------------------------------------------------------------------------------- /images/k6-order-of-preference-settings.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/k6-order-of-preference-settings.png -------------------------------------------------------------------------------- /images/k6-output-options.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/k6-output-options.png -------------------------------------------------------------------------------- /images/load_profile-constant.png.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/load_profile-constant.png.png -------------------------------------------------------------------------------- /images/load_profile-no_ramp-up_or_ramp-down 1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/load_profile-no_ramp-up_or_ramp-down 1.png -------------------------------------------------------------------------------- /images/load_profile-no_ramp-up_or_ramp-down.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/load_profile-no_ramp-up_or_ramp-down.png -------------------------------------------------------------------------------- /images/load_profile-no_ramp-up_or_ramp-down.png.orig: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/load_profile-no_ramp-up_or_ramp-down.png.orig -------------------------------------------------------------------------------- /images/load_profile-ramp-up.png.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/load_profile-ramp-up.png.png -------------------------------------------------------------------------------- /images/load_profile-steady_state.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/load_profile-steady_state.png -------------------------------------------------------------------------------- /images/office-hours-executors-k6.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/office-hours-executors-k6.png -------------------------------------------------------------------------------- /images/parallel-requests.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/parallel-requests.png -------------------------------------------------------------------------------- /images/scenarios-shakeout.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/scenarios-shakeout.png -------------------------------------------------------------------------------- /images/test-scenario-average.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/test-scenario-average.png -------------------------------------------------------------------------------- /images/test-scenario-soak.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/test-scenario-soak.png -------------------------------------------------------------------------------- /images/test-scenario-spike-test.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/test-scenario-spike-test.png -------------------------------------------------------------------------------- /images/test-scenario-stress.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/test-scenario-stress.png -------------------------------------------------------------------------------- /images/test-scenarios-breakpoint.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/test-scenarios-breakpoint.png -------------------------------------------------------------------------------- /images/week-of-testing-day4-youtube.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/week-of-testing-day4-youtube.png -------------------------------------------------------------------------------- /images/week-of-testing-youtube.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/grafana/k6-learn/74961d462fb915da659661fdc41edc9d9e845bb9/images/week-of-testing-youtube.png -------------------------------------------------------------------------------- /index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | k6 Learn 7 | 8 | 9 |
10 |
11 |
12 | 13 | 14 | 15 | -------------------------------------------------------------------------------- /package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "k6-learn", 3 | "private": true, 4 | "version": "0.0.0", 5 | "type": "module", 6 | "scripts": { 7 | "slides": "vite" 8 | }, 9 | "devDependencies": { 10 | "reveal.js": "^4.5.0", 11 | "vite": "^4.4.5" 12 | } 13 | } 14 | -------------------------------------------------------------------------------- /reveal.js: -------------------------------------------------------------------------------- 1 | import 'reveal.js/dist/reveal.css' 2 | import 'reveal.js/dist/theme/dracula.css' 3 | 4 | import Reveal from 'reveal.js' 5 | import Markdown from 'reveal.js/plugin/markdown/markdown.esm.js' 6 | import RevealHighlight from 'reveal.js/plugin/highlight/highlight.esm.js' 7 | 8 | const url = new URL(document.URL) 9 | const slidesFolder = url.searchParams.get('p') || 'intro' 10 | const markdownFilename = './slides/' + slidesFolder + '/slides.md' 11 | 12 | console.log(markdownFilename) 13 | 14 | fetch(markdownFilename) 15 | .then((r) => r.text()) 16 | .then((markdownContents) => { 17 | 18 | document.querySelector('.slides').innerHTML = 19 | '
' 22 | 23 | const deck = new Reveal({ 24 | plugins: [Markdown, RevealHighlight] 25 | }) 26 | 27 | deck.initialize({ 28 | width: 1280, 29 | height: 720, 30 | slideNumber: true, 31 | embedded: false, 32 | }) 33 | }) -------------------------------------------------------------------------------- /slides/01-introduction-to-performance-testing/slides.md: -------------------------------------------------------------------------------- 1 | ## What is performance testing? 2 | 3 | It measures qualitative aspects of a user's experience of a system, such as its responsiveness and reliability. 4 | 5 | --- 6 | 7 | ## Why should we do performance testing? 8 | 9 | --- 10 | 11 | - **Improve user experience.** 12 | - **Prepare for unexpected demand.** 13 | - **Increase confidence in the application.** 14 | - **Assess and optimize infrastructure.** 15 | 16 | --- 17 | 18 | ## Common excuses for not performance testing 19 | 20 | --- 21 | 22 | - **Our application is too small** 23 | - **It's expensive or time-consuming** 24 | - **It requires extensive technical knowledge** 25 | 26 | --- 27 | 28 | - **We don't have a performance environment** 29 | - **Observability trumps performance testing** 30 | - **The cloud is infinite; we can always scale up** 31 | 32 | --- 33 | 34 | ## Frontend vs backend performance testing 35 | 36 | - Move to: [02-frontend-vs-backend-performance-testing](?p=02-frontend-vs-backend-performance-testing) 37 | 38 | 39 | 40 | -------------------------------------------------------------------------------- /slides/02-frontend-vs-backend-performance-testing/slides.md: -------------------------------------------------------------------------------- 1 | 2 | 11 | 12 | ## Frontend performance testing 13 | 14 | ![An example website with the title crocs r cool](../../images/frontend-performance.png) 15 | 16 | 17 | --- 18 | 19 |
20 |
21 | 22 | ### Why isn't frontend performance testing enough? 23 | 24 | - Frontend performance does not look under the hood 25 | - Without load, it does not test under traffic conditions 26 | - Browser-level load testing is resource intensive and costly 27 | 28 |
29 | 30 |
31 | 32 | ![A graphic showing that browser load testing is costly and resource intensive](../../images/frontend-limitations.png) 33 | 34 |
35 |
36 | 37 | --- 38 | 39 | ![A chart aggregating front and backend response times. As concurrency increases, backend response time becomes much longer.](../../images/frontend-backend.png) 40 | 41 | --- 42 | 43 | ## Backend performance testing 44 | 45 | ![A component of a backend system](../../images/backend-component.png) 46 | 47 | 48 | --- 49 | 50 | ## What does backend performance testing verify? 51 | 52 | - **Scalability** 53 | - **Elasticity** 54 | - **Availability** 55 | - **Reliability** 56 | - **Resiliency** 57 | - **Latency** 58 | 59 | --- 60 | 61 | 
62 |
63 | 64 | ### Why isn't backend performance testing enough? 65 | 66 | - Ignores user experience 67 | - Scripts can be time-consuming to create 68 | - More difficult to maintain 69 | 70 | 
71 | 72 |
73 | 74 | ![A graphic showing that browser load testing is costly and resource intensive](../../images/frontend-limitations.png) 75 | 76 |
77 |
78 | 79 | --- 80 | 81 | ## Load testing 82 | 83 | - Move to: [03-load-testing](?p=03-load-testing) 84 | 85 | -------------------------------------------------------------------------------- /slides/03-load-testing/slides.md: -------------------------------------------------------------------------------- 1 | 10 | 11 | ## Load Testing 12 | 13 | **Performance testing != Load testing** 14 | 15 | --- 16 | 17 | ## Common test parameters 18 | 19 | _Test parameters_ include the distribution, shape, and pattern of the load. 20 | 21 | - **Virtual users (VUs)** 22 | - **Iterations** 23 | - **Throughput** 24 | - **User flows** 25 | - **Load profile** 26 | - **Duration** 27 | 28 | --- 29 | 30 | ## How to simulate load 31 | 32 | 1. **Protocol-based load testing** 33 | 2. **Browser-based load testing** 34 | 3. **Hybrid load testing** 35 | 36 | --- 37 | 38 | ## Load test types 39 | 40 | ### Shakeout test 41 | 42 | ![](../../images/scenarios-shakeout.png) 43 | 44 | 45 | --- 46 | 47 | ### Average load test 48 | 49 | ![](../../images/test-scenario-average.png) 50 | 51 | 52 | --- 53 | 54 | ### Stress test 55 | 56 | ![](../../images/test-scenario-stress.png) 57 | 58 | 59 | --- 60 | 61 | ### Soak or endurance test 62 | 63 | ![](../../images/test-scenario-soak.png) 64 | 65 | 66 | --- 67 | 68 | ### Spike test 69 | 70 | ![](../../images/test-scenario-spike-test.png) 71 | 72 | 73 | --- 74 | 75 | ### Breakpoint test 76 | 77 | ![](../../images/test-scenarios-breakpoint.png) 78 | 79 | 80 | --- 81 | 82 | ## Load testing process 83 | 84 |
85 |
86 | 87 | ### High-level overview 88 | 89 | - Planning for load testing 90 | - Scripting a load test 91 | - Executing load tests 92 | - Analyzing load test results 93 | 94 | 
95 | 96 |
97 | 98 | ![Continuous Testing Snowball](../../images/continuous-testing-snowball.png) 99 | 100 |
101 |
102 | 103 | --- 104 | 105 | ## k6 OSS 106 | 107 | - Move to: [04-getting-started-with-k6-oss](?p=04-getting-started-with-k6-oss) 108 | -------------------------------------------------------------------------------- /slides/04-getting-started-with-k6-oss/slides.md: -------------------------------------------------------------------------------- 1 | # Get started with k6 OSS 2 | 3 | --- 4 | 5 | ## Installation 6 | 7 | https://k6.io/docs/getting-started/installation/ 8 | 9 | ![k6 installation page](../../images/installation-page.png) 10 | 11 | 12 | --- 13 | 14 | ## Writing your first k6 script 15 | 16 | - Create a new file named `test.js`, and open it in your favorite IDE. 17 | - Import the HTTP Client from the built-in module `k6/http`: 18 | - Create and export a default function. 19 | 20 | ```js [1|3-6] 21 | import http from 'k6/http'; 22 | 23 | export default function () { 24 | // Any code in the `default` function 25 | // is executed by each k6 virtual user when the test runs. 26 | } 27 | ``` 28 | 29 | --- 30 | 31 | ## Writing your first k6 script 32 | 33 | Add the logic for making the actual HTTP call: 34 | 35 | ```js 36 | import http from 'k6/http'; 37 | 38 | export default function() { 39 | let url = 'https://httpbin.test.k6.io/post'; 40 | let response = http.post(url, 'Hello world!'); 41 | 42 | console.log(response.json().data); 43 | } 44 | ``` 45 | 46 | --- 47 | 48 | ## Writing your first k6 script 49 | 50 | Alternatively, you can also run `k6 new [filename]` to automatically create a file with the basic boilerplate to get you up and running quickly. 
😉 51 | 52 | --- 53 | 54 | ## Hello World: running your k6 script 55 | 56 | Run the command below in a terminal: 57 | 58 | ```shell 59 | k6 run test.js 60 | ``` 61 | 62 | --- 63 | 64 | You should get something like this: 65 | 66 | ```shell 67 | $ k6 run test.js 68 | 69 | /\ |‾‾| /‾‾/ /‾‾/ 70 | /\ / \ | |/ / / / 71 | / \/ \ | ( / ‾‾\ 72 | / \ | |\ \ | (‾) | 73 | / __________ \ |__| \__\ \_____/ .io 74 | 75 | execution: local 76 | script: test.js 77 | output: - 78 | 79 | scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop): 80 | * default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s) 81 | 82 | INFO[0001] Hello world! source=console 83 | 84 | running (00m00.7s), 0/1 VUs, 1 complete and 0 interrupted iterations 85 | default ✓ [======================================] 1 VUs 00m00.7s/10m0s 1/1 iters, 1 per VU 86 | 87 | data_received..................: 5.9 kB 9.0 kB/s 88 | data_sent......................: 564 B 860 B/s 89 | http_req_blocked...............: avg=524.18ms min=524.18ms med=524.18ms max=524.18ms p(90)=524.18ms p(95)=524.18ms 90 | http_req_connecting............: avg=123.28ms min=123.28ms med=123.28ms max=123.28ms p(90)=123.28ms p(95)=123.28ms 91 | http_req_duration..............: avg=130.19ms min=130.19ms med=130.19ms max=130.19ms p(90)=130.19ms p(95)=130.19ms 92 | { expected_response:true }...: avg=130.19ms min=130.19ms med=130.19ms max=130.19ms p(90)=130.19ms p(95)=130.19ms 93 | http_req_failed................: 0.00% ✓ 0 ✗ 1 94 | http_req_receiving.............: avg=165µs min=165µs med=165µs max=165µs p(90)=165µs p(95)=165µs 95 | http_req_sending...............: avg=80µs min=80µs med=80µs max=80µs p(90)=80µs p(95)=80µs 96 | http_req_tls_handshaking.......: avg=399.48ms min=399.48ms med=399.48ms max=399.48ms p(90)=399.48ms p(95)=399.48ms 97 | http_req_waiting...............: avg=129.94ms min=129.94ms med=129.94ms max=129.94ms p(90)=129.94ms p(95)=129.94ms 98 | http_reqs......................: 1 1.525116/s 
99 | iteration_duration.............: avg=654.72ms min=654.72ms med=654.72ms max=654.72ms p(90)=654.72ms p(95)=654.72ms 100 | iterations.....................: 1 1.525116/s 101 | 102 | ``` 103 | 104 | --- 105 | 106 | ## k6 CLI 107 | 108 | - Move to: [05-the-k6-cli](?p=05-the-k6-cli) 109 | -------------------------------------------------------------------------------- /slides/05-the-k6-cli/slides.md: -------------------------------------------------------------------------------- 1 | # The k6 CLI 2 | 3 | --- 4 | 5 | ## Commands 6 | 7 | The three most common commands are: 8 | 9 | | Command | Description | Usage | 10 | | --------- | ------------------------------ | ---------------- | 11 | | `help` | Displays all possible commands | `k6 help` | 12 | | `run` | Executes a k6 script | `k6 run test.js` | 13 | | `version` | Displays installed k6 version | `k6 version` | 14 | 15 | --- 16 | 17 | ## Flags 18 | 19 | | Flag | Description | Usage | 20 | | ---------------------- | -------------------------------------------------------------- | ---------------------------------------- | 21 | | `--help` | Displays possible flags for the given command | `k6 run --help` | 22 | | `--vus` or `-u` | Sets number of virtual users | `k6 run test.js --vus 10 --duration 30s` | 23 | | `--duration` | Sets the duration of the test | `k6 run test.js --duration 10m` | 24 | 25 | --- 26 | 27 | | Flag | Description | Usage | 28 | | ---------------------- | -------------------------------------------------------------- | ---------------------------------------- | 29 | | `--iterations` or `-i` | Instructs k6 to iterate the default function a number of times | `k6 run test.js -i 3` | 30 | | `-e` | Sets an environment variable to pass to the script | `k6 run test.js -e DOMAIN=test.k6.io` 31 | 32 | --- 33 | 34 | ## Getting help: `help` 35 | 36 | Running `k6 help` returns the following: 37 | 38 | ```shell 39 | /\ |‾‾| /‾‾/ /‾‾/ 40 | /\ / \ | |/ / / / 41 | / \/ \ | ( / ‾‾\ 42 | / \ | |\ \ | (‾) | 43 | / 
__________ \ |__| \__\ \_____/ .io 44 | 45 | Usage: 46 | k6 [command] 47 | 48 | Available Commands: 49 | archive Create an archive 50 | cloud Run a test on the cloud 51 | convert Convert a HAR file to a k6 script 52 | help Help about any command 53 | inspect Inspect a script or archive 54 | login Authenticate with a service 55 | pause Pause a running test 56 | resume Resume a paused test 57 | run Start a load test 58 | scale Scale a running test 59 | stats Show test metrics 60 | status Show test status 61 | version Show application version 62 | 63 | Flags: 64 | -a, --address string address for the api server (default "localhost:6565") 65 | -c, --config string JSON config file (default "/Users/nic/Library/Application Support/loadimpact/k6/config.json") 66 | -h, --help help for k6 67 | --log-output string change the output for k6 logs, possible values are stderr,stdout,none,loki[=host:port] (default "stderr") 68 | --log-format string log output format 69 | --no-color disable colored output 70 | -q, --quiet disable progress updates 71 | -v, --verbose enable verbose logging 72 | 73 | Use "k6 [command] --help" for more information about a command. 74 | ``` 75 | 76 | --- 77 | 78 | ## Execution and Execution Options: `run` 79 | 80 | Another common k6 command is `k6 run [filename].js`. 81 | 82 | --- 83 | 84 | ### The duration flag 85 | 86 | The duration specifies how long the test executes for. You can set this on the command line with the flag `--duration`: 87 | 88 | ```shell 89 | k6 run test.js --duration 30s 90 | ``` 91 | 92 | You can use `s`, `h`, and `m` to define the duration. 
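k6 duration values follow Go's duration syntax, so units can also be combined when a single unit is too coarse. For example, to run the same `test.js` for one and a half minutes:

```shell
k6 run test.js --duration 1m30s
```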
93 | 94 | --- 95 | 96 | ### The iterations flag 97 | 98 | You can set the number of iterations with the `--iterations` or `-i` flag, like this: 99 | 100 | ```shell 101 | k6 run test.js --iterations 100 102 | k6 run test.js -i 100 103 | ``` 104 | 105 | --- 106 | 107 | ### Virtual user flags 108 | 109 | You can adjust the number of virtual users with the `-u` or `--vus` flag when running the test: 110 | 111 | ```shell 112 | k6 run test.js --vus 10 --duration 1m 113 | k6 run test.js -u 10 --iterations 100 114 | ``` 115 | 116 | --- 117 | 118 | ### Environment variables 119 | 120 | To use an environment variable, define the variable in your script: 121 | 122 | ```js [3|5-7] 123 | import http from 'k6/http'; 124 | 125 | const hostname = `http://${__ENV.DOMAIN}`; 126 | 127 | export default function () { 128 | let res = http.get(hostname + '/my_messages.php'); 129 | } 130 | ``` 131 | 132 | --- 133 | 134 | Here's how to define it at runtime: 135 | 136 | ```shell 137 | k6 run test.js -e DOMAIN=test.k6.io 138 | ``` 139 | 140 | --- 141 | 142 | ## Changing settings in k6 143 | 144 | k6 always [prioritizes settings](https://k6.io/docs/using-k6/k6-options/how-to/#order-of-precedence) in this order: 145 | 146 | ![Priority of configurations and settings in k6](../../images/k6-order-of-preference-settings.png) 147 | 148 | 149 | --- 150 | 151 | ## Understanding k6 results 152 | 153 | - Move to: [06-understanding-k6-results](?p=06-understanding-k6-results) 154 | 155 | -------------------------------------------------------------------------------- /slides/06-understanding-k6-results/slides.md: -------------------------------------------------------------------------------- 1 | # k6 results 2 | 3 | --- 4 | 5 | ## The End-of-test summary report 6 | 7 | Here's that output again: 8 | 9 | ```shell 10 | $ k6 run test.js 11 | 12 | /\ |‾‾| /‾‾/ /‾‾/ 13 | /\ / \ | |/ / / / 14 | / \/ \ | ( / ‾‾\ 15 | / \ | |\ \ | (‾) | 16 | / __________ \ |__| \__\ \_____/ .io 17 | 18 | execution: local
19 | script: test.js 20 | output: - 21 | 22 | scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop): 23 | * default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s) 24 | 25 | INFO[0001] Hello world! source=console 26 | 27 | running (00m00.7s), 0/1 VUs, 1 complete and 0 interrupted iterations 28 | default ✓ [======================================] 1 VUs 00m00.7s/10m0s 1/1 iters, 1 per VU 29 | 30 | data_received..................: 5.9 kB 9.0 kB/s 31 | data_sent......................: 564 B 860 B/s 32 | http_req_blocked...............: avg=524.18ms min=524.18ms med=524.18ms max=524.18ms p(90)=524.18ms p(95)=524.18ms 33 | http_req_connecting............: avg=123.28ms min=123.28ms med=123.28ms max=123.28ms p(90)=123.28ms p(95)=123.28ms 34 | http_req_duration..............: avg=130.19ms min=130.19ms med=130.19ms max=130.19ms p(90)=130.19ms p(95)=130.19ms 35 | { expected_response:true }...: avg=130.19ms min=130.19ms med=130.19ms max=130.19ms p(90)=130.19ms p(95)=130.19ms 36 | http_req_failed................: 0.00% ✓ 0 ✗ 1 37 | http_req_receiving.............: avg=165µs min=165µs med=165µs max=165µs p(90)=165µs p(95)=165µs 38 | http_req_sending...............: avg=80µs min=80µs med=80µs max=80µs p(90)=80µs p(95)=80µs 39 | http_req_tls_handshaking.......: avg=399.48ms min=399.48ms med=399.48ms max=399.48ms p(90)=399.48ms p(95)=399.48ms 40 | http_req_waiting...............: avg=129.94ms min=129.94ms med=129.94ms max=129.94ms p(90)=129.94ms p(95)=129.94ms 41 | http_reqs......................: 1 1.525116/s 42 | iteration_duration.............: avg=654.72ms min=654.72ms med=654.72ms max=654.72ms p(90)=654.72ms p(95)=654.72ms 43 | iterations.....................: 1 1.525116/s 44 | 45 | ``` 46 | 47 | --- 48 | 49 | ## k6 built-in metrics 50 | 51 | ### Response time 52 | 53 | ```shell 54 | http_req_duration..............: avg=130.19ms min=130.19ms med=130.19ms max=130.19ms p(90)=130.19ms p(95)=130.19ms 55 | ``` 56 | 57 | --- 58 
| 59 | ### http_req_duration 60 | 61 | > 💡 `http_req_duration` is the value for *all* requests. 62 | 63 | The line below reports the response time for *only* the successful requests. 64 | 65 | ```shell 66 | { expected_response:true }...: avg=130.19ms min=130.19ms med=130.19ms max=130.19ms p(90)=130.19ms p(95)=130.19ms 67 | ``` 68 | 69 | --- 70 | 71 | ### Error rate 72 | 73 | The `http_req_failed` metric describes the error rate for the test. The error rate is the number of requests that failed during the test as a percentage of the total requests. 74 | 75 | ```shell 76 | http_req_failed................: 0.00% ✓ 0 ✗ 1 77 | ``` 78 | 79 | --- 80 | 81 | ### Number of requests 82 | 83 | The number of total requests sent by all VUs during the test is described in the line below. 84 | 85 | ```shell 86 | http_reqs......................: 1 1.525116/s 87 | ``` 88 | 89 | --- 90 | 91 | ### Iteration duration 92 | 93 | The iteration duration is the amount of time it took for k6 to perform a single loop of your VU code. 94 | 95 | ```shell 96 | iteration_duration.............: avg=654.72ms min=654.72ms med=654.72ms max=654.72ms p(90)=654.72ms p(95)=654.72ms 97 | ``` 98 | 99 | --- 100 | 101 | ### Number of iterations 102 | 103 | The number of iterations describes how many times k6 looped through your script in total, including the iterations for all VUs. 
104 | 105 | ```plain 106 | iterations.....................: 1 1.525116/s 107 | ``` 108 | 109 | --- 110 | 111 | ## Adding checks to your k6 script 112 | 113 | - Move to: [07-adding-checks-to-your-script](?p=07-adding-checks-to-your-script) -------------------------------------------------------------------------------- /slides/07-adding-checks-to-your-script/slides.md: -------------------------------------------------------------------------------- 1 | # k6 checks 2 | 3 | --- 4 | 5 | ## Our script 6 | 7 | ```js 8 | import http from 'k6/http'; 9 | 10 | export default function() { 11 | let url = 'https://httpbin.test.k6.io/post'; 12 | let response = http.post(url, 'Hello world!'); 13 | 14 | console.log(response.json().data); 15 | } 16 | ``` 17 | 18 | --- 19 | 20 | ## Add checks to your script 21 | 22 | ```js [1|3-6] 23 | import { check } from 'k6'; 24 | 25 | check(response, { 26 | 'Application says hello': (r) => r.body.includes('Hello world!') 27 | }); 28 | 29 | ``` 30 | 31 | --- 32 | 33 | ## Let's run our test again! 34 | 35 | Do you remember the command to run the test? 👀 36 | 37 | ```shell 38 | ✓ Application says hello 39 | 40 | checks.........................: 100.00% ✓ 1 ✗ 0 41 | ``` 42 | 43 | --- 44 | 45 | ## Failed checks 46 | 47 | ```js 48 | check(response, { 49 | 'Application says hello': (r) => r.body.includes('Bonjour!') 50 | }); 51 | 52 | ``` 53 | 54 | --- 55 | 56 | ## Let's run our test again! 57 | 58 | ```shell 59 | ✗ Application says hello 60 | ↳ 0% — ✓ 0 / ✗ 1 61 | 62 | checks.........................: 0.00% ✓ 0 ✗ 1 63 | ``` 64 | 65 | --- 66 | 67 | ## Failed checks are not errors! 68 | 69 | > 💡 To make failing checks stop your test, you can [combine them with thresholds](https://k6.io/docs/using-k6/thresholds/#failing-a-load-test-using-checks).
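As a sketch of what that combination looks like (our own example, not part of the workshop repo), a threshold on the built-in `checks` rate metric can abort the run when too many checks fail:

```js
import http from 'k6/http';
import { check } from 'k6';

export let options = {
  thresholds: {
    // `checks` is a built-in rate metric (fraction of passing checks).
    // abortOnFail stops the whole test once the threshold is crossed.
    checks: [{ threshold: 'rate>=0.9', abortOnFail: true }],
  },
};

export default function () {
  let response = http.post('https://httpbin.test.k6.io/post', 'Hello world!');
  check(response, {
    'Application says hello': (r) => r.body.includes('Hello world!'),
  });
}
```

Run it with `k6 run` as before; a failing check now fails (and can stop) the test instead of only being reported.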
70 | 71 | --- 72 | 73 | ## Making your scripts realistic with think time 74 | 75 | - Move to: [08-adding-think-time](?p=08-adding-think-time) -------------------------------------------------------------------------------- /slides/08-adding-think-time/slides.md: -------------------------------------------------------------------------------- 1 | # Think time 2 | 3 | --- 4 | 5 | ## What is think time? 6 | 7 | The amount of time that a script pauses during test execution to simulate delays that real users have in the course of using an application. 8 | 9 | --- 10 | 11 | ## When should you use think time? 12 | 13 | - If your test follows a user flow 14 | - If you want to simulate actions that take some time to carry out 15 | - Your load generator, or the machine you're running k6 from, displays high (> 80%) CPU utilization during test execution. 16 | 17 | --- 18 | 19 | ## When should you **NOT** use think time? 20 | 21 | - You want to do a [stress test](https://k6.io/docs/test-types/stress-testing/) 22 | - The API endpoint you're testing experiences a high amount of requests per second in production that occur without delays 23 | - Your load generator can run your test script without crossing the 80% CPU utilization mark. 24 | 25 | --- 26 | 27 | ## k6 Sleep 28 | 29 | ```js [2|11] 30 | import http from 'k6/http'; 31 | import { check, sleep } from 'k6'; 32 | 33 | export default function() { 34 | let url = 'https://httpbin.test.k6.io/post'; 35 | let response = http.post(url, 'Hello world!'); 36 | check(response, { 37 | 'Application says hello': (r) => r.body.includes('Hello world!') 38 | }); 39 | 40 | sleep(1); 41 | } 42 | ``` 43 | 44 | --- 45 | 46 | **Sleep does not affect the response time (`http_req_duration`); the response time is always reported with sleep removed. Sleep is, however, included in the `iteration_duration`.** 47 | 48 | --- 49 | 50 | ## Dynamic think time 51 | 52 | A dynamic think time is more realistic, and simulates real users more accurately. 
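Conceptually, dynamic think time just means drawing the pause from a range instead of using a constant. A plain-JavaScript sketch (the helper name is ours, not part of the k6 API):

```js
// Pick a think time, in seconds, uniformly distributed between min and max.
// In a k6 script, the result would be passed to sleep().
function randomThinkTime(min, max) {
  return min + Math.random() * (max - min);
}

// Every iteration gets a different pause somewhere in [1, 5) seconds.
const pause = randomThinkTime(1, 5);
console.log(pause >= 1 && pause < 5); // true
```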
53 | 54 | --- 55 | 56 | ### Random sleep 57 | 58 | One way to implement dynamic think time is to use the JavaScript `Math.random()` function: 59 | 60 | ```js 61 | sleep(Math.random() * 5); 62 | ``` 63 | 64 | --- 65 | 66 | ### Random sleep between 67 | 68 | ```js [1|3] 69 | import { randomIntBetween } from "https://jslib.k6.io/k6-utils/1.0.0/index.js"; 70 | 71 | sleep(randomIntBetween(1,5)); 72 | ``` 73 | 74 | --- 75 | 76 | ## How much think time should you add? 77 | 78 | The real answer is: it depends. 79 | 80 | --- 81 | 82 | ## Let's move to scaling up your test 83 | 84 | - Move to: [09-load-test-options](?p=09-load-test-options) -------------------------------------------------------------------------------- /slides/09-load-test-options/slides.md: -------------------------------------------------------------------------------- 1 | # k6 Load Test Options 2 | 3 | --- 4 | 5 | ## Script options 6 | 7 | ```js 8 | export let options = { 9 | vus: 10, 10 | iterations: 40, 11 | }; 12 | ``` 13 | 14 | --- 15 | 16 | > 💡 If you only define VUs and no other test options, you may get the following error: 17 | 18 | ```shell 19 | /\ |‾‾| /‾‾/ /‾‾/ 20 | /\ / \ | |/ / / / 21 | / \/ \ | ( / ‾‾\ 22 | / \ | |\ \ | (‾) | 23 | / __________ \ |__| \__\ \_____/ .io 24 | 25 | WARN[0000] the `vus=10` option will be ignored, it only works in conjunction with `iterations`, `duration`, or `stages` 26 | execution: local 27 | script: test.js 28 | output: - 29 | ``` 30 | 31 | --- 32 | 33 | ## Iterations 34 | 35 | ```js 36 | vus: 10, 37 | iterations: 40, 38 | ``` 39 | 40 | > Setting the number of iterations in test options defines it for **all** users. 41 | 42 | --- 43 | 44 | ## Duration 45 | 46 | ```js 47 | vus: 10, 48 | duration: '2m' 49 | ``` 50 | 51 | > Setting the duration instructs k6 to repeat the script for each of the VUs until the duration is reached. 
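Duration strings like `'2m'` are easier to reason about once converted to seconds. A plain-JavaScript sketch of that conversion (a hypothetical helper handling single-unit durations only; k6 itself also accepts compound values such as `'1m30s'`):

```js
// Convert a single-unit duration string ('30s', '2m', '1h') to seconds.
function toSeconds(duration) {
  const units = { s: 1, m: 60, h: 3600 };
  const match = /^(\d+)([smh])$/.exec(duration);
  if (!match) throw new Error(`Unsupported duration: ${duration}`);
  return Number(match[1]) * units[match[2]];
}

console.log(toSeconds('2m')); // 120
console.log(toSeconds('1h')); // 3600
```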
52 | 53 | --- 54 | 55 | ## Iterations and durations 56 | 57 | ```js 58 | vus: 10, 59 | duration: '5m', 60 | iterations: 40, 61 | ``` 62 | 63 | > If you set the duration in conjunction with setting the number of iterations, the value that ends earlier is used. 64 | 65 | --- 66 | 67 | ## Stages 68 | 69 | Defining iterations and duration creates a _simple load profile_. 70 | 71 | ![A simple load profile](../../images/load_profile-no_ramp-up_or_ramp-down.png) 72 | 73 | 74 | --- 75 | 76 | ## Constant load profile 77 | 78 | What if you want to add a ramp-up or ramp-down, so that the profile looks more like this? 79 | 80 | ![Constant load profile, with ramps](../../images/load_profile-constant.png.png) 81 | 82 | 83 | --- 84 | 85 | In that case, you may want to use [stages](https://k6.io/docs/using-k6/options/#stages). 86 | 87 | ```js 88 | export let options = { 89 | stages: [ 90 | { duration: '30m', target: 100 }, 91 | { duration: '1h', target: 100 }, 92 | { duration: '5m', target: 0 }, 93 | ], 94 | }; 95 | ``` 96 | 97 | --- 98 | 99 | ## The full script so far 100 | 101 | If you're using stages, here's what your script should look like so far: 102 | 103 | ```js 104 | import http from 'k6/http'; 105 | import { check, sleep } from 'k6'; 106 | 107 | export let options = { 108 | stages: [ 109 | { duration: '30m', target: 100 }, 110 | { duration: '1h', target: 100 }, 111 | { duration: '5m', target: 0 }, 112 | ], 113 | }; 114 | 115 | export default function() { 116 | let url = 'https://httpbin.test.k6.io/post'; 117 | let response = http.post(url, 'Hello world!'); 118 | check(response, { 119 | 'Application says hello': (r) => r.body.includes('Hello world!') 120 | }); 121 | 122 | sleep(Math.random() * 5); 123 | } 124 | ``` 125 | 126 | --- 127 | 128 | ## Let's set some thresholds 129 | 130 | - Move to [10-setting-test-criteria-with-thresholds](?p=10-setting-test-criteria-with-thresholds) --------------------------------------------------------------------------------
/slides/10-setting-test-criteria-with-thresholds/slides.md: -------------------------------------------------------------------------------- 1 | # k6 Thresholds 2 | 3 | --- 4 | 5 | ## Add thresholds to your script 6 | 7 | ```js [7-10] 8 | export let options = { 9 | stages: [ 10 | { duration: '30m', target: 100 }, 11 | { duration: '1h', target: 100 }, 12 | { duration: '5m', target: 0 }, 13 | ], 14 | thresholds: { 15 | http_req_failed: ['rate<=0.05'], 16 | http_req_duration: ['p(95)<=5000'], 17 | }, 18 | }; 19 | ``` 20 | 21 | --- 22 | 23 | > 💡 Thresholds are **always** based on metrics. 24 | 25 | --- 26 | 27 | ## Types of thresholds 28 | 29 | - Error rate 30 | - Response time 31 | - Checks 32 | 33 | > 💡 Recommended: Use error rate, response time, and checks thresholds in your tests where possible. 34 | 35 | --- 36 | 37 | ### Error rate 38 | 39 | ```js 40 | thresholds: { 41 | http_req_failed: ['rate<=0.05'], 42 | }, 43 | ``` 44 | 45 | --- 46 | 47 | ### Response time 48 | 49 | ```js 50 | thresholds: { 51 | http_req_duration: ['p(95)<=5000'], 52 | }, 53 | ``` 54 | 55 | > 💡 Recommended: Start with the 95th percentile response time. 
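To build intuition for what `p(95)` means, here is one way to compute a percentile from raw response times. This is a plain-JavaScript sketch using the nearest-rank method; k6's internal calculation may differ in detail:

```js
// Nearest-rank percentile: the smallest value such that at least
// p% of the observations are less than or equal to it.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// 19 "normal" response times (100..280 ms) plus one slow outlier.
const durations = Array.from({ length: 19 }, (_, i) => 100 + i * 10);
durations.push(5000);

console.log(percentile(durations, 95)); // 280 (the outlier sits beyond p95)
console.log(percentile(durations, 50)); // 190
```

With a threshold like `p(95)<=5000`, this sample would pass: 95% of requests finished in 280 ms or less.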
56 | 57 | --- 58 | 59 | #### Using multiple response time thresholds 60 | 61 | ```js 62 | thresholds: { 63 | http_req_duration: ['p(90) < 400', 'p(95) < 800', 'p(99.9) < 2000'], 64 | }, 65 | ``` 66 | 67 | --- 68 | 69 | ### Checks 70 | 71 | ```js 72 | thresholds: { 73 | checks: ['rate>=0.9'], 74 | }, 75 | ``` 76 | 77 | --- 78 | 79 | ## Aborting test on fail 80 | 81 | ```js 82 | thresholds: { 83 | http_req_failed: [{ 84 | threshold: 'rate<=0.05', 85 | abortOnFail: true, 86 | }], 87 | }, 88 | ``` 89 | 90 | --- 91 | 92 | ## The full script 93 | 94 | ```js 95 | import http from 'k6/http'; 96 | import { check, sleep } from 'k6'; 97 | 98 | export let options = { 99 | stages: [ 100 | { duration: '30m', target: 100 }, 101 | { duration: '1h', target: 100 }, 102 | { duration: '5m', target: 0 }, 103 | ], 104 | thresholds: { 105 | http_req_failed: [{ 106 | threshold: 'rate<=0.05', 107 | abortOnFail: true, 108 | }], 109 | http_req_duration: ['p(95)<=100'], 110 | checks: ['rate>=0.99'], 111 | }, 112 | }; 113 | 114 | export default function() { 115 | let url = 'https://httpbin.test.k6.io/post'; 116 | let response = http.post(url, 'Hello world!'); 117 | check(response, { 118 | 'Application says hello': (r) => r.body.includes('Hello world!') 119 | }); 120 | 121 | sleep(Math.random() * 5); 122 | } 123 | ``` 124 | 125 | --- 126 | 127 | ## Output k6 results in different ways 128 | 129 | - Move to [11-k6-results-output-options](?p=11-k6-results-output-options) -------------------------------------------------------------------------------- /slides/11-k6-results-output-options/slides.md: -------------------------------------------------------------------------------- 1 | # k6 results 2 | 3 | --- 4 | 5 | ## What we have done so far 6 | 7 | ![k6 end of summary report](../../images/k6-end-of-summary.png) 8 | 9 | --- 10 | 11 | ## The output option 12 | 13 | ![k6 output options](../../images/k6-output-options.png) 14 | 15 | --- 16 | 17 | ### Saving k6 results as a CSV 18 | 19 | ```shell 20 | 
k6 run test.js -o csv=results.csv 21 | ``` 22 | 23 | > 💡 You can also use `--out` instead of `-o`. 24 | 25 | --- 26 | 27 | ### CSV results output format 28 | 29 | ```csv 30 | metric_name,timestamp,metric_value,check,error,error_code,expected_response,group,method,name,proto,scenario,service,status,subproto,tls_version,url,extra_tags 31 | http_reqs,1641298536,1.000000,,,,true,,POST,https://httpbin.test.k6.io/post,HTTP/1.1,default,,200,,tls1.2,https://httpbin.test.k6.io/post, 32 | http_req_duration,1641298536,114.365000,,,,true,,POST,https://httpbin.test.k6.io/post,HTTP/1.1,default,,200,,tls1.2,https://httpbin.test.k6.io/post, 33 | http_req_blocked,1641298536,589.667000,,,,true,,POST,https://httpbin.test.k6.io/post,HTTP/1.1,default,,200,,tls1.2,https://httpbin.test.k6.io/post, 34 | http_req_connecting,1641298536,117.517000,,,,true,,POST,https://httpbin.test.k6.io/post,HTTP/1.1,default,,200,,tls1.2,https://httpbin.test.k6.io/post, 35 | http_req_tls_handshaking,1641298536,415.043000,,,,true,,POST,https://httpbin.test.k6.io/post,HTTP/1.1,default,,200,,tls1.2,https://httpbin.test.k6.io/post, 36 | http_req_sending,1641298536,0.251000,,,,true,,POST,https://httpbin.test.k6.io/post,HTTP/1.1,default,,200,,tls1.2,https://httpbin.test.k6.io/post, 37 | http_req_waiting,1641298536,114.010000,,,,true,,POST,https://httpbin.test.k6.io/post,HTTP/1.1,default,,200,,tls1.2,https://httpbin.test.k6.io/post, 38 | http_req_receiving,1641298536,0.104000,,,,true,,POST,https://httpbin.test.k6.io/post,HTTP/1.1,default,,200,,tls1.2,https://httpbin.test.k6.io/post, 39 | http_req_failed,1641298536,0.000000,,,,true,,POST,https://httpbin.test.k6.io/post,HTTP/1.1,default,,200,,tls1.2,https://httpbin.test.k6.io/post, 40 | checks,1641298536,1.000000,Application says hello,,,,,,,,default,,,,,, 41 | vus,1641298536,2.000000,,,,,,,,,,,,,,, 42 | vus_max,1641298536,100.000000,,,,,,,,,,,,,,, 43 | ``` 44 | 45 | --- 46 | 47 | ### Saving k6 results as a JSON 48 | 49 | ```shell 50 | k6 run test.js -o 
json=results.json 51 | ``` 52 | 53 | ### JSON results output format 54 | 55 | ```json 56 | {"type":"Metric","data":{"name":"http_reqs","type":"counter","contains":"default","tainted":null,"thresholds":[],"submetrics":null,"sub":{"name":"","parent":"","suffix":"","tags":null}},"metric":"http_reqs"} 57 | {"type":"Point","data":{"time":"2022-01-05T12:46:23.603474+01:00","value":1,"tags":{"expected_response":"true","group":"","method":"POST","name":"https://httpbin.test.k6.io/post","proto":"HTTP/1.1","scenario":"default","status":"200","tls_version":"tls1.2","url":"https://httpbin.test.k6.io/post"}},"metric":"http_reqs"} 58 | {"type":"Metric","data":{"name":"http_req_duration","type":"trend","contains":"time","tainted":null,"thresholds":["p(95)<=100"],"submetrics":null,"sub":{"name":"","parent":"","suffix":"","tags":null}},"metric":"http_req_duration"} 59 | {"type":"Point","data":{"time":"2022-01-05T12:46:23.603474+01:00","value":118.96,"tags":{"expected_response":"true","group":"","method":"POST","name":"https://httpbin.test.k6.io/post","proto":"HTTP/1.1","scenario":"default","status":"200","tls_version":"tls1.2","url":"https://httpbin.test.k6.io/post"}},"metric":"http_req_duration"} 60 | {"type":"Metric","data":{"name":"http_req_blocked","type":"trend","contains":"time","tainted":null,"thresholds":[],"submetrics":null,"sub":{"name":"","parent":"","suffix":"","tags":null}},"metric":"http_req_blocked"} 61 | ``` 62 | 63 | --- 64 | 65 | ## Grafana Cloud k6 66 | 67 | ![Grafana Cloud k6](../../images/grafana-cloud-k6.png) 68 | 69 | 70 | --- 71 | 72 | ## Let's wrap this up! 
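One last aside: because the JSON output is line-delimited, it is easy to post-process with a few lines of code. Here is a minimal sketch in Node.js; the sample lines are abbreviated versions of the output above, and the nearest-rank percentile is an illustrative approximation, not necessarily k6's exact `p(95)` math:

```javascript
// Sample NDJSON lines, as produced by `k6 run test.js -o json=results.json`
// (trimmed for brevity; real entries carry more tags).
const sample = [
  '{"type":"Point","data":{"time":"2022-01-05T12:46:23Z","value":118.96},"metric":"http_req_duration"}',
  '{"type":"Point","data":{"time":"2022-01-05T12:46:24Z","value":90.1},"metric":"http_req_duration"}',
  '{"type":"Metric","data":{"name":"http_reqs"},"metric":"http_reqs"}',
];

// Pull individual samples ("Point" entries) for one metric.
function durations(lines, metric = 'http_req_duration') {
  return lines
    .map((line) => JSON.parse(line))
    .filter((e) => e.type === 'Point' && e.metric === metric)
    .map((e) => e.data.value);
}

// Nearest-rank percentile: fine for a quick sanity check.
function p95(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil(0.95 * sorted.length) - 1);
  return sorted[idx];
}

const values = durations(sample);
console.log(values.length, p95(values)); // 2 118.96
```

In practice you would read `results.json` line by line instead of the hardcoded `sample` array.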
73 | 74 | - Move to [end](?p=end) 75 | -------------------------------------------------------------------------------- /slides/end/slides.md: -------------------------------------------------------------------------------- 1 | # The End 🎉 2 | 3 | --- 4 | 5 | ## Further learning 6 | 7 | - [k6.io](https://k6.io/) 8 | - [k6.io/docs](https://k6.io/docs/) 9 | - [youtube.com/k6test](https://www.youtube.com/k6test) 10 | - [github.com/grafana/quickpizza](https://github.com/grafana/quickpizza) 11 | 12 | --- 13 | 14 | ## Thank you! 15 | 16 | ![Bert relaxing](../../images/bert-relaxing.png) -------------------------------------------------------------------------------- /slides/intro/slides.md: -------------------------------------------------------------------------------- 1 | # k6-Learn Workshop 2 | 3 | https://github.com/grafana/k6-learn 4 | 5 | --- 6 | 7 | ## What will you learn today? 8 | 9 | Performance testing and k6! 💜 10 | 11 | --- 12 | 13 | ## Structure (Beginner) 14 | 15 | I. Performance Testing Principles 16 | - [Introduction to performance testing](?p=01-introduction-to-performance-testing) 17 | - [Frontend vs backend performance testing](?p=02-frontend-vs-backend-performance-testing) 18 | - [Load testing and overview of the load testing process](?p=03-load-testing) 19 | 20 | --- 21 | 22 | ## Structure (Beginner) 23 | 24 | II. 
k6 Foundations 25 | - [Getting started with k6 OSS](?p=04-getting-started-with-k6-oss) 26 | - [The k6 CLI](?p=05-the-k6-cli) 27 | - [Understanding k6 results](?p=06-understanding-k6-results) 28 | - [Adding checks to your script](?p=07-adding-checks-to-your-script) 29 | - [Adding think time using sleep](?p=08-adding-think-time) 30 | - [k6 load test options](?p=09-load-test-options) 31 | - [Setting test criteria with thresholds](?p=10-setting-test-criteria-with-thresholds) 32 | - [k6 results output options](?p=11-k6-results-output-options) 33 | 34 | --- 35 | 36 | If you have experience with k6, please help others during the workshop 🙏 37 | 38 | --- 39 | 40 | ## How will this workshop work? 41 | 42 | 1. I'll go over some basic concepts 43 | 2. We'll do the exercises together 44 | 3. Ask questions! 45 | 46 | --- 47 | 48 | ## Workshop Requirements 49 | 50 | You will need: 51 | 52 | - `k6` OSS installed locally on your machine 53 | - Basic knowledge of JavaScript 54 | 55 | --- 56 | 57 | ## Let's get started! 58 | 59 | - Move to: [01-introduction-to-performance-testing](?p=01-introduction-to-performance-testing) - for beginners 60 | -------------------------------------------------------------------------------- /vite.config.js: -------------------------------------------------------------------------------- 1 | export default { 2 | publicDir: 'slides', 3 | } --------------------------------------------------------------------------------