
The Goose Book

Have you ever been attacked by a goose?

What Is Goose?

Goose is a Rust load testing tool inspired by Locust. User behavior is defined with standard Rust code. Load tests are applications that have a dependency on the Goose library. Web requests are made with the Reqwest HTTP Client.

Request statistics report

Advantages

Goose generates at least 11x as much traffic as Locust per CPU core, with even larger gains for more complex load tests (such as those using third-party libraries to scrape form content). While Locust requires you to manage a distributed load test simply to use multiple CPU cores on a single server, Goose leverages all available CPU cores with a single process, drastically simplifying the process of running larger load tests. Ongoing improvements to the codebase continue to bring new features and faster performance. Goose scales far better than Locust, efficiently using available resources to accomplish its goal. Its asynchronous design also makes it possible to ramp up thousands of users from a single server, easily and consistently.

Goose’s distributed testing design is similar to Locust’s, in that it uses a one-Manager-to-many-Workers model. However, unlike Locust, you do not need to spin up a distributed load test to leverage all available cores on a single server, as a single Goose process will fully leverage all available cores. Goose distributed load tests scale near-perfectly: once started, each Worker performs its load test without any direction from the Manager, and the Manager simply collects statistics from all the Workers for final reporting. In other words, one Manager controlling eight Workers on a single 8-CPU-core server generates the same amount of load as a single standalone Goose process independently leveraging all eight cores.

Goose has a number of unique debugging and logging mechanisms not found in other load testing tools, simplifying the writing of load tests and the analysis of results. Goose also provides more comprehensive metrics with multiple simple views into the data, and makes it easy to confirm that the load test is doing what you expect it to as you scale it up or down. It exposes the algorithms used to allocate scenarios and contained transactions, giving more granular control over the order and consistency of operations, important for easily repeatable testing.

What's Missing

At this time, the biggest missing feature of Goose is a UI for controlling and monitoring load tests, but this is a work in progress. A recently completed first step toward this goal was the addition of an optional HTML report generated at the end of a load test.

Brought To You By

Goose development is sponsored by Tag1 Consulting, led by Tag1's CEO, Jeremy Andrews, along with many community contributions. Tag1 is a member of the Rust Foundation.

Additional Documentation

Requirements

  • In order to write load tests, you must first install Rust.

  • Goose load tests are managed with Cargo, the Rust package manager.

Goose requires rustc version 1.70.0 or later.

Glossary

Controller

An interface that allows real-time control of a running Goose load test. Goose provides both Telnet and WebSocket controllers for dynamically adjusting test parameters like user count, hatch rate, and runtime during execution.

Coordinated Omission

A phenomenon that occurs in load testing when the measurement system inadvertently excludes the results of requests that were affected by server slowdowns, leading to misleadingly optimistic performance metrics. Goose includes Coordinated Omission Mitigation functionality to detect and correct for this.

Gaggle

Goose's distributed load testing functionality that allows running coordinated load tests across multiple machines. A Gaggle consists of one Manager and multiple Workers. Note: Gaggle support was temporarily removed in Goose 0.17.0.

GooseAttack

A load test defined by one or more Scenarios with one or more Transactions.

GooseConfiguration

A structure that defines all configuration options for a Goose load test, including user count, hatch rate, runtime, host, and various other parameters. Can be set via command line arguments, configuration files, or programmatically.

GooseError

A helper that defines all possible errors returned by Goose. A Transaction returns a TransactionResult, which is either Ok(()) or Err(GooseError).

GooseUser

A thread that repeatedly runs a single scenario for the duration of the load test. For example, when Goose starts, you may use the --users command line option to configure how many GooseUser threads are started. This is not intended to be a 1:1 correlation between GooseUsers and real website users.

Hatch Rate

The rate at which new GooseUsers are launched during the ramp-up phase of a load test, typically specified as users per second.

Request

A single HTTP request, built around one of the standard HTTP verbs (for example, GET or POST).

Scenario

A scenario is a collection of transactions (aka steps) a user would undertake to achieve a specific user journey.

Test Plan

A flexible approach to scheduling load test phases, allowing you to define complex load patterns like gradual ramp-up, sustained load periods, spike testing, and graceful ramp-down. Test plans use the format users,duration;users,duration to specify multiple phases.

Throttle

A mechanism to limit the request rate of individual GooseUsers, helping simulate more realistic user behavior by introducing delays between requests rather than sending requests as fast as possible.

Transaction

A transaction is a collection of one or more requests and any desired validation. For example, this may include loading the front page and all contained static assets, logging into the website, or adding one or more items to a shopping cart. Transactions typically include assertions or expectation validation.

TransactionResult

A Result returned by Transaction functions. A transaction can return Ok(()) on success, or Err(GooseError) on failure.

Weight

A value that controls the frequency with which a Transaction or Scenario runs, relative to the other transactions in the same scenario, or scenarios in the same load test. For example, if one transaction has a weight of 3 and another transaction in the same scenario has a weight of 1, the first transaction will run 3 times as often as the second.
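The 3:1 ratio in the example above can be sketched as a simple weighted allocation. This is not Goose's actual scheduler, just an illustration of how weights translate into relative frequency:

```rust
// Sketch of how weights translate into relative run frequency. This is
// NOT Goose's actual scheduling code; it simply expands weighted items
// into a flat allocation list to show the 3:1 ratio from the example.

/// Expand (name, weight) pairs into a flat allocation list.
fn allocate(weighted: &[(&str, usize)]) -> Vec<String> {
    let mut allocation = Vec::new();
    for (name, weight) in weighted {
        for _ in 0..*weight {
            allocation.push(name.to_string());
        }
    }
    allocation
}

fn main() {
    // A transaction with weight 3 runs 3 times as often as one with weight 1.
    let allocation = allocate(&[("transaction_a", 3), ("transaction_b", 1)]);
    let a_runs = allocation.iter().filter(|n| *n == "transaction_a").count();
    let b_runs = allocation.iter().filter(|n| *n == "transaction_b").count();
    println!("{a_runs}:{b_runs}");
}
```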

Getting Started

This first chapter of the Goose Book provides a high-level overview of writing and running Goose load tests. If you're new to Goose, this is the place to start.

The Importance Of Load Testing

Load testing can help prevent website outages, stress test code changes, and identify bottlenecks. It can also quickly perform functional regression testing. The ability to run the same test repeatedly gives critical insight into the impact of changes to the code and/or systems.

When to Use Goose

Goose is particularly well-suited for:

  • Complex User Workflows: Testing multi-step processes like checkout flows, user registration, or content management workflows
  • API Load Testing: Validating REST APIs, GraphQL endpoints, or microservice interactions under load
  • Performance Regression Testing: Integrating into CI/CD pipelines to catch performance regressions before deployment
  • Capacity Planning: Understanding how your infrastructure scales and where bottlenecks occur
  • Coordinated Omission Detection: Identifying when server slowdowns affect more users than simple metrics suggest

Goose vs Other Load Testing Tools

Unlike tools that focus purely on HTTP request volume, Goose excels at:

  • Stateful Testing: Maintaining sessions, cookies, and authentication across requests
  • Realistic Load Patterns: Simulating actual user behavior rather than just hammering endpoints
  • Developer-Friendly: Written in Rust with type safety and excellent error handling
  • Detailed Analysis: Advanced metrics that reveal hidden performance issues
  • Flexibility: Custom logic, data-driven tests, and complex scenarios

Prerequisites

Before diving into Goose, you should have:

  • Basic Rust Knowledge: Familiarity with Rust syntax, async/await, and error handling
  • HTTP Understanding: Knowledge of HTTP methods, status codes, and web application architecture
  • Testing Mindset: Understanding of what you want to test and what constitutes success

Don't worry if you're new to load testing: Goose's approach will guide you toward writing realistic and valuable tests.

Creating A Load Test

Cargo

Cargo is the Rust package manager. To create a new load test, use Cargo to create a new application (you can name your application anything; we've generically selected loadtest):

$ cargo new loadtest
     Created binary (application) `loadtest` package
$ cd loadtest/

This creates a new directory named loadtest/ containing loadtest/Cargo.toml and loadtest/src/main.rs. Edit Cargo.toml and add Goose and Tokio under the dependencies heading:

[dependencies]
goose = "^0.18"
tokio = "^1"

At this point it's possible to compile all dependencies, though the resulting binary only displays "Hello, world!":

$ cargo run
    Updating crates.io index
  Downloaded goose v0.18.1
      ...
   Compiling goose v0.18.1
   Compiling loadtest v0.1.0 (/home/jandrews/devel/rust/loadtest)
    Finished dev [unoptimized + debuginfo] target(s) in 52.97s
     Running `target/debug/loadtest`
Hello, world!

Creating the load test

To create an actual load test, you first have to add the following boilerplate to the top of src/main.rs to make Goose's functionality available to your code:

use goose::prelude::*;

Note: Using the above prelude automatically adds the following use statements necessary when writing a load test, so you don't need to manually add all of them:

use crate::config::{GooseDefault, GooseDefaultType};
use crate::goose::{
    GooseMethod, GooseRequest, GooseUser, Scenario, Transaction, TransactionError,
    TransactionFunction, TransactionResult,
};
use crate::metrics::{GooseCoordinatedOmissionMitigation, GooseMetrics};
use crate::{scenario, transaction, GooseAttack, GooseError, GooseScheduler};

Then create a new load testing function. For our example we're simply going to load the front page of the website we're load-testing. Goose passes all load testing functions a mutable pointer to a GooseUser object, which is used to track metrics and make web requests. Thanks to the Reqwest library, the Goose client manages things like cookies, headers, and sessions for you. Load testing functions must be declared async, ensuring that your simulated users don't become CPU-locked.

In load test functions you typically do not set the host, and instead configure the host at run time, so you can easily run your load test against different environments without recompiling. Relative paths (not starting with a /) should be used.

The following loadtest_index function simply loads the front page of our web page:

use goose::prelude::*;

async fn loadtest_index(user: &mut GooseUser) -> TransactionResult {
    let _goose_metrics = user.get("").await?;

    Ok(())
}

The function is declared async so that we don't block a CPU-core while loading web pages. All Goose load test functions are passed a mutable reference to a GooseUser object, and return a TransactionResult, which is either an empty Ok(()) on success, or a TransactionError on failure. We use the GooseUser object to make requests; in this case we make a GET request for the front page, specified with an empty path "". The .await frees up the CPU-core while we wait for the web page to respond, and the trailing ? unwraps the response, returning any unexpected errors that may be generated by this request.

When the GET request completes, Goose returns metrics which we store in the _goose_metrics variable. The variable is prefixed with an underscore (_) to tell the compiler we are intentionally not using the results. Finally, after making a single successful request, we return Ok(()) to let Goose know this transaction function completed successfully.

Now we have to tell Goose about our new transaction function. Edit the main() function, setting a return type and replacing the hello world text as follows:

#[tokio::main]
async fn main() -> Result<(), GooseError> {
    GooseAttack::initialize()?
        .register_scenario(scenario!("LoadtestTransactions")
            .register_transaction(transaction!(loadtest_index))
        )
        .execute()
        .await?;

    Ok(())
}

The #[tokio::main] at the beginning of this example is a Tokio macro necessary because Goose is an asynchronous library, allowing (and requiring) us to declare the main() function of our load test application as async.

If you're new to Rust, main()'s return type of Result<(), GooseError> may look strange. It essentially says that main will return nothing (()) on success, and will return a GooseError on failure. This is helpful as several of GooseAttack's methods can fail, returning an error. In our example, initialize() and execute() each may fail. The ? that follows the method's name tells our program to exit and return an error on failure, otherwise continue on. Note that the .execute() method is asynchronous, so it must be followed with .await, and as it can return an error it also has a ?. The final line, Ok(()) returns the empty result expected on success.

And that's it, you've created your first load test! Read on to see how to run it and what it does.

Validating Requests

Goose Eggs

The goose-eggs crate provides helper functions for writing Goose load tests.

To leverage Goose Eggs when writing your load test, include the crate in the dependency section of your Cargo.toml:

[dependencies]
goose-eggs = "0.4"

For example, to use the Goose Eggs validation functions, bring the Validate structure and either the validate_page or the validate_and_load_static_assets function into scope:

use goose_eggs::{validate_and_load_static_assets, Validate};

Now, it is simple to verify that we received a 200 HTTP response status code, and that the text Gander appeared somewhere on the page as expected:

let goose = user.get("/goose/").await?;

let validate = Validate::builder()
    .status(200)
    .text("Gander")
    .build();

validate_and_load_static_assets(user, goose, &validate).await?;

Whether validation passed or failed will be visible in the Goose metrics when the load test finishes. You can enable the debug log to gain more insight into failures.

Read the goose-eggs documentation to learn about other helpful functions useful in writing load tests, as well as other validation helpers, such as headers, header values, the page title, and whether the request was redirected.

Running A Load Test

We will use Cargo to run our example load test application. It's best to get in the habit of setting the --release option whenever compiling or running load tests.

$ cargo run --release
    Finished release [optimized] target(s) in 0.06s
     Running `target/release/loadtest`
07:08:43 [INFO] Output verbosity level: INFO
07:08:43 [INFO] Logfile verbosity level: WARN
07:08:43 [INFO] users defaulted to number of CPUs = 10
Error: InvalidOption { option: "--host", value: "", detail: "A host must be defined via the --host option, the GooseAttack.set_default() function, or the Scenario.set_host() function (no host defined for LoadtestTransactions)." }

The load test fails with an error as it hasn't been told the host you want to load test.

So, let's try again, this time passing in the --host flag. We will also add the --report-file flag with a .html file extension, which will generate an HTML report, and --no-reset-metrics to preserve all information including the load test startup. The same information will also be printed to the command line (without graphs). After running for a few seconds, press ctrl-c one time to gracefully stop the load test:

% cargo run --release -- --host http://umami.ddev.site --report-file=report.html --no-reset-metrics
    Finished release [optimized] target(s) in 0.06s
     Running `target/release/loadtest --host 'http://umami.ddev.site' --report-file=report.html --no-reset-metrics`
08:53:48 [INFO] Output verbosity level: INFO
08:53:48 [INFO] Logfile verbosity level: WARN
08:53:48 [INFO] users defaulted to number of CPUs = 10
08:53:48 [INFO] no_reset_metrics = true
08:53:48 [INFO] report_file = report.html
08:53:48 [INFO] global host configured: http://umami.ddev.site
08:53:48 [INFO] allocating transactions and scenarios with RoundRobin scheduler
08:53:48 [INFO] initializing 10 user states...
08:53:48 [INFO] Telnet controller listening on: 0.0.0.0:5116
08:53:48 [INFO] WebSocket controller listening on: 0.0.0.0:5117
08:53:48 [INFO] entering GooseAttack phase: Increase
08:53:48 [INFO] [user 1]: launching user from LoadtestTransactions
08:53:49 [INFO] [user 2]: launching user from LoadtestTransactions
08:53:50 [INFO] [user 3]: launching user from LoadtestTransactions
08:53:51 [INFO] [user 4]: launching user from LoadtestTransactions
08:53:52 [INFO] [user 5]: launching user from LoadtestTransactions
08:53:53 [INFO] [user 6]: launching user from LoadtestTransactions
08:53:54 [INFO] [user 7]: launching user from LoadtestTransactions
08:53:55 [INFO] [user 8]: launching user from LoadtestTransactions
08:53:56 [INFO] [user 9]: launching user from LoadtestTransactions
08:53:57 [INFO] [user 10]: launching user from LoadtestTransactions
All 10 users hatched.

08:53:58 [INFO] entering GooseAttack phase: Maintain
^C08:54:25 [WARN] caught ctrl-c, stopping...

As of Goose 0.16.0, by default all INFO and higher level log messages are displayed on the console while the load test runs. You can disable these messages with the -q (--quiet) flag, or display low-level debug messages with the -v (--verbose) flag.

HTML report

When the load test finishes shutting down, it displays ASCII metrics on the CLI, and an HTML report named report.html is created in the local directory, as configured above. The graphs and tables found in the HTML report are demonstrated below:

HTML report header section

By default, Goose will hatch 1 GooseUser per second, up to the number of CPU cores available on the server used for load testing. In the above example, the loadtest was run from a laptop with 10 CPU cores, so it took 10 seconds to hatch all users.

By default, after all users are launched Goose flushes all metrics collected during the launching process (we used the --no-reset-metrics flag to disable this behavior), so the summary metrics are collected with all users running. Had we not used --no-reset-metrics, the metrics would have been displayed to the console before being flushed, so the data is not lost.

Request metrics

HTML report request metrics section

The per-request metrics are displayed first. Our single transaction makes a GET request for the empty "" path, so it shows up in the metrics simply as GET. The table in this section displays the total number of requests made (8,490), the average number of requests per second (229.46), and the average number of failed requests per second (0).

Additionally it shows the average time required to load a page (37.85 milliseconds), the minimum time to load a page (12 ms) and the maximum time to load a page (115 ms).

If our load test made multiple requests, the Aggregated line at the bottom of this section would show totals and averages of all requests together. Because we only make a single request, this row is identical to the per-request metrics.

Response time metrics

HTML report response times metrics section

The second section displays the average time required to load a page. The table in this section shows the slowest page load time for a range of percentiles. In our example, within the fastest 50% of page loads, the slowest page loaded in 37 ms. Within the fastest 70% of page loads, the slowest page loaded in 42 ms, etc. The graph, on the other hand, displays the average response time aggregated across all requests.
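The percentile reading described above can be sketched with a few lines of Rust. This is an illustration of how to read the table, not Goose's exact implementation, and the sample data is made up:

```rust
// Sketch of the percentile reading used in the table above: the value at
// the p-th percentile is the slowest response within the fastest p% of
// requests. Illustration only; not Goose's exact implementation.

fn percentile(mut times_ms: Vec<u64>, p: f64) -> u64 {
    times_ms.sort_unstable();
    // Index of the last sample inside the fastest p percent.
    let index = ((times_ms.len() as f64) * p / 100.0).ceil() as usize;
    times_ms[index.saturating_sub(1)]
}

fn main() {
    // Hypothetical response times in milliseconds.
    let times = vec![12, 20, 31, 37, 40, 42, 55, 70, 90, 115];
    println!("50th percentile: {} ms", percentile(times.clone(), 50.0));
    println!("90th percentile: {} ms", percentile(times, 90.0));
}
```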

Status code metrics

HTML report status code metrics section

The third section is a table showing all response codes received for each request. In this simple example, all 8,490 requests received a 200 OK response.

Transaction metrics

HTML report transaction metrics section

Next come the per-transaction metrics, starting with the name of our Scenario, LoadtestTransactions. Individual transactions in the Scenario are then listed in the order they are defined in our load test. We did not name our transaction, so it simply shows up as 0.0. All defined transactions will be listed here, even if they did not run, so this can be useful to confirm everything in your load test is running as expected. Comparing the transaction metrics collected for 0.0 to the per-request metrics collected for GET /, you can see that they are the same. This is because in our simple example, our single transaction only makes one request.

In real load tests, you'll most likely have multiple scenarios each with multiple transactions, and Goose will show you metrics for each along with an aggregate of them all together.

Scenario metrics

Per-scenario metrics follow the per-transaction metrics. This page has not yet been updated to include a proper example of Scenario metrics.

User metrics

HTML report user metrics section

Finally comes a chart showing how many users were running during the load test. You can clearly see the 10 users starting 1 per second at the start of the load test, as well as the final second when users quickly stopped.

Refer to the examples included with Goose for more complicated and useful load test examples.

Run-Time Options

The -h flag will show all run-time configuration options available to Goose load tests. For example, you can pass the -h flag to our example loadtest as follows, cargo run --release -- -h:

Usage: target/release/loadtest [OPTIONS]

Goose is a modern, high-performance, distributed HTTP(S) load testing tool,
written in Rust. Visit https://book.goose.rs/ for more information.

The following runtime options are available when launching a Goose load test:

Optional arguments:
  -h, --help                  Displays this help
  -V, --version               Prints version information
  -l, --list                  Lists all transactions and exits

  -H, --host HOST             Defines host to load test (ie http://10.21.32.33)
  -u, --users USERS           Sets concurrent users (default: number of CPUs)
  -r, --hatch-rate RATE       Sets per-second user hatch rate (default: 1)
  -s, --startup-time TIME     Starts users for up to (30s, 20m, 3h, 1h30m, etc)
  -t, --run-time TIME         Stops load test after (30s, 20m, 3h, 1h30m, etc)
  -G, --goose-log NAME        Enables Goose log file and sets name
  -g, --log-level             Increases Goose log level (-g, -gg, etc)
  -q, --quiet                 Decreases Goose verbosity (-q, -qq, etc)
  -v, --verbose               Increases Goose verbosity (-v, -vv, etc)

Metrics:
  --running-metrics TIME      How often to optionally print running metrics
  --no-reset-metrics          Doesn't reset metrics after all users have started
  --no-metrics                Doesn't track metrics
  --no-transaction-metrics    Doesn't track transaction metrics
  --no-scenario-metrics       Doesn't track scenario metrics
  --no-print-metrics          Doesn't display metrics at end of load test
  --no-error-summary          Doesn't display an error summary
  --report-file NAME          Create reports, can be used multiple times (supports .html, .htm, .md, .json)
  --no-granular-report        Disable granular graphs in report file
  -R, --request-log NAME      Sets request log file name
  --request-format FORMAT     Sets request log format (csv, json, raw, pretty)
  --request-body              Include the request body in the request log
  -T, --transaction-log NAME  Sets transaction log file name
  --transaction-format FORMAT Sets log format (csv, json, raw, pretty)
  -S, --scenario-log NAME     Sets scenario log file name
  --scenario-format FORMAT    Sets log format (csv, json, raw, pretty)
  -E, --error-log NAME        Sets error log file name
  --error-format FORMAT       Sets error log format (csv, json, raw, pretty)
  -D, --debug-log NAME        Sets debug log file name
  --debug-format FORMAT       Sets debug log format (csv, json, raw, pretty)
  --no-debug-body             Do not include the response body in the debug log
  --no-status-codes           Do not track status code metrics

Advanced:
  --test-plan "TESTPLAN"      Defines a more complex test plan ("10,60s;0,30s")
  --iterations ITERATIONS     Sets how many times to run scenarios then exit
  --scenarios "SCENARIO"      Limits load test to only specified scenarios
  --scenarios-list            Lists all scenarios and exits
  --no-telnet                 Doesn't enable telnet Controller
  --telnet-host HOST          Sets telnet Controller host (default: 0.0.0.0)
  --telnet-port PORT          Sets telnet Controller TCP port (default: 5116)
  --no-websocket              Doesn't enable WebSocket Controller
  --websocket-host HOST       Sets WebSocket Controller host (default: 0.0.0.0)
  --websocket-port PORT       Sets WebSocket Controller TCP port (default: 5117)
  --no-autostart              Doesn't automatically start load test
  --no-gzip                   Doesn't set the gzip Accept-Encoding header
  --timeout VALUE             Sets per-request timeout, in seconds (default: 60)
  --co-mitigation STRATEGY    Sets coordinated omission mitigation strategy
  --throttle-requests VALUE   Sets maximum requests per second
  --sticky-follow             Follows base_url redirect with subsequent requests
  --accept-invalid-certs      Disables validation of https certificates

All of the above configuration options are defined in the developer documentation.

Common Run Time Options

As seen on the previous page, Goose has a lot of run time options, which can be overwhelming. The following are a few of the more common and more important options to be familiar with. In these examples we only demonstrate one option at a time, but it's generally useful to combine many options.

Host to load test

Load test plans typically contain relative paths, and so Goose must be told which host to run the load test against in order for it to start. This allows a single load test plan to be used for testing different environments, for example "http://local.example.com", "https://qa.example.com", and "https://www.example.com".

Host example

Load test the https://www.example.com domain.

cargo run --release -- -H https://www.example.com

How many users to simulate

By default, Goose will launch one user per available CPU core. Often you will want to simulate considerably more users than this, and this can be done by setting the --users run time option.

(Alternatively, you can use --test-plan to build both simple and more complex traffic patterns that can include a varying number of users.)

Users example

Launch 1,000 GooseUsers.

cargo run --release -- -u 1000

Controlling how long it takes Goose to launch all users

There are several ways to configure how long Goose will take to launch all configured GooseUsers. For starters, you can use either --hatch-rate or --startup-time, but not both together. Alternatively, you can use --test-plan to build both simple and more complex traffic patterns that can include varying launch rates.

Specifying the hatch rate

By default, Goose starts one GooseUser per second. So if you configure --users to 10 it will take ten seconds to fully start the load test. If you set --hatch-rate 5 then Goose will start 5 users every second, taking two seconds to start up. If you set --hatch-rate 0.5 then Goose will start 1 user every 2 seconds, taking twenty seconds to start all 10 users.

(The configured hatch rate is a best-effort limit: Goose will not start users faster than this, but there is no guarantee that your load test server is capable of starting users as fast as you configure.)

Hatch rate example

Launch one user every two seconds.

cargo run --release -- -r .5

Specifying the total startup time

Alternatively, you can tell Goose how long you'd like it to take to start all GooseUsers. So, if you configure --users to 10 and set --startup-time 10 it will launch 1 user every second. If you set --startup-time 1m it will start 1 user every 6 seconds, starting all users over one minute. And if you set --startup-time 2s it will launch five users per second, launching all users in two seconds.

(The configured startup time is a best-effort limit: Goose will not start users faster than this, but there is no guarantee that your load test server is capable of starting users as fast as you configure.)
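The relationship between --users, --hatch-rate, and --startup-time is simple arithmetic, sketched below. The helper names are illustrative, not part of Goose's API:

```rust
// The relationship between --users, --hatch-rate and --startup-time is
// simple arithmetic. These helper names are illustrative only; they are
// not part of the Goose API.

/// Seconds needed to start `users` at `hatch_rate` users per second.
fn startup_seconds(users: u32, hatch_rate: f64) -> f64 {
    users as f64 / hatch_rate
}

/// Users started per second when `users` start over `startup_seconds`.
fn hatch_rate(users: u32, startup_seconds: f64) -> f64 {
    users as f64 / startup_seconds
}

fn main() {
    // --users 10 --hatch-rate 0.5 takes twenty seconds to start everyone.
    println!("{} seconds", startup_seconds(10, 0.5));
    // --users 10 --startup-time 1m starts one user every 6 seconds.
    println!("{} users/second", hatch_rate(10, 60.0));
}
```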

Startup time example

Launch all users in 5 seconds.

cargo run --release -- -s 5

Specifying how long the load test will run

The --run-time option is not affected by how long Goose takes to start up. Thus, if you configure a load test with --users 100 --startup-time 30m --run-time 5m Goose will run for a total of 35 minutes, first ramping up for 30 minutes and then running at full load for 5 minutes. If you want Goose to exit immediately after all users start, you can set a very small run time, for example --users 100 --hatch-rate .25 --run-time 1s.

Alternatively, you can use --test-plan to build both simple and more complex traffic patterns and can define how long the load test runs.

A final option is to instead use the --iterations option to configure how many times GooseUsers will run through their assigned Scenario before exiting.

If you do not configure a run time, Goose will run until it's canceled with ctrl-c.

Run time example

Run the load test for 30 minutes.

cargo run --release -- -t 30m

Iterations example

Each GooseUser runs its assigned Scenario 5 times, however long that takes, and then stops.

cargo run --release -- --iterations 5

Writing An HTML-formatted Report

By default, Goose displays text-formatted metrics when a load test finishes.

It can also optionally write one or more reports in HTML, Markdown, or JSON format. For that, you need to provide one or more --report-file <FILE> run-time options. All requested reports will be written.

The value of <FILE> is an absolute or relative path to the report file to generate. The file extension determines the type of report to write. Any file that already exists at the specified path will be overwritten.

For more information, see Metrics Reports.

Requests per second graph

HTML report example

Write an HTML-formatted report to report.html when the load test finishes.

cargo run --release -- --report-file report.html

HTML & Markdown report example

Write a Markdown and an HTML-formatted report when the load test finishes.

cargo run --release -- --report-file report.md --report-file report.html

Test Plan

A load test that ramps up to full strength and then runs for a set amount of time can be configured by combining the --startup-time or --hatch-rate options together with the --users and --run-time options. For more complex load patterns you must instead use the --test-plan option.

A test plan is defined as a series of numerical pairs that each defines a number of users, and the amount of time to ramp to this number of users. For example, 10,60s means "launch 10 users over 60 seconds". By stringing together multiple pairs separated by a semicolon you can define more complex test plans. For example, 10,1m;10,5m;0,0s means "launch 10 users over 1 minute, continue with 10 users for 5 minutes, then shut down the load test as quickly as possible".

The amount of time can be defined in seconds (e.g. 10,5s), minutes (e.g. 10,15m) or hours (e.g. 10,1h). The "s/m/h" notation is optional, and seconds are assumed if it is omitted. However, the explicit notation is recommended, as it allows Goose to detect mistakes in the plan.

Simple Example

The following command tells Goose to start 10 users over 60 seconds and then to run for 5 minutes before shutting down:

$ cargo run --release -- -H http://local.dev/ --startup-time 1m --users 10 --run-time 5m --no-reset-metrics

The exact same behavior can be defined with the following test plan:

$ cargo run --release -- -H http://local.dev/ --test-plan "10,1m;10,5m;0,0s"

Simple test plan

Ramp Down Example

Goose will stop a load test as quickly as it can when the specified --run-time completes. To instead configure a load test to ramp down slowly, use a test plan. In the following example, Goose starts 1,000 users over 2 minutes and then slowly stops them over 500 seconds (stopping 2 users per second):

$ cargo run --release -- -H http://local.dev/ --test-plan "1000,2m;0,500s"

Ramp down test plan
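The 2-users-per-second figure above is simple linear interpolation over the ramp step. As an illustration (this `users_at` helper is hypothetical, not part of the Goose API), the number of active users partway through a ramp step can be computed as:

```rust
/// Linearly interpolate the active user count partway through a ramp
/// step, mirroring how a "1000,2m;0,500s" plan ramps down.
/// Hypothetical helper, not part of the Goose API.
fn users_at(from_users: u64, to_users: u64, step_ms: u64, elapsed_ms: u64) -> u64 {
    if elapsed_ms >= step_ms {
        return to_users;
    }
    if to_users >= from_users {
        // Ramping up.
        from_users + (to_users - from_users) * elapsed_ms / step_ms
    } else {
        // Ramping down.
        from_users - (from_users - to_users) * elapsed_ms / step_ms
    }
}
```

For the "0,500s" step above, `users_at(1000, 0, 500_000, 1_000)` returns 998: two users stopped after the first second.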

Load Spike Example

A test plan can also add load spikes to otherwise steady load. In the following example, Goose starts 500 users over 5 minutes and maintains them, with a couple of traffic spikes to 2,500 users:

$ cargo run --release -- -H http://local.dev/ --test-plan "500,5m;500,5m;2500,45s;500,45s;500,5m;2500,45s;500,45s;500,5m;0,0s"

Load spike test plan

Internals

Internally, Goose converts the test plan into a vector of usize tuples, Vec<(usize, usize)>, where the first integer is the number of users to run and the second is the time in milliseconds. You can see the internal representation when you start a load test, for example:

% cargo run --release --example simple -- --no-autostart --test-plan "100,30s;100,1h" | grep test_plan
13:54:35 [INFO] test_plan = GooseTestPlan { test_plan: [(100, 30000), (100, 3600000)] }
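The conversion from a test plan string to this internal representation can be sketched as follows (an illustrative re-implementation, not Goose's actual parser; error handling is reduced to Option for brevity):

```rust
/// Sketch of parsing a test plan string into the Vec<(usize, usize)>
/// representation logged by Goose: (users, milliseconds).
fn parse_test_plan(plan: &str) -> Option<Vec<(usize, usize)>> {
    plan.split(';')
        .map(|step| {
            let (users, time) = step.split_once(',')?;
            let users: usize = users.trim().parse().ok()?;
            let time = time.trim();
            // Seconds are assumed when the s/m/h suffix is omitted.
            let (value, multiplier) = match time.chars().last()? {
                's' => (&time[..time.len() - 1], 1_000),
                'm' => (&time[..time.len() - 1], 60_000),
                'h' => (&time[..time.len() - 1], 3_600_000),
                _ => (time, 1_000),
            };
            let ms = value.trim().parse::<usize>().ok()? * multiplier;
            Some((users, ms))
        })
        .collect()
}
```

For example, `parse_test_plan("100,30s;100,1h")` yields the `[(100, 30000), (100, 3600000)]` shown in the log output above.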

Throttling Requests

By default, Goose will generate as much load as it can. If this is not desirable, the throttle can optionally limit the maximum number of requests per second made during a load test. This can be helpful to ensure consistency when running a load test from multiple servers with different available resources.

The throttle is specified as an integer and imposes a maximum, not a minimum, number of requests per second.

Example

In this example, Goose will launch 100 GooseUser threads, but the throttle will prevent them from generating a combined total of more than 5 requests per second.

$ cargo run --release -- -H http://local.dev/ -u100 -r20 --throttle-requests 5

Throttled load test
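Conceptually, a throttle paces requests so they never exceed the configured rate. The following standalone sketch illustrates the idea with a single-threaded pacer; Goose's real throttle is shared across all GooseUser threads and is implemented differently:

```rust
use std::time::{Duration, Instant};

/// Standalone sketch of pacing requests to a maximum rate, in the spirit
/// of --throttle-requests. Illustration only, not Goose's implementation.
struct Throttle {
    interval: Duration,
    next_allowed: Instant,
}

impl Throttle {
    fn new(max_requests_per_second: u64) -> Self {
        Throttle {
            // Minimum spacing between requests, e.g. 5/s => 200ms apart.
            interval: Duration::from_millis(1_000 / max_requests_per_second),
            next_allowed: Instant::now(),
        }
    }

    /// Block until the next request is allowed to proceed.
    fn wait(&mut self) {
        let now = Instant::now();
        if now < self.next_allowed {
            std::thread::sleep(self.next_allowed - now);
        }
        // Schedule the next slot one interval after this one.
        self.next_allowed = self.next_allowed.max(now) + self.interval;
    }
}
```

With a limit of 5 requests per second, six calls to `wait()` take at least one second in total, regardless of how fast the callers are.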

Limiting Which Scenarios Run

It can often be useful to run only a subset of the Scenarios defined by a load test. Instead of commenting them out in the source code and recompiling, the --scenarios run-time option allows you to dynamically control which Scenarios are running.

Listing Scenarios By Machine Name

Scenario names are not guaranteed to be unique, so when filtering which Scenarios run you must use each scenario's machine name. For example, with the Umami example, enable the --scenarios-list flag:

% cargo run --release --example umami -- --scenarios-list
    Finished release [optimized] target(s) in 0.15s
     Running `target/release/examples/umami --scenarios-list`
05:24:03 [INFO] Output verbosity level: INFO
05:24:03 [INFO] Logfile verbosity level: WARN
05:24:03 [INFO] users defaulted to number of CPUs = 10
05:24:03 [INFO] iterations = 0
Scenarios:
 - adminuser: ("Admin user")
 - anonymousenglishuser: ("Anonymous English user")
 - anonymousspanishuser: ("Anonymous Spanish user")

What Is A Machine Name: You can name your Scenarios pretty much anything you want in your load test, even using the identical name for multiple Scenarios. A machine name ensures that you can still identify each Scenario uniquely, without any special characters that can be difficult or insecure to pass through the command line. A machine name is made up of only the lowercased alphanumeric characters found in your Scenario's full name, optionally with a number appended to differentiate between multiple Scenarios that would otherwise have the same name.

In the following example, we have three very similarly named Scenarios. One simply has an extra space between words, and another has an airplane emoji in the name. Both the extra space and the airplane symbol are stripped from the machine names as they are not alphanumeric, and _1 and _2 are appended to differentiate:

Scenarios:
- loadtesttransactions: ("LoadtestTransactions")
- loadtesttransactions_1: ("Loadtest Transactions")
- loadtesttransactions_2: ("LoadtestTransactions ✈️")
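The machine-name rules described above can be sketched as follows (an illustrative re-implementation, assuming ASCII-alphanumeric filtering and lowercasing; not Goose's actual code):

```rust
use std::collections::HashMap;

/// Sketch of deriving unique machine names from scenario names: keep
/// ASCII alphanumerics, lowercase them, and append _1, _2, ... to later
/// duplicates. Illustration only, not Goose's implementation.
fn machine_names(scenario_names: &[&str]) -> Vec<String> {
    let mut seen: HashMap<String, usize> = HashMap::new();
    scenario_names
        .iter()
        .map(|name| {
            let base: String = name
                .chars()
                .filter(|c| c.is_ascii_alphanumeric())
                .collect::<String>()
                .to_lowercase();
            let count = seen.entry(base.clone()).or_insert(0);
            let unique = if *count == 0 {
                base
            } else {
                // Differentiate duplicates with a numeric suffix.
                format!("{}_{}", base, count)
            };
            *count += 1;
            unique
        })
        .collect()
}
```

Applied to the three names above, this yields exactly the loadtesttransactions, loadtesttransactions_1, and loadtesttransactions_2 machine names shown.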

Running Scenarios By Machine Name

Any subset of the above scenarios can be run by passing a comma-separated list of machine names with the --scenarios run-time option. Goose matches the text you type against any machine name that contains it, so you do not have to type the full name. For example, to run only the two anonymous Scenarios, you could add --scenarios anon:

% cargo run --release --example umami -- --hatch-rate 10 --scenarios anon
    Finished release [optimized] target(s) in 0.15s
     Running `target/release/examples/umami --hatch-rate 10 --scenarios anon`
05:50:17 [INFO] Output verbosity level: INFO
05:50:17 [INFO] Logfile verbosity level: WARN
05:50:17 [INFO] users defaulted to number of CPUs = 10
05:50:17 [INFO] hatch_rate = 10
05:50:17 [INFO] iterations = 0
05:50:17 [INFO] scenarios = Scenarios { active: ["anon"] }
05:50:17 [INFO] host for Anonymous English user configured: https://drupal-9.ddev.site/
05:50:17 [INFO] host for Anonymous Spanish user configured: https://drupal-9.ddev.site/
05:50:17 [INFO] host for Admin user configured: https://drupal-9.ddev.site/
05:50:17 [INFO] allocating transactions and scenarios with RoundRobin scheduler
05:50:17 [INFO] initializing 10 user states...
05:50:17 [INFO] WebSocket controller listening on: 0.0.0.0:5117
05:50:17 [INFO] Telnet controller listening on: 0.0.0.0:5116
05:50:17 [INFO] entering GooseAttack phase: Increase
05:50:17 [INFO] launching user 1 from Anonymous Spanish user...
05:50:18 [INFO] launching user 2 from Anonymous English user...
05:50:18 [INFO] launching user 3 from Anonymous Spanish user...
05:50:18 [INFO] launching user 4 from Anonymous English user...
05:50:18 [INFO] launching user 5 from Anonymous Spanish user...
05:50:18 [INFO] launching user 6 from Anonymous English user...
05:50:18 [INFO] launching user 7 from Anonymous Spanish user...
^C05:50:18 [WARN] caught ctrl-c, stopping...
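The matching rule described above, where a scenario runs if its machine name contains any of the typed filters, can be sketched as (hypothetical helper, not Goose's code):

```rust
/// Return the machine names that match at least one filter, where a
/// match means the machine name contains the filter text as a substring.
/// Illustration only, not Goose's implementation.
fn active_scenarios<'a>(machine_names: &[&'a str], filters: &[&str]) -> Vec<&'a str> {
    machine_names
        .iter()
        .filter(|name| filters.iter().any(|f| name.contains(f)))
        .copied()
        .collect()
}
```

Against the Umami machine names, the filter "anon" selects both anonymous Scenarios and skips adminuser, matching the launch log above.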

Or, to run only the "Anonymous Spanish user" and "Admin user" Scenarios, you could add --scenarios "spanish,admin":

% cargo run --release --example umami -- --hatch-rate 10 --scenarios "spanish,admin"
   Compiling goose v0.18.1 (/Users/jandrews/devel/goose)
    Finished release [optimized] target(s) in 11.79s
     Running `target/release/examples/umami --hatch-rate 10 --scenarios spanish,admin`
05:53:45 [INFO] Output verbosity level: INFO
05:53:45 [INFO] Logfile verbosity level: WARN
05:53:45 [INFO] users defaulted to number of CPUs = 10
05:53:45 [INFO] hatch_rate = 10
05:53:45 [INFO] iterations = 0
05:53:45 [INFO] scenarios = Scenarios { active: ["spanish", "admin"] }
05:53:45 [INFO] host for Anonymous English user configured: https://drupal-9.ddev.site/
05:53:45 [INFO] host for Anonymous Spanish user configured: https://drupal-9.ddev.site/
05:53:45 [INFO] host for Admin user configured: https://drupal-9.ddev.site/
05:53:45 [INFO] allocating transactions and scenarios with RoundRobin scheduler
05:53:45 [INFO] initializing 10 user states...
05:53:45 [INFO] Telnet controller listening on: 0.0.0.0:5116
05:53:45 [INFO] WebSocket controller listening on: 0.0.0.0:5117
05:53:45 [INFO] entering GooseAttack phase: Increase
05:53:45 [INFO] launching user 1 from Anonymous Spanish user...
05:53:45 [INFO] launching user 2 from Admin user...
05:53:45 [INFO] launching user 3 from Anonymous Spanish user...
05:53:45 [INFO] launching user 4 from Anonymous Spanish user...
05:53:45 [INFO] launching user 5 from Anonymous Spanish user...
05:53:45 [INFO] launching user 6 from Anonymous Spanish user...
05:53:45 [INFO] launching user 7 from Anonymous Spanish user...
05:53:45 [INFO] launching user 8 from Anonymous Spanish user...
05:53:45 [INFO] launching user 9 from Anonymous Spanish user...
05:53:46 [INFO] launching user 10 from Anonymous Spanish user...
^C05:53:46 [WARN] caught ctrl-c, stopping...

When the load test completes, you can refer to the Scenario metrics to confirm which Scenarios were enabled, and which were not.

Custom Run Time Options

It can sometimes be necessary to add custom run-time options to your load test. As Goose "owns" the command line, adding another option with gumdrop (used by Goose) or another command line parser can be tricky, as Goose will throw an error if it receives an unexpected command line option. There are two ways to work around this.

Environment Variables

One option is to use environment variables. An example of this can be found in the Umami example which uses environment variables to allow the configuration of a custom username and password.

Alternatively, you can use this method to set configurable custom defaults. The earlier example can be enhanced to use an environment variable to set a custom default hostname:

use goose::prelude::*;

async fn loadtest_index(user: &mut GooseUser) -> TransactionResult {
    let _goose_metrics = user.get("").await?;

    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), GooseError> {
    // Get optional custom default hostname from `HOST` environment variable.
    let custom_host = match std::env::var("HOST") {
        Ok(host) => host,
        Err(_) => "".to_string(),
    };

    GooseAttack::initialize()?
        .register_scenario(scenario!("LoadtestTransactions")
            .register_transaction(transaction!(loadtest_index))
        )
        // Set optional custom default hostname.
        .set_default(GooseDefault::Host, custom_host.as_str())?
        .execute()
        .await?;

    Ok(())
}

This can now be used to set a custom default for the scenario. In this example, with no --host set, Goose executes the load test against the hostname defined in HOST:

% HOST="https://local.dev/" cargo run --release                  
    Finished release [optimized] target(s) in 0.07s
     Running `target/release/loadtest`
07:28:20 [INFO] Output verbosity level: INFO
07:28:20 [INFO] Logfile verbosity level: WARN
07:28:20 [INFO] users defaulted to number of CPUs = 10
07:28:20 [INFO] iterations = 0
07:28:20 [INFO] host for LoadtestTransactions configured: https://local.dev/

It's still possible to override this custom default from the command line with standard Goose options. For example, here the load test runs against the hostname configured by the --host option:

% HOST="http://local.dev/" cargo run --release -- --host https://example.com/
    Finished release [optimized] target(s) in 0.07s
     Running `target/release/loadtest --host 'https://example.com/'`
07:32:36 [INFO] Output verbosity level: INFO
07:32:36 [INFO] Logfile verbosity level: WARN
07:32:36 [INFO] users defaulted to number of CPUs = 10
07:32:36 [INFO] iterations = 0
07:32:36 [INFO] global host configured: https://example.com/

If the HOST variable and the --host option are not set, Goose will display the expected error:

% cargo run --release
     Running `target/release/loadtest`
07:07:45 [INFO] Output verbosity level: INFO
07:07:45 [INFO] Logfile verbosity level: WARN
07:07:45 [INFO] users defaulted to number of CPUs = 10
07:07:45 [INFO] iterations = 0
Error: InvalidOption { option: "--host", value: "", detail: "A host must be defined via the --host option, the GooseAttack.set_default() function, or the Scenario.set_host() function (no host defined for LoadtestTransactions)." }

Command Line Arguments

If you really need custom command line arguments, there is a way to keep Goose from throwing an error due to unexpected arguments: instead of calling GooseAttack::initialize(), use GooseAttack::initialize_with_config(). This method does not parse arguments from the command line; it instead takes a GooseConfiguration value as a parameter. Since this type has quite a lot of configuration options, some of them private fields, currently the only way to obtain an instance is via the Default trait: GooseConfiguration::default().

Note that by initializing the GooseAttack in this way you prevent Goose from reading command line arguments, so if you want to pass the arguments that Goose allows, you will need to parse them yourself and set them in the GooseConfiguration instance. In particular, the --host parameter is mandatory, so don't forget to set it in the configuration somehow.

The example below should illustrate these points:

use goose::config::GooseConfiguration;
use goose::prelude::*;

async fn loadtest_index(user: &mut GooseUser) -> TransactionResult {
    let _goose_metrics = user.get("").await?;

    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), GooseError> {
    // here we could be using a crate such as `clap` to parse CLI arguments:
    let opt = MyCustomConfig::parse();

    let mut config = GooseConfiguration::default();

    // we added a `host` field to our custom argument parser that matches
    // the `host` field used by Goose
    config.host = opt.host;

    // ... here you should do the same for all the other command line parameters
    // offered by Goose that you care about, otherwise they will not be taken
    // into account.

    // Initialize the `GooseAttack` using the `GooseConfiguration`:
    GooseAttack::initialize_with_config(config)?
        .register_scenario(
            scenario!("User")
                .register_transaction(transaction!(loadtest_index))
        )
        .execute()
        .await?;

    Ok(())
}

Assuming that MyCustomConfig has a my_custom_arg field, the program above can be invoked with a command such as:

cargo run -- --host https://localhost:8080 --my-custom-arg 42
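Since MyCustomConfig is only a placeholder for your own parser, here is a minimal standalone sketch of extracting --host and a custom --my-custom-arg from an argument list by hand, without any third-party crate; the option names and types are illustrative:

```rust
/// Pull --host and a custom --my-custom-arg out of an argument list, so
/// the Goose-relevant values can then be copied into a GooseConfiguration.
/// Hypothetical option names; illustration only.
fn parse_args(args: &[String]) -> (Option<String>, Option<u64>) {
    let mut host = None;
    let mut my_custom_arg = None;
    let mut iter = args.iter();
    while let Some(arg) = iter.next() {
        match arg.as_str() {
            // Each option consumes the following argument as its value.
            "--host" => host = iter.next().cloned(),
            "--my-custom-arg" => {
                my_custom_arg = iter.next().and_then(|v| v.parse().ok())
            }
            // Unknown arguments are ignored rather than rejected.
            _ => {}
        }
    }
    (host, my_custom_arg)
}
```

The `host` value returned here is what you would assign to `config.host` before calling GooseAttack::initialize_with_config(config).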

Metrics

Here's sample output generated when running a loadtest, in this case the Umami example that comes with Goose.

In this case, the Drupal Umami demo was installed in a local container. The following command was used to configure Goose and run the load test. The -u9 option tells Goose to spin up 9 users. The -r3 option tells Goose to hatch 3 users per second. The -t1m option tells Goose to run the load test for 1 minute, or 60 seconds. The --no-reset-metrics flag tells Goose to include all metrics, instead of the default behavior of flushing the metrics collected during start up. And finally, --report-file report.html tells Goose to generate an HTML-formatted report named report.html once the load test finishes.

ASCII metrics

% cargo run --release --example umami -- --host http://umami.ddev.site/ -u9 -r3 -t1m --no-reset-metrics --report-file report.html
   Compiling goose v0.18.1 (~/goose)
    Finished release [optimized] target(s) in 11.88s
     Running `target/release/examples/umami --host 'http://umami.ddev.site/' -u9 -r3 -t1m --no-reset-metrics --report-file report.html`
05:09:05 [INFO] Output verbosity level: INFO
05:09:05 [INFO] Logfile verbosity level: WARN
05:09:05 [INFO] users = 9
05:09:05 [INFO] run_time = 60
05:09:05 [INFO] hatch_rate = 3
05:09:05 [INFO] no_reset_metrics = true
05:09:05 [INFO] report_file = report.html
05:09:05 [INFO] iterations = 0
05:09:05 [INFO] global host configured: http://umami.ddev.site/
05:09:05 [INFO] allocating transactions and scenarios with RoundRobin scheduler
05:09:05 [INFO] initializing 9 user states...
05:09:05 [INFO] Telnet controller listening on: 0.0.0.0:5116
05:09:05 [INFO] WebSocket controller listening on: 0.0.0.0:5117
05:09:05 [INFO] entering GooseAttack phase: Increase
05:09:05 [INFO] [user 1]: launching user from Anonymous Spanish user
05:09:05 [INFO] [user 2]: launching user from Anonymous English user
05:09:05 [INFO] [user 3]: launching user from Anonymous Spanish user
05:09:06 [INFO] [user 4]: launching user from Anonymous English user
05:09:06 [INFO] [user 5]: launching user from Anonymous Spanish user
05:09:06 [INFO] [user 6]: launching user from Anonymous English user
05:09:07 [INFO] [user 7]: launching user from Admin user
05:09:07 [INFO] [user 8]: launching user from Anonymous Spanish user
05:09:07 [INFO] [user 9]: launching user from Anonymous English user
All 9 users hatched.

05:09:08 [INFO] entering GooseAttack phase: Maintain
05:10:08 [INFO] entering GooseAttack phase: Decrease
05:10:08 [INFO] [user 2]: exiting user from Anonymous English user
05:10:08 [INFO] [user 3]: exiting user from Anonymous Spanish user
05:10:08 [INFO] [user 6]: exiting user from Anonymous English user
05:10:08 [INFO] [user 8]: exiting user from Anonymous Spanish user
05:10:08 [INFO] [user 4]: exiting user from Anonymous English user
05:10:08 [INFO] [user 7]: exiting user from Admin user
05:10:08 [INFO] [user 1]: exiting user from Anonymous Spanish user
05:10:08 [INFO] [user 9]: exiting user from Anonymous English user
05:10:08 [INFO] [user 5]: exiting user from Anonymous Spanish user
05:10:08 [INFO] wrote html report file to: report.html
05:10:08 [INFO] entering GooseAttack phase: Shutdown
05:10:08 [INFO] printing final metrics after 63 seconds...

 === PER SCENARIO METRICS ===
 ------------------------------------------------------------------------------
 Name                     |  # users |  # times run | scenarios/s | iterations
 ------------------------------------------------------------------------------
 1: Anonymous English u.. |        4 |            8 |        0.13 |       2.00
 2: Anonymous Spanish u.. |        4 |            8 |        0.13 |       2.00
 3: Admin user            |        1 |            1 |        0.02 |       1.00
 -------------------------+----------+--------------+-------------+------------
 Aggregated               |        9 |           17 |        0.27 |       1.89
 ------------------------------------------------------------------------------
 Name                     |    Avg (ms) |        Min |         Max |     Median
 ------------------------------------------------------------------------------
   1: Anonymous English.. |       25251 |     19,488 |      31,308 |     19,488
   2: Anonymous Spanish.. |       24394 |     20,954 |      27,821 |     20,954
   3: Admin user          |       32431 |     32,431 |      32,431 |     32,431
 -------------------------+-------------+------------+-------------+-----------
 Aggregated               |       25270 |     19,488 |      32,431 |     19,488

 === PER TRANSACTION METRICS ===
 ------------------------------------------------------------------------------
 Name                     |   # times run |        # fails |  trans/s |  fail/s
 ------------------------------------------------------------------------------
 1: Anonymous English user
   1: anon /              |            21 |         0 (0%) |     0.33 |    0.00
   2: anon /en/basicpage  |            12 |         0 (0%) |     0.19 |    0.00
   3: anon /en/articles/  |            12 |         0 (0%) |     0.19 |    0.00
   4: anon /en/articles/% |            21 |         0 (0%) |     0.33 |    0.00
   5: anon /en/recipes/   |            12 |         0 (0%) |     0.19 |    0.00
   6: anon /en/recipes/%  |            36 |         0 (0%) |     0.57 |    0.00
   7: anon /node/%nid     |            11 |         0 (0%) |     0.17 |    0.00
   8: anon /en term       |            19 |         0 (0%) |     0.30 |    0.00
   9: anon /en/search     |             9 |         0 (0%) |     0.14 |    0.00
   10: anon /en/contact   |             9 |         0 (0%) |     0.14 |    0.00
 2: Anonymous Spanish user
   1: anon /es/           |            22 |         0 (0%) |     0.35 |    0.00
   2: anon /es/basicpage  |            12 |         0 (0%) |     0.19 |    0.00
   3: anon /es/articles/  |            12 |         0 (0%) |     0.19 |    0.00
   4: anon /es/articles/% |            21 |         0 (0%) |     0.33 |    0.00
   5: anon /es/recipes/   |            12 |         0 (0%) |     0.19 |    0.00
   6: anon /es/recipes/%  |            37 |         0 (0%) |     0.59 |    0.00
   7: anon /es term       |            21 |         0 (0%) |     0.33 |    0.00
   8: anon /es/search     |            12 |         0 (0%) |     0.19 |    0.00
   9: anon /es/contact    |            10 |         0 (0%) |     0.16 |    0.00
 3: Admin user           
   1: auth /en/user/login |             1 |         0 (0%) |     0.02 |    0.00
   2: auth /              |             4 |         0 (0%) |     0.06 |    0.00
   3: auth /en/articles/  |             2 |         0 (0%) |     0.03 |    0.00
   4: auth /en/node/%/e.. |             3 |         0 (0%) |     0.05 |    0.00
 -------------------------+---------------+----------------+----------+--------
 Aggregated               |           331 |         0 (0%) |     5.25 |    0.00
 ------------------------------------------------------------------------------
 Name                     |    Avg (ms) |        Min |         Max |     Median
 ------------------------------------------------------------------------------
 1: Anonymous English user
   1: anon /              |      123.48 |         85 |         224 |        110
   2: anon /en/basicpage  |       56.08 |         44 |          75 |         50
   3: anon /en/articles/  |      147.58 |         91 |         214 |        140
   4: anon /en/articles/% |      148.14 |         72 |         257 |        160
   5: anon /en/recipes/   |      170.58 |        109 |         242 |        150
   6: anon /en/recipes/%  |       66.08 |         48 |         131 |         60
   7: anon /node/%nid     |       94.09 |         46 |         186 |         70
   8: anon /en term       |      134.37 |         52 |         194 |        130
   9: anon /en/search     |      282.33 |        190 |         339 |        270
   10: anon /en/contact   |      246.89 |        186 |         346 |        260
 2: Anonymous Spanish user
   1: anon /es/           |      141.36 |         88 |         285 |        130
   2: anon /es/basicpage  |       61.17 |         43 |          92 |         51
   3: anon /es/articles/  |      130.58 |         87 |         187 |        110
   4: anon /es/articles/% |      164.52 |         85 |         263 |        170
   5: anon /es/recipes/   |      161.25 |        108 |         274 |        120
   6: anon /es/recipes/%  |       65.24 |         47 |         107 |         61
   7: anon /es term       |      145.14 |         49 |         199 |        150
   8: anon /es/search     |      276.33 |        206 |         361 |        270
   9: anon /es/contact    |      240.20 |        204 |         297 |        230
 3: Admin user           
   1: auth /en/user/login |      262.00 |        262 |         262 |        262
   2: auth /              |      260.75 |        238 |         287 |        250
   3: auth /en/articles/  |      232.00 |        220 |         244 |        220
   4: auth /en/node/%/e.. |      745.67 |        725 |         771 |        725
 -------------------------+-------------+------------+-------------+-----------
 Aggregated               |      141.73 |         43 |         771 |        120

 === PER REQUEST METRICS ===
 ------------------------------------------------------------------------------
 Name                     |        # reqs |        # fails |    req/s |  fail/s
 ------------------------------------------------------------------------------
 GET anon /               |            21 |         0 (0%) |     0.33 |    0.00
 GET anon /en term        |            19 |         0 (0%) |     0.30 |    0.00
 GET anon /en/articles/   |            12 |         0 (0%) |     0.19 |    0.00
 GET anon /en/articles/%  |            21 |         0 (0%) |     0.33 |    0.00
 GET anon /en/basicpage   |            12 |         0 (0%) |     0.19 |    0.00
 GET anon /en/contact     |             9 |         0 (0%) |     0.14 |    0.00
 GET anon /en/recipes/    |            12 |         0 (0%) |     0.19 |    0.00
 GET anon /en/recipes/%   |            36 |         0 (0%) |     0.57 |    0.00
 GET anon /en/search      |             9 |         0 (0%) |     0.14 |    0.00
 GET anon /es term        |            21 |         0 (0%) |     0.33 |    0.00
 GET anon /es/            |            22 |         0 (0%) |     0.35 |    0.00
 GET anon /es/articles/   |            12 |         0 (0%) |     0.19 |    0.00
 GET anon /es/articles/%  |            21 |         0 (0%) |     0.33 |    0.00
 GET anon /es/basicpage   |            12 |         0 (0%) |     0.19 |    0.00
 GET anon /es/contact     |            10 |         0 (0%) |     0.16 |    0.00
 GET anon /es/recipes/    |            12 |         0 (0%) |     0.19 |    0.00
 GET anon /es/recipes/%   |            37 |         0 (0%) |     0.59 |    0.00
 GET anon /es/search      |            12 |         0 (0%) |     0.19 |    0.00
 GET anon /node/%nid      |            11 |         0 (0%) |     0.17 |    0.00
 GET auth /               |             4 |         0 (0%) |     0.06 |    0.00
 GET auth /en/articles/   |             2 |         0 (0%) |     0.03 |    0.00
 GET auth /en/node/%/edit |             6 |         0 (0%) |     0.10 |    0.00
 GET auth /en/user/login  |             1 |         0 (0%) |     0.02 |    0.00
 GET static asset         |         3,516 |         0 (0%) |    55.81 |    0.00
 POST anon /en/contact    |             9 |         0 (0%) |     0.14 |    0.00
 POST anon /en/search     |             9 |         0 (0%) |     0.14 |    0.00
 POST anon /es/contact    |            10 |         0 (0%) |     0.16 |    0.00
 POST anon /es/search     |            12 |         0 (0%) |     0.19 |    0.00
 POST auth /en/node/%/e.. |             3 |         0 (0%) |     0.05 |    0.00
 POST auth /en/user/login |             1 |         0 (0%) |     0.02 |    0.00
 -------------------------+---------------+----------------+----------+--------
 Aggregated               |         3,894 |         0 (0%) |    61.81 |    0.00
 ------------------------------------------------------------------------------
 Name                     |    Avg (ms) |        Min |         Max |     Median
 ------------------------------------------------------------------------------
 GET anon /               |       38.95 |         14 |         132 |         24
 GET anon /en term        |       95.63 |         22 |         159 |         98
 GET anon /en/articles/   |       61.67 |         16 |         139 |         42
 GET anon /en/articles/%  |       94.86 |         20 |         180 |        100
 GET anon /en/basicpage   |       25.67 |         17 |          40 |         24
 GET anon /en/contact     |       34.67 |         16 |          61 |         30
 GET anon /en/recipes/    |       59.83 |         17 |         130 |         45
 GET anon /en/recipes/%   |       27.86 |         16 |          56 |         22
 GET anon /en/search      |       54.33 |         20 |         101 |         30
 GET anon /es term        |      106.14 |         19 |         159 |        110
 GET anon /es/            |       51.41 |         18 |         179 |         29
 GET anon /es/articles/   |       53.42 |         17 |         110 |         27
 GET anon /es/articles/%  |      105.52 |         20 |         203 |        110
 GET anon /es/basicpage   |       27.25 |         18 |          55 |         22
 GET anon /es/contact     |       27.80 |         17 |          49 |         24
 GET anon /es/recipes/    |       59.08 |         18 |         165 |         26
 GET anon /es/recipes/%   |       28.65 |         16 |          61 |         26
 GET anon /es/search      |       46.42 |         17 |          99 |         25
 GET anon /node/%nid      |       52.73 |         17 |         133 |         38
 GET auth /               |      140.75 |        109 |         169 |        120
 GET auth /en/articles/   |      103.50 |         89 |         118 |         89
 GET auth /en/node/%/edit |      114.83 |         91 |         136 |        120
 GET auth /en/user/login  |       24.00 |         24 |          24 |         24
 GET static asset         |        5.11 |          2 |          38 |          5
 POST anon /en/contact    |      136.67 |         99 |         204 |        140
 POST anon /en/search     |      162.11 |        114 |         209 |        170
 POST anon /es/contact    |      137.70 |        111 |         174 |        130
 POST anon /es/search     |      164.08 |        118 |         235 |        140
 POST auth /en/node/%/e.. |      292.33 |        280 |         304 |        290
 POST auth /en/user/login |      143.00 |        143 |         143 |        143
 -------------------------+-------------+------------+-------------+-----------
 Aggregated               |       11.41 |          2 |         304 |          5
 ------------------------------------------------------------------------------
 Slowest page load within specified percentile of requests (in ms):
 ------------------------------------------------------------------------------
 Name                     |    50% |    75% |    98% |    99% |  99.9% | 99.99%
 ------------------------------------------------------------------------------
 GET anon /               |     24 |     29 |    130 |    130 |    130 |    130
 GET anon /en term        |     98 |    110 |    159 |    159 |    159 |    159
 GET anon /en/articles/   |     42 |     92 |    139 |    139 |    139 |    139
 GET anon /en/articles/%  |    100 |    120 |    180 |    180 |    180 |    180
 GET anon /en/basicpage   |     24 |     30 |     40 |     40 |     40 |     40
 GET anon /en/contact     |     30 |     46 |     61 |     61 |     61 |     61
 GET anon /en/recipes/    |     45 |     88 |    130 |    130 |    130 |    130
 GET anon /en/recipes/%   |     22 |     31 |     55 |     56 |     56 |     56
 GET anon /en/search      |     30 |     89 |    100 |    100 |    100 |    100
 GET anon /es term        |    110 |    130 |    159 |    159 |    159 |    159
 GET anon /es/            |     29 |     57 |    179 |    179 |    179 |    179
 GET anon /es/articles/   |     27 |     96 |    110 |    110 |    110 |    110
 GET anon /es/articles/%  |    110 |    140 |    200 |    200 |    200 |    200
 GET anon /es/basicpage   |     22 |     27 |     55 |     55 |     55 |     55
 GET anon /es/contact     |     24 |     35 |     49 |     49 |     49 |     49
 GET anon /es/recipes/    |     26 |    110 |    165 |    165 |    165 |    165
 GET anon /es/recipes/%   |     26 |     34 |     57 |     61 |     61 |     61
 GET anon /es/search      |     25 |     78 |     99 |     99 |     99 |     99
 GET anon /node/%nid      |     38 |     41 |    130 |    130 |    130 |    130
 GET auth /               |    120 |    160 |    169 |    169 |    169 |    169
 GET auth /en/articles/   |     89 |    118 |    118 |    118 |    118 |    118
 GET auth /en/node/%/edit |    120 |    130 |    136 |    136 |    136 |    136
 GET auth /en/user/login  |     24 |     24 |     24 |     24 |     24 |     24
 GET static asset         |      5 |      6 |     10 |     13 |     29 |     38
 POST anon /en/contact    |    140 |    150 |    200 |    200 |    200 |    200
 POST anon /en/search     |    170 |    180 |    209 |    209 |    209 |    209
 POST anon /es/contact    |    130 |    150 |    170 |    170 |    170 |    170
 POST anon /es/search     |    140 |    180 |    235 |    235 |    235 |    235
 POST auth /en/node/%/e.. |    290 |    290 |    300 |    300 |    300 |    300
 POST auth /en/user/login |    143 |    143 |    143 |    143 |    143 |    143
 -------------------------+--------+--------+--------+--------+--------+-------
 Aggregated               |      5 |      7 |    120 |    140 |    240 |    300
 ------------------------------------------------------------------------------
 Name                     |                                        Status codes 
 ------------------------------------------------------------------------------
 GET anon /               |                                            21 [200]
 GET anon /en term        |                                            19 [200]
 GET anon /en/articles/   |                                            12 [200]
 GET anon /en/articles/%  |                                            21 [200]
 GET anon /en/basicpage   |                                            12 [200]
 GET anon /en/contact     |                                             9 [200]
 GET anon /en/recipes/    |                                            12 [200]
 GET anon /en/recipes/%   |                                            36 [200]
 GET anon /en/search      |                                             9 [200]
 GET anon /es term        |                                            21 [200]
 GET anon /es/            |                                            22 [200]
 GET anon /es/articles/   |                                            12 [200]
 GET anon /es/articles/%  |                                            21 [200]
 GET anon /es/basicpage   |                                            12 [200]
 GET anon /es/contact     |                                            10 [200]
 GET anon /es/recipes/    |                                            12 [200]
 GET anon /es/recipes/%   |                                            37 [200]
 GET anon /es/search      |                                            12 [200]
 GET anon /node/%nid      |                                            11 [200]
 GET auth /               |                                             4 [200]
 GET auth /en/articles/   |                                             2 [200]
 GET auth /en/node/%/edit |                                             6 [200]
 GET auth /en/user/login  |                                             1 [200]
 GET static asset         |                                         3,516 [200]
 POST anon /en/contact    |                                             9 [200]
 POST anon /en/search     |                                             9 [200]
 POST anon /es/contact    |                                            10 [200]
 POST anon /es/search     |                                            12 [200]
 POST auth /en/node/%/e.. |                                             3 [200]
 POST auth /en/user/login |                                             1 [200]
 -------------------------+----------------------------------------------------
 Aggregated               |                                         3,894 [200] 

 === OVERVIEW ===
 ------------------------------------------------------------------------------
 Action       Started               Stopped             Elapsed    Users
 ------------------------------------------------------------------------------
 Increasing:  2022-05-17 07:09:05 - 2022-05-17 07:09:08 (00:00:03, 0 -> 9)
 Maintaining: 2022-05-17 07:09:08 - 2022-05-17 07:10:08 (00:01:00, 9)
 Decreasing:  2022-05-17 07:10:08 - 2022-05-17 07:10:08 (00:00:00, 0 <- 9)

 Target host: http://umami.ddev.site/
 goose v0.18.1
 ------------------------------------------------------------------------------

Metrics reports

In addition to the above metrics displayed on the CLI, we've also told Goose to create reports in other formats, such as Markdown, JSON, and HTML.

It is possible to create one or more reports at the same time by passing one or more --report-file arguments. The type of each report is chosen by the file extension; an unsupported file extension results in an error.
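For example, a single run can write all three report types at once (the host and file names here are illustrative):

```shell
# One load test run writing three reports; the type of each is
# determined by its file extension.
cargo run --release -- --host http://local.dev/ \
    --report-file report.html \
    --report-file report.md \
    --report-file report.json
```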

The following subsections describe the reports in more detail.

HTML report

Overview

The HTML report starts with a brief overview table, offering the same information found in the ASCII overview above: Metrics overview

NOTE: The HTML report includes some graphs that rely on the eCharts JavaScript library. The HTML report loads the library via CDN, which means that the graphs won't be loaded correctly if the CDN is not accessible.

Requests

Next, the report includes a graph of all requests made over the duration of the load test. By default, the graph includes an aggregated average, as well as per-request details. It's possible to click on the request names at the top of the graph to hide/show specific requests on the graphs. In this case, the graph shows that most requests made by the load test were for static assets.

Below the graph is a table that shows per-request details, only partially included in this screenshot: Request metrics

Response times

The next graph shows the response times measured for each request made. In the following graph, it's apparent that POST requests had the slowest responses, which is logical as they are not cached. As before, it's possible to click on the request names at the top of the graph to hide/show details about specific requests.

Below the graph is a table that shows per-request details: Response time metrics

Status codes

All status codes returned by the server are displayed in a table, per-request and in aggregate. In our simple test, we received only 200 OK responses. Status code metrics

Transactions

The next graph summarizes all Transactions run during the load test. One or more requests are grouped logically inside Transactions. For example, the Transaction named 0.0 anon / includes an anonymous (not-logged-in) request for the front page, as well as requests for all static assets found on the front page.

Whereas a Request automatically fails based on the web server response code, the code that defines a Transaction must manually return an error for the Transaction to be considered failed. For example, the logic may be written to fail the Transaction if the html request fails, but not if one or more static asset requests fail.

This graph is also followed by a table showing details on all Transactions, partially shown here: Transaction metrics

Scenarios

The next graph summarizes all Scenarios run during the load test. One or more Transactions are grouped logically inside Scenarios.

For example, the Scenario named Anonymous English user includes the above anon / Transaction, the anon /en/basicpage, and all the rest of the Transactions requesting pages in English.

It is followed by a table, shown in its entirety here because this load test only has 3 Scenarios. The # Users column indicates how many GooseUser threads were assigned to run this Scenario during the load test. The # Times Run column indicates how many times in aggregate all GooseUser threads ran completely through the Scenario. From there you can see how long on average it took a GooseUser thread to run through all Transactions and make all contained Requests to completely run the Scenario, as well as the minimum and maximum amount of time. Finally, Iterations is how many times each assigned GooseUser thread ran through the entire Scenario (Iterations times the # of Users will always equal the total # of times run).

As our example only ran for 60 seconds, and the Admin user Scenario took >30 seconds to run once, the load test only ran completely through this scenario one time, also reflected in the following table: Scenario metrics

Users

The final graph shows how many users were running at the various stages of the load test. As configured, Goose quickly ramped up to 9 users, then sustained that level of traffic for a minute before shutting down: User metrics

Markdown report

The Markdown report follows the structure of the HTML report. However, it does not include the chart elements.

JSON report

The JSON report is a dump of the internal metrics collection: a JSON serialization of the ReportData structure, whose main field, raw_metrics, carries the content of GooseMetrics.

Developer documentation

Additional details about how metrics are collected, stored, and displayed can be found in the developer documentation.

Tips

Best Practices

  • When writing load tests, avoid unwrap() (and variations) in your transaction functions -- Goose generates a lot of load, and this tends to trigger errors. Embrace Rust's warnings and properly handle all possible errors; this will save you time debugging later.
  • When running your load test, use cargo's --release flag to generate optimized code. This can generate considerably more load test traffic. Learn more about this and other optimizations in "The golden Goose egg, a compile-time adventure".
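As a sketch of the first point, prefer propagating errors with ? over unwrap() inside transaction functions (the path requested here is illustrative):

```rust
use goose::prelude::*;

// Under load, transient failures are normal: propagate them with `?` so the
// transaction is recorded as failed, instead of unwrap()ing and crashing the
// GooseUser thread.
async fn load_page(user: &mut GooseUser) -> TransactionResult {
    // Avoid: let goose = user.get("/").await.unwrap();
    let goose = user.get("/").await?;

    // Even a request that was sent can carry a transport error; handle the
    // inner Result rather than unwrap()ing it.
    if let Err(error) = goose.response {
        println!("transient error, continuing: {}", error);
    }
    Ok(())
}
```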

Errors

Timeouts

By default, Goose will time out requests that take longer than 60 seconds to return, and display a WARN level message saying, "operation timed out". For example:

11:52:17 [WARN] "/node/3672": error sending request for url (http://apache/node/3672): operation timed out

These will also show up in the error summary displayed with the final metrics. For example:

 === ERRORS ===
 ------------------------------------------------------------------------------
 Count       | Error
 ------------------------------------------------------------------------------
 51            GET (Auth) comment form: error sending request (Auth) comment form: operation timed out

To change how long before requests time out, use --timeout VALUE when starting a load test, for example --timeout 30 will time out requests that take longer than 30 seconds to return. To configure the timeout programmatically, use .set_default() to set GooseDefault::Timeout.
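A minimal sketch of the programmatic form (scenario registration is elided; passing the value as a string, mirroring the CLI's --timeout VALUE, is an assumption to verify against the GooseDefault documentation):

```rust
use goose::prelude::*;

#[tokio::main]
async fn main() -> Result<(), GooseError> {
    GooseAttack::initialize()?
        // Scenarios would be registered here with .register_scenario(...).
        // Equivalent to passing `--timeout 30` on the command line; the
        // string form of the value is an assumption based on CLI usage.
        .set_default(GooseDefault::Timeout, "30")?
        .execute()
        .await?;

    Ok(())
}
```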

To completely disable timeouts, you must build a custom Reqwest Client with GooseUser::set_client_builder. Alternatively, you can simply set a very high timeout: for example, --timeout 86400 will let a request take up to 24 hours.
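A sketch of the custom-client approach, assuming this function is registered as an on-start transaction so each GooseUser rebuilds its client before making requests (Reqwest's builder applies no timeout unless one is set; check the GooseUser::set_client_builder documentation for the exact signature):

```rust
use goose::prelude::*;

// Rebuild this user's Reqwest client without any timeout.
async fn no_timeout_client(user: &mut GooseUser) -> TransactionResult {
    let builder = reqwest::Client::builder()
        // Goose's default client enables cookies; preserve that here.
        .cookie_store(true);
    user.set_client_builder(builder).await?;
    Ok(())
}
```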

Debugging HTML Responses

Sometimes, while developing and debugging a load test, we'd like to view HTML responses in a browser to see where each request is actually taking us. We may want to run this test with one user to avoid debug noise.

We can create a debug log by passing the --debug-log NAME command line option.

Each row in the debug log defaults to a JSON object, which we can process with jq, or with jaq, a faster Rust port that supports the same commands.

To extract the HTML response from the first log entry, for example, you could use the following commands:

cat debug.log | head -n 1 | jaq -r .body > page.html

This HTML page can then be viewed in a web browser. You may need to disable JavaScript.

Killswitch

Goose provides a killswitch mechanism to programmatically stop a load test when specific conditions are met. This is useful for protecting your systems and ensuring tests stop automatically when problems are detected.

Common Use Cases

  • Error Rate Threshold: Stop when error rate exceeds acceptable limits
  • Response Time SLA Monitoring: Halt testing when latency violates requirements
  • Health Check Integration: Monitor system health endpoints and stop on failure
  • Resource Exhaustion Detection: Stop when detecting connection pool or memory issues
  • Sitemap Traversal Completion: Stop after fully crawling a site's pages
  • Data Set Processing: Stop when all test data has been consumed
  • External Signal Integration: Stop based on monitoring system alerts

Example: Service Unavailable Detection

use goose::prelude::*;

async fn check_availability(user: &mut GooseUser) -> TransactionResult {
    let response = user.get("/api/endpoint").await?;

    // Stop the test if the server returns 503 Service Unavailable
    if let Ok(response) = response.response {
        if response.status() == 503 {
            goose::trigger_killswitch("Server returned 503: Service Unavailable");
        }
    }

    Ok(())
}

You can also check if the killswitch has been triggered using goose::is_killswitch_triggered() to conditionally execute cleanup code or skip certain operations.
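For example, a transaction might skip its work once the killswitch has fired (the endpoint here is hypothetical):

```rust
use goose::prelude::*;

async fn expensive_operation(user: &mut GooseUser) -> TransactionResult {
    // Once another transaction has triggered the killswitch, skip further
    // work instead of piling more requests onto a failing system.
    if goose::is_killswitch_triggered() {
        return Ok(());
    }
    let _goose = user.get("/api/expensive").await?;
    Ok(())
}
```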

Logging

With logging, it's possible to record all Goose activity. This can be useful for debugging errors, for validating the load test, and for creating graphs.

When logging is enabled, a central logging thread maintains a buffer to minimize the IO overhead, and controls the writing to ensure that multiple threads don't corrupt each other's messages. All log messages are sent through a channel to the logging thread and written asynchronously, minimizing the impact on the load test.

Request Log

Goose can optionally log details about all the requests made during the load test to a file. This log file contains the running metrics Goose generates as the load test runs. To enable, add the --request-log <request.log> command line option, where <request.log> is either a relative or absolute path of the log file to create. Any existing file will be overwritten.

If --request-body is also enabled, the request log will include the entire body of any client requests.
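For example (the host and file name are illustrative):

```shell
# Write one JSON line per request to request.log, including request bodies.
cargo run --release -- --host http://local.dev/ \
    --request-log request.log \
    --request-body
```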

Logs include the entire GooseRequestMetric object, which in turn includes the entire GooseRawRequest object; both are created for all client requests.

Log Format

By default, logs are written in JSON Lines format. For example (in this case with --request-body also enabled):

{"coordinated_omission_elapsed":0,"elapsed":13219,"error":"","final_url":"http://apache/misc/jquery-extend-3.4.0.js?v=1.4.4","name":"static asset","raw":{"body":"","headers":[],"method":"Get","url":"http://apache/misc/jquery-extend-3.4.0.js?v=1.4.4"},"redirected":false,"response_time":7,"status_code":200,"success":true,"update":false,"user":4,"user_cadence":0}
{"coordinated_omission_elapsed":0,"elapsed":13055,"error":"","final_url":"http://apache/node/1786#comment-114852","name":"(Auth) comment form","raw":{"body":"subject=this+is+a+test+comment+subject&comment_body%5Bund%5D%5B0%5D%5Bvalue%5D=this+is+a+test+comment+body&comment_body%5Bund%5D%5B0%5D%5Bformat%5D=filtered_html&form_build_id=form-U0L3wm2SsIKAhVhaHpxeL1TLUHW64DXKifmQeZsUsss&form_token=VKDel_jiYzjqPrekL1FrP2_4EqHTlsaqLjMUJ6pn-sE&form_id=comment_node_article_form&op=Save","headers":["(\"content-type\", \"application/x-www-form-urlencoded\")"],"method":"Post","url":"http://apache/comment/reply/1786"},"redirected":true,"response_time":172,"status_code":200,"success":true,"update":false,"user":1,"user_cadence":0}
{"coordinated_omission_elapsed":0,"elapsed":13219,"error":"","final_url":"http://apache/misc/drupal.js?q9apdy","name":"static asset","raw":{"body":"","headers":[],"method":"Get","url":"http://apache/misc/drupal.js?q9apdy"},"redirected":false,"response_time":7,"status_code":200,"success":true,"update":false,"user":0,"user_cadence":0}

The --request-format option can be used to log in csv, json (default), raw or pretty format. The raw format is Rust's debug output of the entire GooseRequestMetric object.
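Because each line is a standalone JSON object, the log is easy to summarize with jq (shown here against a few hand-written sample lines; a real log comes from --request-log):

```shell
# Count requests per status code from a JSON Lines request log.
printf '%s\n' \
  '{"name":"static asset","status_code":200,"success":true}' \
  '{"name":"(Anon) front page","status_code":503,"success":false}' \
  '{"name":"static asset","status_code":200,"success":true}' \
  > request.log
# Prints one count per status code, most frequent first.
jq -r '.status_code' request.log | sort | uniq -c | sort -rn
```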

Gaggle Mode

When operating in Gaggle-mode, the --request-log option can only be enabled on the Worker processes; this spreads the overhead of writing logs across the Workers.

Transaction Log

Goose can optionally log details about each time a transaction is run during a load test. To enable, add the --transaction-log <transaction.log> command line option, where <transaction.log> is either a relative or absolute path of the log file to create. Any existing file will be overwritten.

Logs include the entire TransactionMetric object which is created each time any transaction is run.

Log Format

By default, logs are written in JSON Lines format. For example:

{"elapsed":22060,"name":"(Anon) front page","run_time":97,"success":true,"transaction_index":0,"scenario_index":0,"user":0}
{"elapsed":22118,"name":"(Anon) node page","run_time":41,"success":true,"transaction_index":1,"scenario_index":0,"user":5}
{"elapsed":22157,"name":"(Anon) node page","run_time":6,"success":true,"transaction_index":1,"scenario_index":0,"user":0}
{"elapsed":22078,"name":"(Auth) front page","run_time":109,"success":true,"transaction_index":1,"scenario_index":1,"user":6}
{"elapsed":22157,"name":"(Anon) user page","run_time":35,"success":true,"transaction_index":2,"scenario_index":0,"user":4}

In the first line of the above example, GooseUser thread 0 successfully ran the (Anon) front page transaction in 97 milliseconds. In the second line, GooseUser thread 5 successfully ran the (Anon) node page transaction in 41 milliseconds.

The --transaction-format option can be used to log in csv, json (default), raw or pretty format. The raw format is Rust's debug output of the entire TransactionMetric object.

For example, csv output of transactions similar to those logged above would look like:

elapsed,scenario_index,transaction_index,name,run_time,success,user
21936,0,0,"(Anon) front page",83,true,0
21990,1,3,"(Auth) user page",34,true,1
21954,0,0,"(Anon) front page",84,true,5
22009,0,1,"(Anon) node page",34,true,2
21952,0,0,"(Anon) front page",95,true,7

Gaggle Mode

When operating in Gaggle-mode, the --transaction-log option can only be enabled on the Worker processes; this spreads the overhead of writing logs across the Workers.

Scenario Log

Goose can optionally log details about each time a scenario is run during a load test. To enable, add the --scenario-log <scenario.log> command line option, where <scenario.log> is either a relative or absolute path of the log file to create. Any existing file will be overwritten.

Logs include the entire ScenarioMetric object which is created each time any scenario is run.

Log Format

By default, logs are written in JSON Lines format. For example:

{"elapsed":15751,"index":0,"name":"AnonBrowsingUser","run_time":1287,"user":7}
{"elapsed":15756,"index":0,"name":"AnonBrowsingUser","run_time":1308,"user":4}
{"elapsed":15760,"index":0,"name":"AnonBrowsingUser","run_time":1286,"user":9}
{"elapsed":15783,"index":0,"name":"AnonBrowsingUser","run_time":1301,"user":0}
{"elapsed":22802,"index":1,"name":"AuthBrowsingUser","run_time":13056,"user":8}

In the first line of the above example, GooseUser thread 7 ran the complete AnonBrowsingUser scenario in 1,287 milliseconds. In the fifth line, GooseUser thread 8 successfully ran the complete AuthBrowsingUser scenario in 13,056 milliseconds.

The --scenario-format option can be used to log in csv, json (default), raw or pretty format. The raw format is Rust's debug output of the entire ScenarioMetric object.

For example, csv output of scenarios similar to those logged above would look like:

elapsed,name,index,run_time,user
15751,AnonBrowsingUser,0,1287,7
15756,AnonBrowsingUser,0,1308,4
15760,AnonBrowsingUser,0,1286,9
15783,AnonBrowsingUser,0,1301,0
22802,AuthBrowsingUser,1,13056,8

Gaggle Mode

When operating in Gaggle-mode, the --scenario-log option can only be enabled on the Worker processes; this spreads the overhead of writing logs across the Workers.

Error Log

Goose can optionally log details about all load test errors to a file. To enable, add the --error-log=<error.log> command line option, where <error.log> is either a relative or absolute path of the log file to create. Any existing file will be overwritten.

Logs include the entire GooseErrorMetric object, created any time a request results in an error.

Log Format

By default, logs are written in JSON Lines format. For example:

{"elapsed":9318,"error":"503 Service Unavailable: /","final_url":"http://apache/","name":"(Auth) front page","raw":{"body":"","headers":[],"method":"Get","url":"http://apache/"},"redirected":false,"response_time":6,"status_code":503,"user":1}
{"elapsed":9318,"error":"503 Service Unavailable: /node/8211","final_url":"http://apache/node/8211","name":"(Anon) node page","raw":{"body":"","headers":[],"method":"Get","url":"http://apache/node/8211"},"redirected":false,"response_time":6,"status_code":503,"user":3}

The --error-format option can be used to change the log format to csv, json (default), raw or pretty format. The raw format is Rust's debug output of the entire GooseErrorMetric object.

Gaggle Mode

When operating in Gaggle-mode, the --error-log option can only be enabled on the Worker processes; this spreads the overhead of writing logs across the Workers.

Debug Log

Goose can optionally and efficiently log arbitrary details, as well as specifics about requests and responses, for debugging purposes.

To enable, add the --debug-log <debug.log> command line option, where <debug.log> is either a relative or absolute path of the log file to create. Any existing file will be overwritten.

If --debug-log <foo> is not specified at run time, nothing will be logged and there is no measurable overhead in your load test.

To write to the debug log, you must invoke log_debug from your load test transaction functions. The tag parameter allows you to record any arbitrary string: it can identify where in the load test the log was generated, why the debug entry is being written, or other details such as the contents of a form the load test posts. Other parameters that can be included in the debug log are the complete Request that was made, as well as the Headers and Body of the Response.

(Known limitations in Reqwest prevent all headers from being recorded: https://github.com/tag1consulting/goose/issues/336)

See examples/drupal_loadtest for an example of how you might invoke log_debug from a load test.

Request Failures

Calls to set_failure can be used to tell Goose that a request failed even though the server returned a successful status code, and will automatically invoke log_debug for you. See examples/drupal_loadtest and examples/umami for an example of how you might use set_failure to generate useful debug logs.
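As a sketch (the validation logic and tag strings here are illustrative; see the linked examples for the real thing), a transaction can validate the response body and flag the request as failed even when the server returned a 200:

```rust
use goose::prelude::*;

async fn validate_front_page(user: &mut GooseUser) -> TransactionResult {
    let mut goose = user.get("/").await?;

    match goose.response {
        Ok(response) => {
            let html = response.text().await.unwrap_or_default();
            // The server returned a success code, but confirm the page
            // actually rendered the content we expect.
            if !html.contains("<title>") {
                // set_failure records the failure and logs debug details.
                return user.set_failure(
                    "front page missing <title>",
                    &mut goose.request,
                    None,
                    Some(&html),
                );
            }
        }
        Err(error) => {
            return user.set_failure(
                &format!("front page failed: {}", error),
                &mut goose.request,
                None,
                None,
            );
        }
    }
    Ok(())
}
```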

Log Format

By default, logs are written in JSON Lines format. For example:

{"body":"<!DOCTYPE html>\n<html>\n  <head>\n    <title>503 Backend fetch failed</title>\n  </head>\n  <body>\n    <h1>Error 503 Backend fetch failed</h1>\n    <p>Backend fetch failed</p>\n    <h3>Guru Meditation:</h3>\n    <p>XID: 1506620</p>\n    <hr>\n    <p>Varnish cache server</p>\n  </body>\n</html>\n","header":"{\"date\": \"Mon, 19 Jul 2021 09:21:58 GMT\", \"server\": \"Varnish\", \"content-type\": \"text/html; charset=utf-8\", \"retry-after\": \"5\", \"x-varnish\": \"1506619\", \"age\": \"0\", \"via\": \"1.1 varnish (Varnish/6.1)\", \"x-varnish-cache\": \"MISS\", \"x-varnish-cookie\": \"SESSd7e04cba6a8ba148c966860632ef3636=Z50aRHuIzSE5a54pOi-dK_wbxYMhsMwrG0s2WM2TS20\", \"content-length\": \"284\", \"connection\": \"keep-alive\"}","request":{"coordinated_omission_elapsed":0,"elapsed":9162,"error":"503 Service Unavailable: /node/1439","final_url":"http://apache/node/1439","name":"(Auth) comment form","raw":{"body":"","headers":[],"method":"Get","url":"http://apache/node/1439"},"redirected":false,"response_time":5,"status_code":503,"success":false,"update":false,"user":1,"user_cadence":0},"tag":"post_comment: no form_build_id found on node/1439"}

The --debug-format option can be used to log in csv, json (default), raw or pretty format. The raw format is Rust's debug output of the entire GooseDebug object.

Gaggle Mode

When operating in Gaggle-mode, the --debug-log option can only be enabled on the Worker processes; this spreads the overhead of writing logs across the Workers.

Controlling A Running Goose Load Test

By default, Goose will launch a telnet Controller thread that listens on 0.0.0.0:5116, and a WebSocket Controller thread that listens on 0.0.0.0:5117. The running Goose load test can be controlled through these Controllers. Goose can optionally be started with the --no-autostart run time option to prevent the load test from automatically starting, requiring instead that it be started with a Controller command. When Goose is started this way, a host is not required and can instead be configured via the Controller.

NOTE: The controller currently is not Gaggle-aware, and only functions correctly when running Goose as a single process in standalone mode.

Telnet Controller

The host and port that the telnet Controller listens on can be configured at start time with --telnet-host and --telnet-port. The telnet Controller can be completely disabled with the --no-telnet command line option. The defaults can be changed with GooseDefault::TelnetHost, GooseDefault::TelnetPort, and GooseDefault::NoTelnet.

Controller Commands

To learn about all available commands, telnet into the Controller thread and enter help (or ?). For example:

% telnet localhost 5116
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
goose> ?
goose 0.17.2 controller commands:
help               this help
exit               exit controller

start              start an idle load test
stop               stop a running load test and return to idle state
shutdown           shutdown load test and exit controller

host HOST          set host to load test, (ie https://web.site/)
hatchrate FLOAT    set per-second rate users hatch
startup-time TIME  set total time to take starting users
users INT          set number of simulated users
runtime TIME       set how long to run test, (ie 1h30m5s)
test-plan PLAN     define or replace test-plan, (ie 10,5m;10,1h;0,30s)

config             display load test configuration
config-json        display load test configuration in json format
metrics            display metrics for current load test
metrics-json       display metrics for current load test in json format
goose> q
goodbye!
goose> Connection closed by foreign host.

Example

One possible use-case for the controller is to dynamically reconfigure the number of users being simulated by the load test. In the following example, the load test was launched with the following parameters:

% cargo run --release --example umami -- --no-autostart --host https://umami.ddev.site/ --hatch-rate 50 --report-file report.html

Then the telnet controller is invoked as follows:

% telnet loadtest 5116
Trying loadtest...
Connected to loadtest.
Escape character is '^]'.
goose> start 
load test started
goose> users 20
users configured
goose> users 40
users configured
goose> users 80
users configured
goose> users 40 
users configured
goose> users 20
users configured
goose> users 160
users configured
goose> users 20
users configured
goose> hatch_rate 5
hatch_rate configured
goose> users 80
users configured
goose> users 20
users configured
goose> shutdown
load test shut down
goose> Connection closed by foreign host.

Initially the load test is configured with a hatch rate of 50, so Goose increases and decreases the user count by 50 user threads per second. Later we reconfigure the hatch rate to 5, slowing the rate at which Goose alters the number of user threads. The result is more clearly illustrated in the following graph generated at the end of the above example load test:

Controller dynamic users and hatch rate

The above commands are also summarized in the metrics overview:

 === OVERVIEW ===
 ------------------------------------------------------------------------------
 Action       Started               Stopped             Elapsed    Users
 ------------------------------------------------------------------------------
 Increasing:  2022-05-05 07:09:34 - 2022-05-05 07:09:34 (00:00:00, 0 -> 10)
 Maintaining: 2022-05-05 07:09:34 - 2022-05-05 07:09:40 (00:00:06, 10)
 Increasing:  2022-05-05 07:09:40 - 2022-05-05 07:09:40 (00:00:00, 10 -> 20)
 Maintaining: 2022-05-05 07:09:40 - 2022-05-05 07:09:46 (00:00:06, 20)
 Increasing:  2022-05-05 07:09:46 - 2022-05-05 07:09:47 (00:00:01, 20 -> 40)
 Maintaining: 2022-05-05 07:09:47 - 2022-05-05 07:09:50 (00:00:03, 40)
 Increasing:  2022-05-05 07:09:50 - 2022-05-05 07:09:51 (00:00:01, 40 -> 80)
 Maintaining: 2022-05-05 07:09:51 - 2022-05-05 07:09:59 (00:00:08, 80)
 Decreasing:  2022-05-05 07:09:59 - 2022-05-05 07:10:00 (00:00:01, 40 <- 80)
 Maintaining: 2022-05-05 07:10:00 - 2022-05-05 07:10:05 (00:00:05, 40)
 Decreasing:  2022-05-05 07:10:05 - 2022-05-05 07:10:06 (00:00:01, 20 <- 40)
 Maintaining: 2022-05-05 07:10:06 - 2022-05-05 07:10:12 (00:00:06, 20)
 Increasing:  2022-05-05 07:10:12 - 2022-05-05 07:10:15 (00:00:03, 20 -> 160)
 Maintaining: 2022-05-05 07:10:15 - 2022-05-05 07:10:19 (00:00:04, 160)
 Decreasing:  2022-05-05 07:10:19 - 2022-05-05 07:10:22 (00:00:03, 20 <- 160)
 Maintaining: 2022-05-05 07:10:22 - 2022-05-05 07:10:35 (00:00:13, 20)
 Increasing:  2022-05-05 07:10:35 - 2022-05-05 07:10:50 (00:00:15, 20 -> 80)
 Maintaining: 2022-05-05 07:10:50 - 2022-05-05 07:10:54 (00:00:04, 80)
 Decreasing:  2022-05-05 07:10:54 - 2022-05-05 07:11:07 (00:00:13, 20 <- 80)
 Maintaining: 2022-05-05 07:11:07 - 2022-05-05 07:11:13 (00:00:06, 20)
 Canceling:   2022-05-05 07:11:13 - 2022-05-05 07:11:13 (00:00:00, 0 <- 20)

 Target host: https://umami.ddev.site/
 goose v0.18.1
 ------------------------------------------------------------------------------

WebSocket Controller

The host and port that the WebSocket Controller listens on can be configured at start time with --websocket-host and --websocket-port. The WebSocket Controller can be completely disabled with the --no-websocket command line option. The defaults can be changed with GooseDefault::WebSocketHost, GooseDefault::WebSocketPort, and GooseDefault::NoWebSocket.

Details

The WebSocket Controller supports the same commands as the telnet Controller. Requests and responses are in JSON format.

Requests must be made in the following format:

{"request":String}

For example, a client should send the following JSON to request the current load test metrics:

{"request":"metrics"}

Responses will always be in the following format:

{"response":String,"success":Boolean}

For example:

% websocat ws://127.0.0.1:5117
foo
{"response":"invalid json, see Goose book https://book.goose.rs/controller/websocket.html","success":false}
{"request":"foo"}
{"response":"unrecognized command, see Goose book https://book.goose.rs/controller/websocket.html","success":false}
{"request":"config"}
{"response":"{\"help\":false,\"version\":false,\"list\":false,\"host\":\"\",\"users\":10,\"hatch_rate\":null,\"startup_time\":\"0\",\"run_time\":\"0\",\"goose_log\":\"\",\"log_level\":0,\"quiet\":0,\"verbose\":0,\"running_metrics\":null,\"no_reset_metrics\":false,\"no_metrics\":false,\"no_transaction_metrics\":false,\"no_print_metrics\":false,\"no_error_summary\":false,\"report_file\":\"\",\"no_granular_report\":false,\"request_log\":\"\",\"request_format\":\"Json\",\"request_body\":false,\"transaction_log\":\"\",\"transaction_format\":\"Json\",\"error_log\":\"\",\"error_format\":\"Json\",\"debug_log\":\"\",\"debug_format\":\"Json\",\"no_debug_body\":false,\"no_status_codes\":false,\"test_plan\":null,\"no_telnet\":false,\"telnet_host\":\"0.0.0.0\",\"telnet_port\":5116,\"no_websocket\":false,\"websocket_host\":\"0.0.0.0\",\"websocket_port\":5117,\"no_autostart\":true,\"no_gzip\":false,\"timeout\":null,\"co_mitigation\":\"Disabled\",\"throttle_requests\":0,\"sticky_follow\":false,\"manager\":false,\"expect_workers\":null,\"no_hash_check\":false,\"manager_bind_host\":\"\",\"manager_bind_port\":0,\"worker\":false,\"manager_host\":\"\",\"manager_port\":0}","success":true}
{"request":"stop"}
{"response":"load test not running, failed to stop","success":false}
{"request":"shutdown"}
{"response":"load test shut down","success":true}

Gaggle: Distributed Load Test

NOTE: Gaggle support was temporarily removed as of Goose 0.17.0 (see https://github.com/tag1consulting/goose/pull/529). Use Goose 0.16.4 if you need the functionality described in this section.

Goose also supports distributed load testing. A Gaggle is one Goose process running in Manager mode, and one or more Goose processes running in Worker mode. The Manager coordinates starting and stopping the Workers, and collects aggregated metrics. Gaggle support is a cargo feature that must be enabled at compile time. To launch a Gaggle, you must copy your load test application to all servers from which you wish to generate load.

It is strongly recommended that the same load test application be copied to all servers involved in a Gaggle. By default, Goose will verify that the load test is identical by comparing a hash of all load test rules. Telling it to skip this check can cause the load test to panic (for example, if a Worker defines a different number of transactions or scenarios than the Manager).

Load Testing At Scale

In experiments running Goose load tests from AWS, Goose proved to make fantastic use of all available system resources, to the point that it is generally limited only by network speed. A smaller server instance was able to simulate 2,000 users generating over 6,500 requests per second and saturating a 2.6 Gbps uplink. As more uplink speed was added, Goose scaled linearly: by distributing the test across two servers with faster uplinks, it comfortably simulated 12,000 active users generating over 41,000 requests per second and saturating 16 Gbps.

Generating this much traffic is not in and of itself difficult, but with Goose each request is fully analyzed and validated. Goose not only confirms the response code of each response the server returns, but also inspects the returned HTML to confirm it contains all expected elements. Links to static elements such as images and CSS are extracted from each response and also loaded, with each simulated user behaving similarly to a real user. Goose excels at providing consistent and repeatable load testing.

For full details and graphs, refer to the blog A Goose In The Clouds: Load Testing At Scale.

Gaggle Manager

NOTE: Gaggle support was temporarily removed as of Goose 0.17.0 (see https://github.com/tag1consulting/goose/pull/529). Use Goose 0.16.4 if you need the functionality described in this section.

To launch a Gaggle, you first must start a Goose application in Manager mode. All configuration happens in the Manager. To start, add the --manager flag and --expect-workers option, the latter necessary to tell the Manager process how many Worker processes it will be coordinating.

Example

Configure a Goose Manager to listen on all interfaces on the default port (0.0.0.0:5115), waiting for 2 Goose Worker processes.

cargo run --features gaggle --example simple -- --manager --expect-workers 2 --host http://local.dev/

Gaggle Worker

At this time, a Goose process can be either a Manager or a Worker, not both. Therefore, it usually makes sense to launch your first Worker on the same server that the Manager is running on. If not otherwise configured, a Goose Worker will try to connect to the Manager on localhost.

Examples

Starting a Worker that connects to a Manager running on the same server:

cargo run --features gaggle --example simple -- --worker -v

In our earlier example, we expected 2 Workers. The second Goose process should be started on a different server. This will require telling it the host where the Goose Manager process is running. For example:

cargo run --features gaggle --example simple -- --worker --manager-host 192.168.1.55 -v

Once all expected Workers are running, the distributed load test will automatically start. We set the -v flag so Goose provides verbose output indicating what is happening. In our example, the load test will run until it is canceled. You can cancel the Manager or either of the Worker processes, and the test will stop on all servers.

Run-time Flags

NOTE: Gaggle support was temporarily removed as of Goose 0.17.0 (see https://github.com/tag1consulting/goose/pull/529). Use Goose 0.16.4 if you need the functionality described in this section.

  • --manager: starts a Goose process in Manager mode. There currently can only be one Manager per Gaggle.
  • --worker: starts a Goose process in Worker mode. How many Workers are in a given Gaggle is defined by the --expect-workers option, documented below.
  • --no-hash-check: tells Goose to ignore if the load test application doesn't match between Worker(s) and the Manager. This is not recommended, and can cause the application to panic.

The --no-metrics, --no-reset-metrics, --no-status-codes, and --no-hash-check flags must be set on the Manager. Workers inherit these flags from the Manager.

Run-time Options

  • --manager-bind-host <manager-bind-host>: configures the host that the Manager listens on. By default Goose will listen on all interfaces, or 0.0.0.0.
  • --manager-bind-port <manager-bind-port>: configures the port that the Manager listens on. By default Goose will listen on port 5115.
  • --manager-host <manager-host>: configures the host that the Worker will talk to the Manager on. By default, a Goose Worker will connect to the localhost, or 127.0.0.1. In a distributed load test, this must be set to the IP of the Goose Manager.
  • --manager-port <manager-port>: configures the port that a Worker will talk to the Manager on. By default, a Goose Worker will connect to port 5115.

The --users, --startup-time, --hatch-rate, --host, and --run-time options must be set on the Manager. Workers inherit these options from the Manager.

The --throttle-requests option must be configured on each Worker, and can be set to a different value on each Worker if desired.

Gaggle Technical Details

NOTE: Gaggle support was temporarily removed as of Goose 0.17.0 (see https://github.com/tag1consulting/goose/pull/529). Use Goose 0.16.4 if you need the functionality described in this section.

Goose uses nng to send network messages between the Manager and all Workers. Serde and Serde CBOR are used to serialize messages into Concise Binary Object Representation.

Workers initiate all network connections, and push metrics to the Manager process.

Compile-time Feature

Gaggle support is a compile-time Cargo feature that must be enabled. Goose uses the nng library to manage network connections, and compiling nng requires that cmake be available.

The gaggle feature can be enabled from the command line by adding --features gaggle to your cargo command.

When writing load test applications, you can default to compiling in the Gaggle feature in the dependencies section of your Cargo.toml, for example:

[dependencies]
goose = { version = "^0.16", features = ["gaggle"] }

What is Coordinated Omission?

Coordinated Omission is a measurement problem that accidentally hides how many people are actually affected by server slowdowns during load testing.

The Race Timer Problem

Imagine you're timing runners in a race:

  • You're supposed to time a runner every 10 seconds
  • But sometimes a runner gets stuck in mud for 30 seconds
  • Your timer waits for that stuck runner before timing the next one
  • So instead of timing 6 runners per minute, you only time 2

Result: Your data says "average time: 15 seconds" when reality is most runners take 10 seconds, but some get stuck for 30+ seconds!

The issue: You're only measuring the runners who actually finish, missing all the runners who would have started during the delay.

How This Affects Load Testing

The Problem: Missing the Full Impact

Timeline of Goose User Thread:
Time 0s:  Send request ✅ (1 second response)
Time 1s:  Send request ✅ (1 second response)  
Time 2s:  Send request ✅ (1 second response)
Time 3s:  Send request... ⏳ (server freezes!)
Time 33s: Finally get response ❌ (30 seconds late!)
Time 34s: Send request ✅ (1 second response)

What traditional load testing records:

  • 4 requests total
  • Average response time: 8.25 seconds
  • "Looks like mostly good performance"

What really happened: During that 30-second freeze, 30 more requests should have been made but couldn't because the thread was stuck waiting. So instead of 4 requests, there should have been 34 requests affected by the server problem.

Why This Matters

When your server freezes for 30 seconds, EVERY user trying to access it during those 30 seconds is affected. Traditional load testing makes it look like only 1 user had problems, when really 30+ users experienced the issue.

This leads to dangerously optimistic reports:

  • ❌ "99% of requests were fast" (hiding the freeze)
  • ✅ "Server had a 30-second outage affecting 87% of traffic" (reality)

How Goose Fixes This

1. Detects Missing Requests

When Goose sees an abnormally long 30-second response, it recognizes: "During those 30 seconds, I should have made 30 requests but couldn't."

2. Synthetic Request Injection

Goose adds "synthetic requests" to represent the requests that would have been made if the server hadn't frozen, giving you a complete picture of impact.

3. Clear Reporting

=== COORDINATED OMISSION METRICS ===
Total CO Events: 1
Actual requests: 4  
Synthetic requests: 29 (87.9%)
Severity: 1 Critical event detected

This tells you: "87.9% of your expected traffic was affected by server problems!"
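The back-filling arithmetic can be sketched in a few lines of Rust. This is a simplified model of the idea, not Goose's actual implementation: one synthetic request per missed cadence slot, with response times counting down from the stall toward the expected cadence.

```rust
// Simplified model of CO back-filling: one synthetic request per
// missed cadence slot, with response times counting down from the
// stall toward the expected cadence.
fn synthetic_requests(stall_secs: f64, cadence_secs: f64) -> Vec<f64> {
    let mut synthetic = Vec::new();
    let mut t = stall_secs - cadence_secs;
    while t >= cadence_secs {
        synthetic.push(t);
        t -= cadence_secs;
    }
    synthetic
}

fn main() {
    // The four requests actually recorded (seconds): three fast, one stalled.
    let recorded = [1.0_f64, 1.0, 1.0, 30.0];
    let avg = recorded.iter().sum::<f64>() / recorded.len() as f64;
    println!("recorded average: {avg:.2}s over {} requests", recorded.len());

    // Back-fill the 30-second stall at a 1-second cadence.
    let synthetic = synthetic_requests(30.0, 1.0);
    println!(
        "{} synthetic requests, {} total",
        synthetic.len(),
        recorded.len() + synthetic.len()
    );
}
```

With a 1-second cadence, a 30-second stall back-fills 29 synthetic requests, matching the report above.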

Visual Comparison

Before (Traditional Load Testing):

Timeline: |--1s--|--1s--|--------30s--------|--1s--|
Requests:    ✅     ✅          ❌           ✅
Result: "4 requests, looks mostly fine" ❌ MISLEADING

After (Goose with CO Mitigation):

Timeline: |--1s--|--1s--|--------30s--------|--1s--|  
Real:        ✅     ✅          ❌           ✅
Synthetic:              ❌❌❌❌❌...❌❌❌    (29 more)
Result: "34 total requests, 30-second freeze affected 87.9% of traffic" ✅ ACCURATE

The Key Insight

Server problems don't just affect one request, they affect ALL the requests that should have happened during the problem period. Goose now captures this reality, giving you honest data about your system's behavior under load.

Mitigation Strategies

Overview

Goose provides comprehensive protection against coordinated omission through its metrics collection architecture. By recording all request timings and maintaining detailed percentile distributions, Goose ensures that slow responses are properly represented in your load test results.

Built-in Protection

Complete Timing Capture

Goose's fundamental design prevents coordinated omission by:

  1. Recording Every Request: All request start and end times are captured, regardless of duration
  2. No Sampling: Unlike some tools, Goose doesn't sample metrics - every data point is recorded
  3. Async Architecture: Non-blocking request handling ensures slow responses don't prevent new requests
#![allow(unused)]
fn main() {
// Example: How Goose captures all timings
async fn user_function(user: &mut GooseUser) -> TransactionResult {
    // Start time is automatically recorded
    let _goose = user.get("/slow-endpoint").await?;
    // End time is recorded regardless of response duration
    // Even if this takes 30 seconds, it's properly tracked
    Ok(())
}
}

Accurate Percentile Calculation

Goose uses the hdrhistogram crate to maintain high-resolution timing distributions:

  • Microsecond precision: Timings are recorded with microsecond accuracy
  • Dynamic range: Handles response times from microseconds to minutes
  • Memory efficient: Compressed histogram format maintains accuracy without excessive memory use
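A stdlib-only sketch (Goose itself uses hdrhistogram, not this naive approach) shows why keeping every data point matters: a single 30-second outlier among a thousand fast responses vanishes from the median but survives at the tail.

```rust
// Naive nearest-rank percentile over a sorted slice of microsecond timings.
fn percentile(sorted: &[u64], q: f64) -> u64 {
    let idx = ((sorted.len() - 1) as f64 * q).round() as usize;
    sorted[idx]
}

fn main() {
    // 999 fast responses (45ms each) plus one 30-second outlier, in microseconds.
    let mut timings: Vec<u64> = vec![45_000; 999];
    timings.push(30_000_000);
    timings.sort();

    // Because no samples are dropped, the outlier survives at the maximum
    // even though the median looks perfectly healthy.
    println!("p50: {}us", percentile(&timings, 0.50));
    println!("max: {}us", timings[timings.len() - 1]);
}
```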

Configuration Options

Request Timeout Settings

Configure appropriate timeouts to ensure all responses are captured:

# Set a 60-second request timeout (the default)
cargo run --release -- --timeout 60

# For extremely slow endpoints, increase further
cargo run --release -- --timeout 300

Coordinated Omission Mitigation Mode

Enable explicit coordinated omission mitigation for traditional closed-loop testing:

# Enable CO mitigation using the average cadence as the baseline
cargo run --release -- --co-mitigation average

# Or use the minimum cadence for stricter detection
cargo run --release -- --co-mitigation minimum

When enabled, this mode:

  • Tracks expected vs actual request intervals
  • Adjusts metrics to account for delayed requests
  • Provides warnings when significant delays are detected

Understanding Your Results

Goose provides two sets of metrics:

  • Raw Metrics: Actual measurements from completed requests
  • CO-Adjusted Metrics: Include synthetic data points for requests that should have been made

Significant differences between these metrics indicate CO events occurred during your test.

Choosing Your Mitigation Strategy

Goose offers three CO mitigation modes via the --co-mitigation flag:

 Mode              | Use Case                              | Behavior
 ------------------+---------------------------------------+----------------------------------------
 disabled          | Custom analysis, external CO handling | No adjustment, raw data only
 average (default) | General performance testing           | Uses average response time as baseline
 minimum           | Strict SLA compliance, microservices  | Uses minimum response time as baseline

When to Use Each Mode

Use minimum when:

  • Testing microservices with strict timing requirements
  • Validating SLA compliance
  • You need to detect ANY performance degradation
  • Testing in controlled environments

Use average when:

  • Simulating realistic user behavior
  • Testing public-facing websites
  • You want balanced synthetic data generation
  • General performance regression testing

Use disabled when:

  • Implementing custom CO mitigation
  • Performing specialized statistical analysis
  • You need only actual measurements
  • Comparing with other tools' raw output
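The difference between the two baselines can be sketched numerically. This is an illustrative approximation (the real trigger and back-fill logic is described under "How It Works" below): a smaller baseline means more synthetic requests are generated for the same slow loop.

```rust
// Approximate number of synthetic requests back-filled for one slow
// loop, given a baseline cadence: roughly one per missed cadence slot.
fn approx_synthetic(slow_ms: u64, baseline_ms: u64) -> u64 {
    (slow_ms / baseline_ms).saturating_sub(1)
}

fn main() {
    // Loop times (ms) for one GooseUser: three normal loops, one 3s stall.
    let loops = [100_u64, 110, 95, 3000];
    let average = loops.iter().sum::<u64>() / loops.len() as u64;
    let minimum = *loops.iter().min().unwrap();

    // minimum mode uses a smaller baseline, so it back-fills more
    // aggressively than average mode for the same stall.
    println!("average baseline {average}ms -> ~{} synthetic", approx_synthetic(3000, average));
    println!("minimum baseline {minimum}ms -> ~{} synthetic", approx_synthetic(3000, minimum));
}
```

The same 3-second stall produces only a couple of synthetic requests against the average cadence, but around thirty against the minimum cadence, which is why minimum mode suits strict SLA validation.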

Best Practices

1. Use Realistic User Counts

Avoid concentrating all load on too few users; with a small number of users hammering the system, every stall blocks a large share of your request schedule:

# Better: More users with think time
cargo run --release -- --users 1000 --hatch-rate 10

# Worse: Few users hammering the system
cargo run --release -- --users 10 --hatch-rate 10

2. Monitor Response Time Distributions

Always review the full distribution, not just averages:

Response Time Percentiles:
50%: 45ms      # Median looks good
95%: 127ms     # 95th percentile reasonable
99%: 894ms     # 99th shows degradation
99.9%: 5,234ms # Long tail reveals issues

3. Set Appropriate Timeouts

Balance between capturing slow responses and test duration:

#![allow(unused)]
fn main() {
use goose::prelude::*;
use std::time::Duration;

// Configure a per-request timeout by building the request manually,
// applying the underlying Reqwest builder's .timeout() method.
async fn slow_endpoint(user: &mut GooseUser) -> TransactionResult {
    let reqwest_builder = user
        .get_request_builder(&GooseMethod::Get, "/endpoint")?
        .timeout(Duration::from_secs(30));
    let goose_request = GooseRequest::builder()
        .set_request_builder(reqwest_builder)
        .build();
    let _goose = user.request(goose_request).await?;
    Ok(())
}
}

4. Use Test Plans for Controlled Load

Test plans help maintain consistent request rates. A test plan is passed to the --test-plan run-time option as a series of "users,timespan" steps:

# Gradual ramp-up prevents overwhelming the system: ramp to 100 users
# over 30s, hold for 30s, ramp to 200 over 90s, hold for 2m30s, then
# ramp down to 0 over 1m.
cargo run --release -- --test-plan "100,30s;100,30s;200,90s;200,2m30s;0,1m"

How It Works

When using average mode (the default when CO mitigation is enabled), Goose triggers Coordinated Omission Mitigation if a loop through a Scenario takes more than twice as long as the average of all previous loops. On the next loop through the Scenario, while tracking the actual metrics for each subsequent request in all Transactions, Goose also adds statistically generated "requests": the first synthetic metric gets a response_time equal to the unexpectedly long request time, the next gets that response_time minus the normal "cadence", and so on, subtracting the cadence each time until the expected response_time is reached. In this way, Goose is able to estimate the actual effect of a slowdown.

When Goose detects an abnormally slow request (one in which the individual request takes longer than the normal user_cadence), it generates an INFO-level message. This message is visible on the command line (unless --no-print-metrics is enabled), and is written to the log if Goose is started with the -g run-time flag and --goose-log is configured.

Verification Techniques

1. Compare with Expected Throughput

Calculate theoretical vs actual request rates:

# Expected requests per second
expected_rps = users * (1000 / avg_think_time_ms)

# Compare with actual from Goose metrics
actual_rps = total_requests / test_duration_seconds

# Large discrepancies indicate CO issues
co_factor = expected_rps / actual_rps
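The throughput check above can be computed directly. The numbers below are hypothetical, purely to illustrate the calculation:

```rust
// CO factor: ratio of the request rate you expected to the rate you
// actually measured. Values well above 1.0 suggest users fell behind
// their intended cadence.
fn co_factor(users: f64, avg_think_time_ms: f64, total_requests: f64, duration_secs: f64) -> f64 {
    let expected_rps = users * (1000.0 / avg_think_time_ms);
    let actual_rps = total_requests / duration_secs;
    expected_rps / actual_rps
}

fn main() {
    // Hypothetical run: 1,000 users with a 1s think time should produce
    // about 1,000 rps; a 10-minute test that recorded 540,000 requests
    // only achieved 900 rps.
    let factor = co_factor(1000.0, 1000.0, 540_000.0, 600.0);
    println!("CO factor: {factor:.2}"); // ~1.11: users ran ~11% behind schedule
}
```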

2. Analyze Response Time Variance

High variance often indicates coordinated omission:

# Look for these warning signs in metrics:
# - Standard deviation > mean response time
# - 99th percentile > 10x median
# - Maximum response time orders of magnitude higher

3. Monitor Active Transaction Counts

Track concurrent in-flight requests:

#![allow(unused)]
fn main() {
// Use GooseMetrics to monitor active transactions
// Sustained high counts indicate queueing/delays
}

Examples

An example of a request triggering Coordinated Omission Mitigation:

13:10:30 [INFO] 11.401s into goose attack: "GET http://apache/node/1557" [200] took abnormally long (1814 ms), transaction name: "(Anon) node page"
13:10:30 [INFO] 11.450s into goose attack: "GET http://apache/node/5016" [200] took abnormally long (1769 ms), transaction name: "(Anon) node page"

If the --request-log is enabled, you can get more details, in this case by looking for elapsed times matching the above messages, specifically 1,814 and 1,769 respectively:

{"coordinated_omission_elapsed":0,"elapsed":11401,"error":"","final_url":"http://apache/node/1557","method":"Get","name":"(Anon) node page","redirected":false,"response_time":1814,"status_code":200,"success":true,"update":false,"url":"http://apache/node/1557","user":2,"user_cadence":1727}
{"coordinated_omission_elapsed":0,"elapsed":11450,"error":"","final_url":"http://apache/node/5016","method":"Get","name":"(Anon) node page","redirected":false,"response_time":1769,"status_code":200,"success":true,"update":false,"url":"http://apache/node/5016","user":0,"user_cadence":1422}

In the requests file, you can see that two different user threads triggered Coordinated Omission Mitigation, specifically threads 2 and 0. Both GooseUser threads were loading the same Transaction, which due to transaction weighting is the transaction loaded most frequently. Both GooseUser threads loop through all Transactions in a similar amount of time: thread 2 takes on average 1.727 seconds, and thread 0 takes on average 1.422 seconds.

When the --request-log is enabled, requests back-filled by Coordinated Omission Mitigation also show up in the generated log file, even though they were not actually sent to the server. Normal requests not generated by Coordinated Omission Mitigation have a coordinated_omission_elapsed of 0.
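Because each --request-log line is a JSON object, back-filled entries can be picked out by their non-zero coordinated_omission_elapsed field. Below is a stdlib-only sketch that uses string matching rather than a real JSON parser; the sample lines are abbreviated versions of the log format shown above.

```rust
// Returns true when a request-log line has a non-zero
// coordinated_omission_elapsed, i.e. it was back-filled by CO
// mitigation rather than actually sent to the server.
fn is_backfilled(line: &str) -> bool {
    line.split("\"coordinated_omission_elapsed\":")
        .nth(1)
        .and_then(|rest| rest.split(|c| c == ',' || c == '}').next())
        .map(|value| value.trim() != "0")
        .unwrap_or(false)
}

fn main() {
    // Abbreviated request-log lines: one real request, one back-filled.
    let log = [
        r#"{"coordinated_omission_elapsed":0,"elapsed":11401,"response_time":1814}"#,
        r#"{"coordinated_omission_elapsed":1727,"elapsed":13128,"response_time":87}"#,
    ];

    let backfilled = log.iter().filter(|line| is_backfilled(line)).count();
    println!("{backfilled} back-filled request(s)"); // prints "1 back-filled request(s)"
}
```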

Advanced Techniques

Custom Metrics Collection

Implement additional CO detection:

#![allow(unused)]
fn main() {
use goose::prelude::*;
use std::time::Instant;

async fn monitored_request(user: &mut GooseUser) -> TransactionResult {
    let started = Instant::now();

    // Your actual request.
    let _goose = user.get("/endpoint").await?;

    // Flag requests that ran significantly longer than expected, a
    // sign that this user's cadence (and thus its request schedule)
    // is slipping.
    let elapsed_ms = started.elapsed().as_millis();
    if elapsed_ms > 100 {
        user.log_debug(
            &format!("request delayed: took {}ms", elapsed_ms),
            None,
            None,
            None,
        )?;
    }

    Ok(())
}
}

Real-time Monitoring

Use Goose's controllers for live detection:

# Enable real-time metrics via WebSocket
cargo run --release -- --websocket-host 0.0.0.0 --websocket-port 5117

# Monitor for:
# - Sudden drops in request rate
# - Spikes in response times
# - Increasing queue depths

Statistical Analysis Note

While Goose provides comprehensive data for analysis, determining statistical significance of performance changes requires additional tools and expertise. Goose produces the raw data you need, but interpretation remains your responsibility.

For detailed analysis, consider:

  • Kolmogorov-Smirnov or Anderson-Darling tests for distribution comparison
  • Note that CO-adjusted data is derived from raw data (not statistically independent)
  • Export data via --request-log for external analysis

Summary

Goose's architecture inherently protects against coordinated omission through:

  1. Comprehensive data collection - Every request is tracked
  2. Accurate percentile calculations - Full distributions preserved
  3. Flexible configuration - Timeouts and modes for various scenarios
  4. Real-time visibility - Monitor and detect issues during tests

By following these practices and utilizing Goose's built-in protections, you can ensure your load test results accurately reflect real-world system behavior under load.

Metrics

When Coordinated Omission Mitigation kicks in, Goose tracks both the "raw" metrics and the "adjusted" metrics. It shows both together when displaying metrics, first the "raw" (actually seen) metrics, followed by the "adjusted" metrics. As the minimum response time is never changed by Coordinated Omission Mitigation, this column is replaced with the "standard deviation" between the average "raw" response time and the average "adjusted" response time.

The following example was "contrived". The drupal_memcache example was run for 15 seconds, and after 10 seconds the upstream Apache server was manually "paused" for 3 seconds, forcing some abnormally slow queries. (More specifically, the apache web server was started by running . /etc/apache2/envvars && /usr/sbin/apache2 -DFOREGROUND, it was "paused" by pressing ctrl-z, and it was resumed three seconds later by typing fg.) In the "PER REQUEST METRICS" Goose shows first the "raw" metrics", followed by the "adjusted" metrics:

 ------------------------------------------------------------------------------
 Name                     |    Avg (ms) |        Min |         Max |     Median
 ------------------------------------------------------------------------------
 GET (Anon) front page    |       11.73 |          3 |          81 |         12
 GET (Anon) node page     |       81.76 |          5 |       3,390 |         37
 GET (Anon) user page     |       27.53 |         16 |          94 |         26
 GET (Auth) comment form  |       35.27 |         24 |          50 |         35
 GET (Auth) front page    |       30.68 |         20 |         111 |         26
 GET (Auth) node page     |       97.79 |         23 |       3,326 |         35
 GET (Auth) user page     |       25.20 |         21 |          30 |         25
 GET static asset         |        9.27 |          2 |          98 |          6
 POST (Auth) comment form |       52.47 |         43 |          59 |         52
 -------------------------+-------------+------------+-------------+-----------
 Aggregated               |       17.04 |          2 |       3,390 |          8
 ------------------------------------------------------------------------------
 Adjusted for Coordinated Omission:
 ------------------------------------------------------------------------------
 Name                     |    Avg (ms) |    Std Dev |         Max |     Median
 ------------------------------------------------------------------------------
 GET (Anon) front page    |      419.82 |     288.56 |       3,153 |         14
 GET (Anon) node page     |      464.72 |     270.80 |       3,390 |         40
 GET (Anon) user page     |      420.48 |     277.86 |       3,133 |         27
 GET (Auth) comment form  |      503.38 |     331.01 |       2,951 |         37
 GET (Auth) front page    |      489.99 |     324.78 |       2,960 |         33
 GET (Auth) node page     |      530.29 |     305.82 |       3,326 |         37
 GET (Auth) user page     |      500.67 |     336.21 |       2,959 |         27
 GET static asset         |      427.70 |     295.87 |       3,154 |          9
 POST (Auth) comment form |      512.14 |     325.04 |       2,932 |         55
 -------------------------+-------------+------------+-------------+-----------
 Aggregated               |      432.98 |     294.11 |       3,390 |         14

From these two tables, we can observe a notable difference between the raw and adjusted metrics. The standard deviation between the "raw" average and the "adjusted" average is considerably larger than the "raw" average, indicating that a performance event occurred that affected request timing. Whether this indicates a "valid" load test depends on your specific goals and testing context.

Note: It is beyond the scope of Goose to test for statistically significant changes in the right-tail, or other locations, of the distribution of response times. Goose produces the raw data you need to conduct these tests. For detailed statistical analysis, consider using tools like the Kolmogorov-Smirnov or Anderson-Darling tests to compare distributions. Keep in mind that CO-adjusted data is derived from raw data and thus not statistically independent.

Goose also shows multiple percentile graphs, again showing first the "raw" metrics followed by the "adjusted" metrics. The "raw" graph would suggest that less than 1% of the requests for the GET (Anon) node page were slow, and less than 0.1% of the requests for the GET (Auth) node page were slow. However, through Coordinated Omission Mitigation we can see that statistically this would have actually affected all requests, and for authenticated users the impact is visible on >25% of the requests.

 ------------------------------------------------------------------------------
 Slowest page load within specified percentile of requests (in ms):
 ------------------------------------------------------------------------------
 Name                     |    50% |    75% |    98% |    99% |  99.9% | 99.99%
 ------------------------------------------------------------------------------
 GET (Anon) front page    |     12 |     15 |     25 |     27 |     81 |     81
 GET (Anon) node page     |     37 |     43 |     60 |  3,000 |  3,000 |  3,000
 GET (Anon) user page     |     26 |     28 |     34 |     93 |     94 |     94
 GET (Auth) comment form  |     35 |     37 |     50 |     50 |     50 |     50
 GET (Auth) front page    |     26 |     34 |     45 |     88 |    110 |    110
 GET (Auth) node page     |     35 |     38 |     58 |     58 |  3,000 |  3,000
 GET (Auth) user page     |     25 |     27 |     30 |     30 |     30 |     30
 GET static asset         |      6 |     14 |     21 |     22 |     81 |     98
 POST (Auth) comment form |     52 |     55 |     59 |     59 |     59 |     59
 -------------------------+--------+--------+--------+--------+--------+-------
 Aggregated               |      8 |     16 |     47 |     53 |  3,000 |  3,000
 ------------------------------------------------------------------------------
 Adjusted for Coordinated Omission:
 ------------------------------------------------------------------------------
 Name                     |    50% |    75% |    98% |    99% |  99.9% | 99.99%
 ------------------------------------------------------------------------------
 GET (Anon) front page    |     14 |     21 |  3,000 |  3,000 |  3,000 |  3,000
 GET (Anon) node page     |     40 |     55 |  3,000 |  3,000 |  3,000 |  3,000
 GET (Anon) user page     |     27 |     32 |  3,000 |  3,000 |  3,000 |  3,000
 GET (Auth) comment form  |     37 |    400 |  2,951 |  2,951 |  2,951 |  2,951
 GET (Auth) front page    |     33 |    410 |  2,960 |  2,960 |  2,960 |  2,960
 GET (Auth) node page     |     37 |    410 |  3,000 |  3,000 |  3,000 |  3,000
 GET (Auth) user page     |     27 |    420 |  2,959 |  2,959 |  2,959 |  2,959
 GET static asset         |      9 |     20 |  3,000 |  3,000 |  3,000 |  3,000
 POST (Auth) comment form |     55 |    390 |  2,932 |  2,932 |  2,932 |  2,932
 -------------------------+--------+--------+--------+--------+--------+-------
 Aggregated               |     14 |     42 |  3,000 |  3,000 |  3,000 |  3,000

The Coordinated Omission metrics will also show up in the HTML report generated when Goose is started with the --report-file run-time option. If Coordinated Omission mitigation kicked in, the HTML report will include both the "raw" metrics and the "adjusted" metrics.

Enhanced CO Event Tracking

In addition to the raw and adjusted metrics, Goose now provides detailed Coordinated Omission event tracking that appears in all report formats (console, HTML, markdown, and JSON). This enhanced tracking provides comprehensive insights into when and how CO events affected your test.

CO Event Metrics Display

When CO events occur during your test, you'll see a dedicated "COORDINATED OMISSION METRICS" section that appears before the overview:

 === COORDINATED OMISSION METRICS ===
 Duration: 45 seconds
 Total CO Events: 12
 Events per minute: 16.00

 Request Breakdown:
   Actual requests: 2,847
   Synthetic requests: 156 (5.2%)

 Severity Distribution:
   Minor: 8
   Moderate: 3
   Severe: 1
   Critical: 0

Understanding CO Event Severity

Goose classifies CO events based on how much longer the actual response took compared to the expected cadence:

  • Minor (2-5x): Response took 2-5 times longer than expected
  • Moderate (5-10x): Response took 5-10 times longer than expected
  • Severe (10-20x): Response took 10-20 times longer than expected
  • Critical (>20x): Response took more than 20 times longer than expected
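The buckets above can be expressed as a simple classifier. This is an illustrative sketch of the documented thresholds, not Goose's internal code:

```rust
// Classify a CO event by how many times longer the response took
// than the expected cadence.
fn severity(response_ms: u64, cadence_ms: u64) -> &'static str {
    let ratio = response_ms as f64 / cadence_ms as f64;
    if ratio > 20.0 {
        "Critical"
    } else if ratio > 10.0 {
        "Severe"
    } else if ratio > 5.0 {
        "Moderate"
    } else if ratio >= 2.0 {
        "Minor"
    } else {
        "below CO threshold"
    }
}

fn main() {
    println!("{}", severity(350, 100));  // 3.5x  -> Minor
    println!("{}", severity(700, 100));  // 7x    -> Moderate
    println!("{}", severity(2500, 100)); // 25x   -> Critical
}
```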

Interpreting Synthetic Request Percentage

The synthetic request percentage tells you how much of your data comes from CO mitigation:

  • <10%: High confidence in results, minimal CO impact
  • 10-30%: Medium confidence, some CO events occurred
  • 30-50%: Lower confidence, significant CO impact
  • >50%: Results heavily influenced by synthetic data

Practical Example: Microservice Testing

Consider testing a microservice with strict 100ms SLA requirements:

# Test with minimum cadence for strict SLA validation
cargo run --example api_test -- \
    --host https://api.example.com \
    --users 50 \
    --run-time 5m \
    --co-mitigation minimum

# Results might show:
# === COORDINATED OMISSION METRICS ===
# Duration: 300 seconds
# Total CO Events: 45
# Events per minute: 9.00
# 
# Request Breakdown:
#   Actual requests: 14,523
#   Synthetic requests: 892 (5.8%)
# 
# Severity Distribution:
#   Minor: 38    # Most events were 2-5x slower than expected
#   Moderate: 6  # Some 5-10x slower
#   Severe: 1    # One event 10-20x slower
#   Critical: 0  # No critical events

This tells you that while most requests met the SLA, there were 45 instances where performance degraded, affecting 5.8% of your measurements. The predominance of "Minor" events suggests occasional but not severe performance issues.

Practical Example: Web Application Testing

For a public-facing web application with more tolerance for variance:

# Test with average cadence for realistic user simulation
cargo run --example webapp_test -- \
    --host https://webapp.example.com \
    --users 200 \
    --run-time 10m \
    --co-mitigation average

# Results might show:
# === COORDINATED OMISSION METRICS ===
# Duration: 600 seconds
# Total CO Events: 8
# Events per minute: 0.80
# 
# Request Breakdown:
#   Actual requests: 28,945
#   Synthetic requests: 67 (0.2%)
# 
# Severity Distribution:
#   Minor: 5
#   Moderate: 2
#   Severe: 1
#   Critical: 0

This shows a much healthier system with only occasional CO events and minimal synthetic data generation (0.2%), indicating the system handled the load well.

When to Be Concerned

Red flags in CO metrics:

  • Synthetic request percentage >30%
  • High frequency of Severe or Critical events
  • Events per minute consistently >10
  • Large gaps between raw and adjusted percentiles

Green flags:

  • Synthetic request percentage <10%
  • Mostly Minor events with few Moderate
  • Low events per minute (<5)
  • Small differences between raw and adjusted metrics

Using CO Metrics for Capacity Planning

CO event tracking helps with capacity planning:

  1. Identify Breaking Points: Watch for sudden increases in CO events as load increases
  2. SLA Validation: Use minimum cadence mode to catch any SLA violations
  3. Performance Regression: Compare CO metrics across test runs to detect degradation
  4. Resource Scaling: CO events often indicate when additional resources are needed

The enhanced CO metrics provide the detailed insights needed to understand not just that performance issues occurred, but their frequency, severity, and impact on your test results.

Practical Examples

This chapter provides real-world examples of when and how to use different Coordinated Omission mitigation strategies. Each example includes the command to run, expected output, and interpretation guidance.

Example 1: Microservice SLA Validation

Scenario: You're testing a payment processing microservice that must respond within 100ms for 99% of requests.

Goal: Detect any SLA violations, no matter how brief.

Strategy: Use minimum cadence to catch even momentary slowdowns.

# Test command
cargo run --example payment_service -- \
    --host https://payments.api.company.com \
    --users 20 \
    --run-time 5m \
    --co-mitigation minimum \
    --report-file payment_test.html

# Expected healthy output:
# === COORDINATED OMISSION METRICS ===
# Duration: 300 seconds
# Total CO Events: 2
# Events per minute: 0.40
# 
# Request Breakdown:
#   Actual requests: 18,450
#   Synthetic requests: 12 (0.1%)
# 
# Severity Distribution:
#   Minor: 2
#   Moderate: 0
#   Severe: 0
#   Critical: 0

Interpretation:

  • Excellent: Only 2 minor CO events in 5 minutes
  • SLA Met: 0.1% synthetic requests indicates 99.9% of requests met timing expectations
  • No Critical Issues: No severe or critical events

Red Flag Example:

# Problematic output:
# === COORDINATED OMISSION METRICS ===
# Duration: 300 seconds
# Total CO Events: 45
# Events per minute: 9.00
# 
# Request Breakdown:
#   Actual requests: 18,450
#   Synthetic requests: 892 (4.6%)
# 
# Severity Distribution:
#   Minor: 38
#   Moderate: 6
#   Severe: 1
#   Critical: 0

Action Required: 4.6% synthetic requests and frequent CO events indicate the service is struggling to meet SLA requirements consistently.

Example 2: E-commerce Website Load Testing

Scenario: Testing an e-commerce site during Black Friday preparation. Users can tolerate some variability, but you want to understand overall performance.

Goal: Simulate realistic user behavior while detecting significant performance issues.

Strategy: Use average cadence for balanced detection.

# Test command
cargo run --example ecommerce_site -- \
    --host https://shop.company.com \
    --users 500 \
    --run-time 15m \
    --co-mitigation average \
    --report-file blackfriday_test.html

# Expected healthy output:
# === COORDINATED OMISSION METRICS ===
# Duration: 900 seconds
# Total CO Events: 12
# Events per minute: 0.80
# 
# Request Breakdown:
#   Actual requests: 145,230
#   Synthetic requests: 234 (0.2%)
# 
# Severity Distribution:
#   Minor: 8
#   Moderate: 3
#   Severe: 1
#   Critical: 0

Interpretation:

  • Good Performance: Low CO event rate (0.8/minute)
  • Minimal Impact: Only 0.2% synthetic requests
  • ⚠️ Monitor: One severe event warrants investigation

Concerning Example:

# Problematic output:
# === COORDINATED OMISSION METRICS ===
# Duration: 900 seconds
# Total CO Events: 156
# Events per minute: 10.40
# 
# Request Breakdown:
#   Actual requests: 145,230
#   Synthetic requests: 12,450 (7.9%)
# 
# Severity Distribution:
#   Minor: 89
#   Moderate: 45
#   Severe: 18
#   Critical: 4

Action Required: High CO event rate and 7.9% synthetic requests indicate the site will struggle under Black Friday load. Scale up resources or optimize performance.

Example 3: API Gateway Performance Testing

Scenario: Testing an API gateway that routes requests to multiple backend services. You want to understand how backend slowdowns affect the gateway.

Goal: Detect when backend issues cause gateway performance degradation.

Strategy: Use average cadence with longer test duration to capture intermittent issues.

# Test command
cargo run --example api_gateway -- \
    --host https://gateway.api.company.com \
    --users 100 \
    --run-time 30m \
    --co-mitigation average \
    --report-file gateway_test.html

# Healthy distributed system output:
# === COORDINATED OMISSION METRICS ===
# Duration: 1800 seconds
# Total CO Events: 23
# Events per minute: 0.77
# 
# Request Breakdown:
#   Actual requests: 324,500
#   Synthetic requests: 445 (0.1%)
# 
# Severity Distribution:
#   Minor: 18
#   Moderate: 4
#   Severe: 1
#   Critical: 0

Interpretation:

  • Stable Gateway: Low synthetic percentage indicates good overall performance
  • Resilient: Minor events suggest the gateway handles backend hiccups well
  • Scalable: Consistent performance over 30 minutes

Example 4: Database Connection Pool Testing

Scenario: Testing an application's database connection pool under load to ensure it doesn't become a bottleneck.

Goal: Detect connection pool exhaustion or database slowdowns.

Strategy: Use minimum cadence to catch any database-related delays immediately.

# Test command
cargo run --example database_app -- \
    --host https://app.company.com \
    --users 200 \
    --run-time 10m \
    --co-mitigation minimum \
    --report-file db_pool_test.html

# Healthy connection pool output:
# === COORDINATED OMISSION METRICS ===
# Duration: 600 seconds
# Total CO Events: 8
# Events per minute: 0.80
# 
# Request Breakdown:
#   Actual requests: 89,450
#   Synthetic requests: 67 (0.1%)
# 
# Severity Distribution:
#   Minor: 6
#   Moderate: 2
#   Severe: 0
#   Critical: 0

Pool Exhaustion Example:

# Connection pool exhaustion:
# === COORDINATED OMISSION METRICS ===
# Duration: 600 seconds
# Total CO Events: 234
# Events per minute: 23.40
# 
# Request Breakdown:
#   Actual requests: 89,450
#   Synthetic requests: 8,920 (9.1%)
# 
# Severity Distribution:
#   Minor: 45
#   Moderate: 123
#   Severe: 56
#   Critical: 10

Action Required: High CO event rate and 9.1% synthetic requests indicate connection pool exhaustion. Increase pool size or optimize database queries.

Example 5: CDN Performance Validation

Scenario: Testing how your application performs when the CDN is slow or unavailable.

Goal: Understand the impact of CDN issues on user experience.

Strategy: Use average cadence to simulate realistic user tolerance.

# Test command with CDN issues simulated
cargo run --example cdn_test -- \
    --host https://app.company.com \
    --users 150 \
    --run-time 10m \
    --co-mitigation average \
    --report-file cdn_impact_test.html

# CDN issues detected:
# === COORDINATED OMISSION METRICS ===
# Duration: 600 seconds
# Total CO Events: 89
# Events per minute: 8.90
# 
# Request Breakdown:
#   Actual requests: 67,230
#   Synthetic requests: 2,340 (3.4%)
# 
# Severity Distribution:
#   Minor: 34
#   Moderate: 38
#   Severe: 15
#   Critical: 2

Interpretation:

  • ⚠️ CDN Impact: 3.4% synthetic requests show CDN issues affect user experience
  • ⚠️ User Frustration: Moderate and severe events indicate noticeable delays
  • 📊 Business Impact: Use this data to justify CDN redundancy or optimization

Example 6: Baseline Testing (No CO Expected)

Scenario: Testing a well-optimized system under normal load to establish performance baselines.

Goal: Confirm the system performs consistently without CO events.

Strategy: Use disabled to get pure measurements, then compare with average mode.

# First, test with CO disabled for baseline
cargo run --example baseline_test -- \
    --host https://optimized.company.com \
    --users 100 \
    --run-time 10m \
    --co-mitigation disabled \
    --report-file baseline_raw.html

# Then test with CO detection enabled
cargo run --example baseline_test -- \
    --host https://optimized.company.com \
    --users 100 \
    --run-time 10m \
    --co-mitigation average \
    --report-file baseline_co.html

# Expected output (CO enabled):
# === COORDINATED OMISSION METRICS ===
# Duration: 600 seconds
# Total CO Events: 0
# Events per minute: 0.00
# 
# Request Breakdown:
#   Actual requests: 45,670
#   Synthetic requests: 0 (0.0%)

Perfect Baseline: Zero CO events and 0% synthetic requests indicate the system performs consistently under this load level.

Interpreting Results Across Examples

Green Flags (Healthy System)

  • CO events per minute < 2
  • Synthetic request percentage < 1%
  • Mostly Minor severity events
  • Consistent performance across test duration

Yellow Flags (Monitor Closely)

  • CO events per minute 2-10
  • Synthetic request percentage 1-5%
  • Some Moderate severity events
  • Occasional performance dips

Red Flags (Action Required)

  • CO events per minute > 10
  • Synthetic request percentage > 5%
  • Frequent Severe or any Critical events
  • Degrading performance over time
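To make the thresholds concrete, the three bands above can be expressed as a small helper function. This is a purely illustrative sketch; the `CoHealth` enum and `classify` function are hypothetical helpers, not part of the Goose API:

```rust
// Illustrative classification of CO metrics using the thresholds above.
// The enum and function are made up for this example.
#[derive(Debug, PartialEq)]
enum CoHealth {
    Green,
    Yellow,
    Red,
}

fn classify(events_per_minute: f64, synthetic_pct: f64) -> CoHealth {
    if events_per_minute > 10.0 || synthetic_pct > 5.0 {
        CoHealth::Red
    } else if events_per_minute >= 2.0 || synthetic_pct >= 1.0 {
        CoHealth::Yellow
    } else {
        CoHealth::Green
    }
}

fn main() {
    // Well under both yellow thresholds: healthy.
    assert_eq!(classify(0.8, 0.2), CoHealth::Green);
    // Inside the 2-10 events/minute and 1-5% synthetic bands: monitor closely.
    assert_eq!(classify(5.0, 3.0), CoHealth::Yellow);
    // Over either red threshold: action required.
    assert_eq!(classify(23.4, 9.1), CoHealth::Red);
}
```

Feeding the summary numbers from a report into a check like this is one way to gate automated load test runs on CO health.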

Using CO Metrics for Capacity Planning

  1. Find Breaking Point: Gradually increase load until CO events spike
  2. Set Alerts: Monitor CO metrics in production to detect issues early
  3. Compare Environments: Use CO metrics to validate staging vs production performance
  4. Track Trends: Monitor CO metrics over time to detect performance regression
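For step 1, a single run can step the load up in stages with the --test-plan option (configurable via GooseDefault::TestPlan), making the stage where CO events spike visible in one report. The example name, host, and stage sizes below are placeholders:

```
# Ramp to 100, 250, then 500 users, holding each level for 5 minutes,
# and watch for the stage where CO events per minute jump.
cargo run --example api_gateway -- \
    --host https://gateway.api.company.com \
    --test-plan "100,30s;100,5m;250,30s;250,5m;500,30s;500,5m;0,0s" \
    --co-mitigation average \
    --report-file breaking_point.html
```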

Best Practices Summary

  1. Choose the Right Mode:

    • minimum for strict SLA validation
    • average for realistic user simulation
    • disabled for baseline measurements
  2. Set Appropriate Test Duration:

    • Short tests (5-10 min) for quick validation
    • Long tests (30+ min) for stability assessment
  3. Monitor Key Metrics:

    • Events per minute rate
    • Synthetic request percentage
    • Severity distribution
    • Trends over time
  4. Take Action Based on Results:

    • < 1% synthetic: System healthy
    • 1-5% synthetic: Monitor and investigate
    • >5% synthetic: Performance issues need attention


These examples provide a foundation for understanding how CO metrics help identify and quantify performance issues in different scenarios. Use them as templates for your own testing strategies.

Configuration

Configuration of Goose load tests is done in Rust code within the load test plan. Complete documentation of all load test configuration can be found in the developer documentation.

Defaults

All run-time options can be configured with custom defaults. For example, you may want to default to the host name of your local development environment, only requiring that --host be set when running against a production environment. Assuming your local development environment is at "http://local.dev/" you can do this as follows:

    GooseAttack::initialize()?
        .register_scenario(scenario!("LoadtestTransactions")
            .register_transaction(transaction!(loadtest_index))
        )
        .set_default(GooseDefault::Host, "http://local.dev/")?
        .execute()
        .await?;

    Ok(())

The following defaults can be configured with a &str:

  • host: GooseDefault::Host
  • set a per-request timeout: GooseDefault::Timeout
  • users to start per second: GooseDefault::HatchRate
  • report file names: GooseDefault::ReportFile
  • goose log file name: GooseDefault::GooseLog
  • request log file name: GooseDefault::RequestLog
  • transaction log file name: GooseDefault::TransactionLog
  • error log file name: GooseDefault::ErrorLog
  • debug log file name: GooseDefault::DebugLog
  • test plan: GooseDefault::TestPlan
  • host to bind telnet Controller to: GooseDefault::TelnetHost
  • host to bind WebSocket Controller to: GooseDefault::WebSocketHost
  • host to bind Manager to: GooseDefault::ManagerBindHost
  • host for Worker to connect to: GooseDefault::ManagerHost

The following defaults can be configured with a usize integer:

  • total users to start: GooseDefault::Users
  • how quickly to start all users: GooseDefault::StartupTime
  • how often to print running metrics: GooseDefault::RunningMetrics
  • number of seconds for test to run: GooseDefault::RunTime
  • log level: GooseDefault::LogLevel
  • quiet: GooseDefault::Quiet
  • verbosity: GooseDefault::Verbose
  • maximum requests per second: GooseDefault::ThrottleRequests
  • number of Workers to expect: GooseDefault::ExpectWorkers
  • port to bind telnet Controller to: GooseDefault::TelnetPort
  • port to bind WebSocket Controller to: GooseDefault::WebSocketPort
  • port to bind Manager to: GooseDefault::ManagerBindPort
  • port for Worker to connect to: GooseDefault::ManagerPort

The following defaults can be configured with a bool:

  • do not reset metrics after all users start: GooseDefault::NoResetMetrics
  • do not print metrics: GooseDefault::NoPrintMetrics
  • do not track metrics: GooseDefault::NoMetrics
  • do not track transaction metrics: GooseDefault::NoTransactionMetrics
  • do not log the request body in the error log: GooseDefault::NoRequestBody
  • do not display the error summary: GooseDefault::NoErrorSummary
  • do not log the response body in the debug log: GooseDefault::NoDebugBody
  • do not start telnet Controller thread: GooseDefault::NoTelnet
  • do not start WebSocket Controller thread: GooseDefault::NoWebSocket
  • do not autostart load test, wait instead for a Controller to start: GooseDefault::NoAutoStart
  • do not gzip compress requests: GooseDefault::NoGzip
  • do not track status codes: GooseDefault::NoStatusCodes
  • follow redirect of base_url: GooseDefault::StickyFollow
  • enable Manager mode: GooseDefault::Manager
  • enable Worker mode: GooseDefault::Worker
  • ignore load test checksum: GooseDefault::NoHashCheck
  • do not collect granular data in the reports: GooseDefault::NoGranularData

The following defaults can be configured with a GooseLogFormat:

  • request log file format: GooseDefault::RequestFormat
  • transaction log file format: GooseDefault::TransactionFormat
  • error log file format: GooseDefault::ErrorFormat
  • debug log file format: GooseDefault::DebugFormat

The following defaults can be configured with a GooseCoordinatedOmissionMitigation:

  • default Coordinated Omission Mitigation strategy: GooseDefault::CoordinatedOmissionMitigation
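The GooseLogFormat and GooseCoordinatedOmissionMitigation defaults are set the same way as the others. A minimal sketch, assuming the Csv and Disabled variants (see the developer documentation for the full list of variants):

```rust
    GooseAttack::initialize()?
        .register_scenario(scenario!("LoadtestTransactions")
            .register_transaction(transaction!(loadtest_index))
        )
        // Write the request log in CSV format.
        .set_default(GooseDefault::RequestFormat, GooseLogFormat::Csv)?
        // Disable Coordinated Omission Mitigation by default.
        .set_default(
            GooseDefault::CoordinatedOmissionMitigation,
            GooseCoordinatedOmissionMitigation::Disabled,
        )?
        .execute()
        .await?;

    Ok(())
```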

For example, without any run-time options the following load test would automatically run against local.dev, logging requests to goose-requests.log and debug information to goose-debug.log. It will automatically launch 20 users over 5 seconds (4 users per second), and run the load test for 15 minutes. Metrics will be displayed every minute during the test, and the status code table will be disabled. The order in which the defaults are set is not important.

    GooseAttack::initialize()?
        .register_scenario(scenario!("LoadtestTransactions")
            .register_transaction(transaction!(loadtest_index))
        )
        .set_default(GooseDefault::Host, "local.dev")?
        .set_default(GooseDefault::RequestLog, "goose-requests.log")?
        .set_default(GooseDefault::DebugLog, "goose-debug.log")?
        .set_default(GooseDefault::Users, 20)?
        .set_default(GooseDefault::HatchRate, 4)?
        .set_default(GooseDefault::RunTime, 900)?
        .set_default(GooseDefault::RunningMetrics, 60)?
        .set_default(GooseDefault::NoStatusCodes, true)?
        .execute()
        .await?;

    Ok(())

Find a complete list of all configuration options that can be configured with custom defaults in the developer documentation, as well as complete details on how to configure defaults.

Scheduling Scenarios And Transactions

When starting a load test, Goose assigns one Scenario to each GooseUser thread. By default, it assigns Scenarios (and then Transactions within each Scenario) in a round robin order. As new GooseUser threads are launched, the first is assigned the first defined Scenario, the next is assigned the next defined Scenario, and so on, looping through all available Scenarios. Weighting is respected during this process, so if one Scenario is weighted heavier than the others, it will be assigned to more GooseUser threads by the end of the launching process.

The GooseScheduler can be configured to instead launch Scenarios and Transactions in a Serial or a Random order. When configured to allocate in a Serial order, Scenarios and Transactions are launched in the exact order they are defined in the load test (see below for more detail on how this works). When configured to allocate in a Random order, running the same load test multiple times can lead to different amounts of load being generated.

Prior to Goose 0.10.6, Scenarios were allocated in a serial order. Prior to Goose 0.11.1, Transactions were allocated in a serial order. To restore the old behavior, use the GooseAttack::set_scheduler() method as follows:

    GooseAttack::initialize()?
        .set_scheduler(GooseScheduler::Serial);

To randomize the order in which Scenarios and Transactions are allocated, configure as follows:

    GooseAttack::initialize()?
        .set_scheduler(GooseScheduler::Random);

The following configuration is possible but superfluous because RoundRobin is the scheduling default, and is therefore how Goose behaves even if the .set_scheduler() method is not called at all:

    GooseAttack::initialize()?
        .set_scheduler(GooseScheduler::RoundRobin);

Scheduling Example

The following simple example helps illustrate how the different schedulers work.

use goose::prelude::*;

#[tokio::main]
async fn main() -> Result<(), GooseError> {
    GooseAttack::initialize()?
        .register_scenario(scenario!("Scenario1")
            .register_transaction(transaction!(transaction1).set_weight(2)?)
            .register_transaction(transaction!(transaction2))
            .set_weight(2)?
        )
        .register_scenario(scenario!("Scenario2")
            .register_transaction(transaction!(transaction1))
            .register_transaction(transaction!(transaction2).set_weight(2)?)
        )
        .execute()
        .await?;

    Ok(())
}

Round Robin Scheduler

This first example assumes the default of .set_scheduler(GooseScheduler::RoundRobin).

If Goose is told to launch only two users, the first GooseUser will run Scenario1 and the second will run Scenario2. Even though Scenario1 has a weight of 2, GooseUsers are allocated round-robin, so with only two users the second instance of Scenario1 is never launched.

The GooseUser running Scenario1 will then launch transactions repeatedly in the following order: transaction1, transaction2, transaction1. If it runs through twice, it runs the transactions in the following order: transaction1, transaction2, transaction1, transaction1, transaction2, transaction1.

Serial Scheduler

This second example assumes the manual configuration of .set_scheduler(GooseScheduler::Serial).

If Goose is told to launch only two users, both GooseUsers will run Scenario1, as it has a weight of 2. Scenario2 will not be assigned to either user.

Both GooseUsers running Scenario1 will then launch transactions repeatedly in the following order: transaction1, transaction1, transaction2. If they run through twice, they run the transactions in the following order: transaction1, transaction1, transaction2, transaction1, transaction1, transaction2.

Random Scheduler

This third example assumes the manual configuration of .set_scheduler(GooseScheduler::Random).

If Goose is told to launch only two users, the first will be randomly assigned either Scenario1 or Scenario2. Regardless of which is assigned to the first user, the second will again be randomly assigned either Scenario1 or Scenario2. If the load test is stopped and run again, the users are randomly re-assigned; there is no consistency between load test runs.

Each GooseUser will run transactions in a random order. The random order will be determined at start time and then will run repeatedly in this random order as long as the user runs.
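The Serial and RoundRobin allocation orders described above can be sketched in plain Rust. This is an illustration of the behavior, not Goose's internal implementation; the function names are made up for the example:

```rust
// Illustrative sketch of the Serial and RoundRobin allocation orders.
fn serial<'a>(weights: &[(&'a str, usize)], users: usize) -> Vec<&'a str> {
    // Expand each scenario by its weight, then loop over the expanded
    // list in definition order until enough users are allocated.
    let expanded: Vec<&'a str> = weights
        .iter()
        .flat_map(|&(name, weight)| std::iter::repeat(name).take(weight))
        .collect();
    expanded.into_iter().cycle().take(users).collect()
}

fn round_robin<'a>(weights: &[(&'a str, usize)], users: usize) -> Vec<&'a str> {
    // Cycle through the scenarios, allocating one instance of each
    // scenario that still has remaining weight.
    let mut remaining: Vec<(&'a str, usize)> = weights.to_vec();
    let mut allocated = Vec::new();
    while allocated.len() < users {
        let mut assigned_any = false;
        for (name, weight) in remaining.iter_mut() {
            if *weight > 0 && allocated.len() < users {
                allocated.push(*name);
                *weight -= 1;
                assigned_any = true;
            }
        }
        if !assigned_any {
            // All weights consumed: start a fresh pass.
            remaining = weights.to_vec();
        }
    }
    allocated
}

fn main() {
    let scenarios = [("Scenario1", 2), ("Scenario2", 1)];
    // RoundRobin with two users never reaches the second copy of Scenario1.
    assert_eq!(round_robin(&scenarios, 2), ["Scenario1", "Scenario2"]);
    assert_eq!(round_robin(&scenarios, 3), ["Scenario1", "Scenario2", "Scenario1"]);
    // Serial exhausts Scenario1's weight of 2 before touching Scenario2.
    assert_eq!(serial(&scenarios, 2), ["Scenario1", "Scenario1"]);
    assert_eq!(serial(&scenarios, 3), ["Scenario1", "Scenario1", "Scenario2"]);
}
```

The same two functions reproduce the transaction orders from the example: round_robin over transaction1 (weight 2) and transaction2 yields transaction1, transaction2, transaction1, while serial yields transaction1, transaction1, transaction2.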

RustLS

By default Reqwest (and therefore Goose) uses the system-native transport layer security to make HTTPS requests. This means schannel on Windows, Security-Framework on macOS, and OpenSSL on Linux. If you'd prefer to use a pure Rust TLS implementation, disable default features and enable rustls-tls in Cargo.toml as follows:

[dependencies]
goose = { version = "^0.18", default-features = false, features = ["rustls-tls"] }

Examples

Goose includes several examples to demonstrate load test functionality, including:

Simple Example

The examples/simple.rs example copies the simple load test documented on the locust.io web page, rewritten in Rust for Goose. It uses minimal advanced functionality, but demonstrates how to GET and POST pages. It defines a single Scenario which has the user log in and then loads a couple of pages.

Goose can make use of all available CPU cores. By default, it launches one user per core, and it can be configured to launch many more. The following run was instead configured to launch 1,024 users. Each user randomly pauses 5 to 15 seconds after each transaction, so it's possible to spin up a large number of users. Here is a snapshot of top when running this example on a 1-core VM with 10G of available RAM -- there were ample resources to launch considerably more "users", though ulimit had to be resized:

top - 06:56:06 up 15 days,  3:13,  2 users,  load average: 0.22, 0.10, 0.04
Tasks: 116 total,   3 running, 113 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.7 us,  0.7 sy,  0.0 ni, 96.7 id,  0.0 wa,  0.0 hi,  1.0 si,  0.0 st
MiB Mem :   9994.9 total,   7836.8 free,   1101.2 used,   1056.9 buff/cache
MiB Swap:  10237.0 total,  10237.0 free,      0.0 used.   8606.9 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 1339 goose     20   0 1235480 758292   8984 R   3.0   7.4   0:06.56 simple

Complete Source Code

//! Simple Goose load test example. Duplicates the simple example on the
//! Locust project page (<https://locust.io/>).
//!
//! ## License
//!
//! Copyright 2020-2022 Jeremy Andrews
//!
//! Licensed under the Apache License, Version 2.0 (the "License");
//! you may not use this file except in compliance with the License.
//! You may obtain a copy of the License at
//!
//! <http://www.apache.org/licenses/LICENSE-2.0>
//!
//! Unless required by applicable law or agreed to in writing, software
//! distributed under the License is distributed on an "AS IS" BASIS,
//! WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
//! See the License for the specific language governing permissions and
//! limitations under the License.

use goose::prelude::*;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), GooseError> {
    GooseAttack::initialize()?
        // In this example, we only create a single scenario, named "WebsiteUser".
        .register_scenario(
            scenario!("WebsiteUser")
                // After each transactions runs, sleep randomly from 5 to 15 seconds.
                .set_wait_time(Duration::from_secs(5), Duration::from_secs(15))?
                // This transaction only runs one time when the user first starts.
                .register_transaction(transaction!(website_login).set_on_start())
                // These next two transactions run repeatedly as long as the load test is running.
                .register_transaction(transaction!(website_index))
                .register_transaction(transaction!(website_about)),
        )
        .execute()
        .await?;

    Ok(())
}

/// Demonstrates how to log in when a user starts. We flag this transaction as an
/// on_start transaction when registering it above. This means it only runs one time
/// per user, when the user thread first starts.
async fn website_login(user: &mut GooseUser) -> TransactionResult {
    let params = [("username", "test_user"), ("password", "")];
    let _goose = user.post_form("/login", &params).await?;

    Ok(())
}

/// A very simple transaction that simply loads the front page.
async fn website_index(user: &mut GooseUser) -> TransactionResult {
    let _goose = user.get("/").await?;

    Ok(())
}

/// A very simple transaction that simply loads the about page.
async fn website_about(user: &mut GooseUser) -> TransactionResult {
    let _goose = user.get("/about/").await?;

    Ok(())
}

Closure Example

The examples/closure.rs example loads three different pages on a web site. Instead of defining a hard-coded Transaction function for each, the paths are passed in via a vector and the TransactionFunction is dynamically created in a closure.

Details

The paths to be loaded are first defined in a vector:

#![allow(unused)]
fn main() {
    let paths = vec!["/", "/about", "/our-team"];
}

A transaction function for each path is then dynamically created as a closure:

    for request_path in paths {
        let path = request_path;

        let closure: TransactionFunction = Arc::new(move |user| {
            Box::pin(async move {
                let _goose = user.get(path).await?;

                Ok(())
            })
        });

Complete Source Code

//! Simple Goose load test example using closures.
//!
//! ## License
//!
//! Copyright 2020 Fabian Franz
//!
//! Licensed under the Apache License, Version 2.0 (the "License");
//! you may not use this file except in compliance with the License.
//! You may obtain a copy of the License at
//!
//! <http://www.apache.org/licenses/LICENSE-2.0>
//!
//! Unless required by applicable law or agreed to in writing, software
//! distributed under the License is distributed on an "AS IS" BASIS,
//! WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
//! See the License for the specific language governing permissions and
//! limitations under the License.

use goose::prelude::*;
use std::boxed::Box;
use std::sync::Arc;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), GooseError> {
    let mut scenario = scenario!("WebsiteUser")
        // After each transaction runs, sleep randomly from 5 to 15 seconds.
        .set_wait_time(Duration::from_secs(5), Duration::from_secs(15))?;

    let paths = vec!["/", "/about", "/our-team"];
    for request_path in paths {
        let path = request_path;

        let closure: TransactionFunction = Arc::new(move |user| {
            Box::pin(async move {
                let _goose = user.get(path).await?;

                Ok(())
            })
        });

        let transaction = Transaction::new(closure);
        // We need to do the variable dance as scenario.register_transaction returns self and hence moves
        // self out of `scenario`. By storing it in a new local variable and then moving it over
        // we can avoid that error.
        let new_scenario = scenario.register_transaction(transaction);
        scenario = new_scenario;
    }

    GooseAttack::initialize()?
        // In this example, we only create a single scenario, named "WebsiteUser".
        .register_scenario(scenario)
        .execute()
        .await?;

    Ok(())
}

Session Example

The examples/session.rs example demonstrates how you can add JWT authentication support to your load test, making use of the GooseUserData marker trait. In this example, the session is recorded in the GooseUser object with set_session_data, and retrieved with get_session_data_unchecked.

Details

In this example, the GooseUserData is a simple struct containing a string:

#![allow(unused)]
fn main() {
struct Session {
    jwt_token: String,
}
}

The session data structure is created from JSON-formatted response data returned by an authentication request, and stored uniquely in each GooseUser instance:

    user.set_session_data(Session {
        jwt_token: response.jwt_token,
    });

The session data is retrieved from the GooseUser object with each subsequent request. To keep the example simple no validation is done:

    // This will panic if the session is missing or if the session is not of the right type.
    // Use `get_session_data` to handle a missing session.
    let session = user.get_session_data_unchecked::<Session>();

    // Create a Reqwest RequestBuilder object and configure bearer authentication when making
    // a GET request for the index.
    let reqwest_request_builder = user
        .get_request_builder(&GooseMethod::Get, "/")?
        .bearer_auth(&session.jwt_token);

This example will panic if you run it without setting up a proper load test environment that actually sets the expected JWT token.

Complete Source Code

//! Goose load test example, leveraging the per-GooseUser `GooseUserData` field
//! to store a per-user session JWT authentication token.
//!
//! ## License
//!
//! Copyright 2020-2022 Jeremy Andrews
//!
//! Licensed under the Apache License, Version 2.0 (the "License");
//! you may not use this file except in compliance with the License.
//! You may obtain a copy of the License at
//!
//! <http://www.apache.org/licenses/LICENSE-2.0>
//!
//! Unless required by applicable law or agreed to in writing, software
//! distributed under the License is distributed on an "AS IS" BASIS,
//! WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
//! See the License for the specific language governing permissions and
//! limitations under the License.

use goose::prelude::*;
use serde::Deserialize;
use std::time::Duration;

struct Session {
    jwt_token: String,
}

#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct AuthenticationResponse {
    jwt_token: String,
}

#[tokio::main]
async fn main() -> Result<(), GooseError> {
    GooseAttack::initialize()?
        // In this example, we only create a single scenario, named "WebsiteUser".
        .register_scenario(
            scenario!("WebsiteUser")
                // After each transaction runs, sleep randomly from 5 to 15 seconds.
                .set_wait_time(Duration::from_secs(5), Duration::from_secs(15))?
                // This transaction only runs one time when the user first starts.
                .register_transaction(transaction!(website_signup).set_on_start())
                // This transaction runs repeatedly as long as the load test is running.
                .register_transaction(transaction!(authenticated_index)),
        )
        .execute()
        .await?;

    Ok(())
}

/// Demonstrates how to log in and set a session when a user starts. We flag this transaction as an
/// on_start transaction when registering it above. This means it only runs one time
/// per user, when the user thread first starts.
async fn website_signup(user: &mut GooseUser) -> TransactionResult {
    let params = [("username", "test_user"), ("password", "")];
    let response = match user.post_form("/signup", &params).await?.response {
        Ok(r) => match r.json::<AuthenticationResponse>().await {
            Ok(j) => j,
            Err(e) => return Err(Box::new(e.into())),
        },
        Err(e) => return Err(Box::new(e.into())),
    };

    user.set_session_data(Session {
        jwt_token: response.jwt_token,
    });

    Ok(())
}

/// A very simple transaction that simply loads the front page.
async fn authenticated_index(user: &mut GooseUser) -> TransactionResult {
    // This will panic if the session is missing or if the session is not of the right type.
    // Use `get_session_data` to handle a missing session.
    let session = user.get_session_data_unchecked::<Session>();

    // Create a Reqwest RequestBuilder object and configure bearer authentication when making
    // a GET request for the index.
    let reqwest_request_builder = user
        .get_request_builder(&GooseMethod::Get, "/")?
        .bearer_auth(&session.jwt_token);

    // Add the manually created RequestBuilder and build a GooseRequest object.
    let goose_request = GooseRequest::builder()
        .set_request_builder(reqwest_request_builder)
        .build();

    // Make the actual request.
    user.request(goose_request).await?;

    Ok(())
}

Drupal Memcache Example

The examples/drupal_memcache.rs example is used to validate the performance of each release of the Drupal Memcache Module.

Background

Prior to every release of the Drupal Memcache Module, Tag1 Consulting has run a load test to ensure consistent performance of the module, which is depended on by tens of thousands of Drupal websites.

The load test was initially implemented as a JMeter testplan. It was later converted to a Locust testplan. Most recently it was converted to a Goose testplan.

This testplan is maintained as a simple real-world Goose load test example.

Details

The authenticated GooseUser is labeled as AuthBrowsingUser and demonstrates logging in one time at the start of the load test:

            scenario!("AuthBrowsingUser")
                .set_weight(1)?
                .register_transaction(
                    transaction!(drupal_memcache_login)
                        .set_on_start()
                        .set_name("(Auth) login"),
                )

Each GooseUser thread logs in as a random user (depending on a properly configured test environment):

                    let uid: usize = rand::rng().random_range(3..5_002);
                    let username = format!("user{uid}");
                    let params = [
                        ("name", username.as_str()),
                        ("pass", "12345"),
                        ("form_build_id", &form_build_id[1]),
                        ("form_id", "user_login"),
                        ("op", "Log+in"),
                    ];
                    let _goose = user.post_form("/user", &params).await?;
                    // @TODO: verify that we actually logged in.
                }

The test also includes an example of how to post a comment during a load test:

                .register_transaction(
                    transaction!(drupal_memcache_post_comment)
                        .set_weight(3)?
                        .set_name("(Auth) comment form"),
                ),

Note that much of this functionality can be simplified by using the Goose Eggs library which includes some Drupal-specific functionality.
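To illustrate the kind of work those helpers save, here is a minimal std-only sketch of extracting a hidden form value from Drupal HTML, which this load test does with the regex crate and which Goose Eggs wraps in Drupal-aware helpers. The extract_form_value function name is hypothetical and is not part of Goose or Goose Eggs.

```rust
// Hypothetical std-only sketch of extracting a hidden form value such as
// `form_build_id` from a Drupal form. The real load test uses the `regex`
// crate; Goose Eggs provides ready-made Drupal helpers for this.
fn extract_form_value(html: &str, name: &str) -> Option<String> {
    // Locate the `name="..."` attribute, then the `value="..."` that follows it.
    let needle = format!(r#"name="{name}""#);
    let after_name = html.find(&needle)? + needle.len();
    let rest = &html[after_name..];
    let value_start = rest.find("value=\"")? + "value=\"".len();
    let value_rest = &rest[value_start..];
    let value_end = value_rest.find('"')?;
    Some(value_rest[..value_end].to_string())
}

fn main() {
    let html = r#"<input type="hidden" name="form_build_id" value="form-abc123" />"#;
    assert_eq!(
        extract_form_value(html, "form_build_id").as_deref(),
        Some("form-abc123")
    );
}
```

In the real load test the extracted value is then posted back as the `form_build_id` parameter of the login or comment form, as shown in the complete source code below.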

Complete Source Code

//! Conversion of Locust load test used for the Drupal memcache module, from
//! <https://github.com/tag1consulting/drupal-loadtest/>
//!
//! To run, you must set up the load test environment as described in the above
//! repository, and then run the example. You'll need to set --host and may want
//! to set other command line options as well, starting with:
//!      cargo run --release --example drupal_memcache --
//!
//! ## License
//!
//! Copyright 2020-2022 Jeremy Andrews
//!
//! Licensed under the Apache License, Version 2.0 (the "License");
//! you may not use this file except in compliance with the License.
//! You may obtain a copy of the License at
//!
//! <http://www.apache.org/licenses/LICENSE-2.0>
//!
//! Unless required by applicable law or agreed to in writing, software
//! distributed under the License is distributed on an "AS IS" BASIS,
//! WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
//! See the License for the specific language governing permissions and
//! limitations under the License.

use goose::prelude::*;

use rand::Rng;
use regex::Regex;

#[tokio::main]
async fn main() -> Result<(), GooseError> {
    GooseAttack::initialize()?
        .register_scenario(
            scenario!("AnonBrowsingUser")
                .set_weight(4)?
                .register_transaction(
                    transaction!(drupal_memcache_front_page)
                        .set_weight(15)?
                        .set_name("(Anon) front page"),
                )
                .register_transaction(
                    transaction!(drupal_memcache_node_page)
                        .set_weight(10)?
                        .set_name("(Anon) node page"),
                )
                .register_transaction(
                    transaction!(drupal_memcache_profile_page)
                        .set_weight(3)?
                        .set_name("(Anon) user page"),
                ),
        )
        .register_scenario(
            scenario!("AuthBrowsingUser")
                .set_weight(1)?
                .register_transaction(
                    transaction!(drupal_memcache_login)
                        .set_on_start()
                        .set_name("(Auth) login"),
                )
                .register_transaction(
                    transaction!(drupal_memcache_front_page)
                        .set_weight(15)?
                        .set_name("(Auth) front page"),
                )
                .register_transaction(
                    transaction!(drupal_memcache_node_page)
                        .set_weight(10)?
                        .set_name("(Auth) node page"),
                )
                .register_transaction(
                    transaction!(drupal_memcache_profile_page)
                        .set_weight(3)?
                        .set_name("(Auth) user page"),
                )
                .register_transaction(
                    transaction!(drupal_memcache_post_comment)
                        .set_weight(3)?
                        .set_name("(Auth) comment form"),
                ),
        )
        .execute()
        .await?;

    Ok(())
}

/// View the front page.
async fn drupal_memcache_front_page(user: &mut GooseUser) -> TransactionResult {
    let mut goose = user.get("/").await?;

    match goose.response {
        Ok(response) => {
            // Copy the headers so we have them for logging if there are errors.
            let headers = &response.headers().clone();
            match response.text().await {
                Ok(t) => {
                    let re = Regex::new(r#"src="(.*?)""#).unwrap();
                    // Collect a copy of the matching URLs first, then request them below.
                    let mut urls = Vec::new();
                    for url in re.captures_iter(&t) {
                        if url[1].contains("/misc") || url[1].contains("/themes") {
                            urls.push(url[1].to_string());
                        }
                    }
                    for asset in &urls {
                        let _ = user.get_named(asset, "static asset").await;
                    }
                }
                Err(e) => {
                    // This will automatically get written to the error log if enabled, and will
                    // be displayed to stdout if `-v` is enabled when running the load test.
                    return user.set_failure(
                        &format!("front_page: failed to parse page: {e}"),
                        &mut goose.request,
                        Some(headers),
                        None,
                    );
                }
            }
        }
        Err(e) => {
            // This will automatically get written to the error log if enabled, and will
            // be displayed to stdout if `-v` is enabled when running the load test.
            return user.set_failure(
                &format!("front_page: no response from server: {e}"),
                &mut goose.request,
                None,
                None,
            );
        }
    }

    Ok(())
}

/// View a node from 1 to 9,999, created by preptest.sh.
async fn drupal_memcache_node_page(user: &mut GooseUser) -> TransactionResult {
    let nid = rand::rng().random_range(1..10_000);
    let _goose = user.get(format!("/node/{}", &nid).as_str()).await?;

    Ok(())
}

/// View a profile from 2 to 5,000, created by preptest.sh.
async fn drupal_memcache_profile_page(user: &mut GooseUser) -> TransactionResult {
    let uid = rand::rng().random_range(2..5_001);
    let _goose = user.get(format!("/user/{}", &uid).as_str()).await?;

    Ok(())
}

/// Log in.
async fn drupal_memcache_login(user: &mut GooseUser) -> TransactionResult {
    let mut goose = user.get("/user").await?;

    match goose.response {
        Ok(response) => {
            // Copy the headers so we have them for logging if there are errors.
            let headers = &response.headers().clone();
            match response.text().await {
                Ok(html) => {
                    let re = Regex::new(r#"name="form_build_id" value=['"](.*?)['"]"#).unwrap();
                    let form_build_id = match re.captures(&html) {
                        Some(f) => f,
                        None => {
                            // This will automatically get written to the error log if enabled, and will
                            // be displayed to stdout if `-v` is enabled when running the load test.
                            return user.set_failure(
                                "login: no form_build_id on page: /user page",
                                &mut goose.request,
                                Some(headers),
                                Some(&html),
                            );
                        }
                    };

                    // Log the user in.
                    let uid: usize = rand::rng().random_range(3..5_002);
                    let username = format!("user{uid}");
                    let params = [
                        ("name", username.as_str()),
                        ("pass", "12345"),
                        ("form_build_id", &form_build_id[1]),
                        ("form_id", "user_login"),
                        ("op", "Log+in"),
                    ];
                    let _goose = user.post_form("/user", &params).await?;
                    // @TODO: verify that we actually logged in.
                }
                Err(e) => {
                    // This will automatically get written to the error log if enabled, and will
                    // be displayed to stdout if `-v` is enabled when running the load test.
                    return user.set_failure(
                        &format!("login: unexpected error when loading /user page: {e}"),
                        &mut goose.request,
                        Some(headers),
                        None,
                    );
                }
            }
        }
        // Goose will catch this error.
        Err(e) => {
            // This will automatically get written to the error log if enabled, and will
            // be displayed to stdout if `-v` is enabled when running the load test.
            return user.set_failure(
                &format!("login: no response from server: {e}"),
                &mut goose.request,
                None,
                None,
            );
        }
    }

    Ok(())
}

/// Post a comment.
async fn drupal_memcache_post_comment(user: &mut GooseUser) -> TransactionResult {
    let nid: i32 = rand::rng().random_range(1..10_000);
    let node_path = format!("node/{}", &nid);
    let comment_path = format!("/comment/reply/{}", &nid);

    let mut goose = user.get(&node_path).await?;

    match goose.response {
        Ok(response) => {
            // Copy the headers so we have them for logging if there are errors.
            let headers = &response.headers().clone();
            match response.text().await {
                Ok(html) => {
                    // Extract the form_build_id from the comment form.
                    let re = Regex::new(r#"name="form_build_id" value=['"](.*?)['"]"#).unwrap();
                    let form_build_id = match re.captures(&html) {
                        Some(f) => f,
                        None => {
                            // This will automatically get written to the error log if enabled, and will
                            // be displayed to stdout if `-v` is enabled when running the load test.
                            return user.set_failure(
                                &format!("post_comment: no form_build_id found on {}", &node_path),
                                &mut goose.request,
                                Some(headers),
                                Some(&html),
                            );
                        }
                    };

                    let re = Regex::new(r#"name="form_token" value=['"](.*?)['"]"#).unwrap();
                    let form_token = match re.captures(&html) {
                        Some(f) => f,
                        None => {
                            // This will automatically get written to the error log if enabled, and will
                            // be displayed to stdout if `-v` is enabled when running the load test.
                            return user.set_failure(
                                &format!("post_comment: no form_token found on {}", &node_path),
                                &mut goose.request,
                                Some(headers),
                                Some(&html),
                            );
                        }
                    };

                    let re = Regex::new(r#"name="form_id" value=['"](.*?)['"]"#).unwrap();
                    let form_id = match re.captures(&html) {
                        Some(f) => f,
                        None => {
                            // This will automatically get written to the error log if enabled, and will
                            // be displayed to stdout if `-v` is enabled when running the load test.
                            return user.set_failure(
                                &format!("post_comment: no form_id found on {}", &node_path),
                                &mut goose.request,
                                Some(headers),
                                Some(&html),
                            );
                        }
                    };
                    // Optionally uncomment to log form_id, form_build_id, and form_token, together with
                    // the full body of the page. This is useful when modifying the load test.
                    /*
                    user.log_debug(
                        &format!(
                            "form_id: {}, form_build_id: {}, form_token: {}",
                            &form_id[1], &form_build_id[1], &form_token[1]
                        ),
                        Some(&goose.request),
                        Some(headers),
                        Some(&html),
                    );
                    */

                    let comment_body = "this is a test comment body";
                    let params = [
                        ("subject", "this is a test comment subject"),
                        ("comment_body[und][0][value]", comment_body),
                        ("comment_body[und][0][format]", "filtered_html"),
                        ("form_build_id", &form_build_id[1]),
                        ("form_token", &form_token[1]),
                        ("form_id", &form_id[1]),
                        ("op", "Save"),
                    ];

                    // Post the comment.
                    let mut goose = user.post_form(&comment_path, &params).await?;

                    // Verify that the comment posted.
                    match goose.response {
                        Ok(response) => {
                            // Copy the headers so we have them for logging if there are errors.
                            let headers = &response.headers().clone();
                            match response.text().await {
                                Ok(html) => {
                                    if !html.contains(comment_body) {
                                        // This will automatically get written to the error log if enabled, and will
                                        // be displayed to stdout if `-v` is enabled when running the load test.
                                        return user.set_failure(
                                            &format!("post_comment: no comment showed up after posting to {}", &comment_path),
                                            &mut goose.request,
                                            Some(headers),
                                            Some(&html),
                                        );
                                    }
                                }
                                Err(e) => {
                                    // This will automatically get written to the error log if enabled, and will
                                    // be displayed to stdout if `-v` is enabled when running the load test.
                                    return user.set_failure(
                                        &format!(
                                            "post_comment: unexpected error when posting to {}: {}",
                                            &comment_path, e
                                        ),
                                        &mut goose.request,
                                        Some(headers),
                                        None,
                                    );
                                }
                            }
                        }
                        Err(e) => {
                            // This will automatically get written to the error log if enabled, and will
                            // be displayed to stdout if `-v` is enabled when running the load test.
                            return user.set_failure(
                                &format!(
                                    "post_comment: no response when posting to {}: {}",
                                    &comment_path, e
                                ),
                                &mut goose.request,
                                None,
                                None,
                            );
                        }
                    }
                }
                Err(e) => {
                    // This will automatically get written to the error log if enabled, and will
                    // be displayed to stdout if `-v` is enabled when running the load test.
                    return user.set_failure(
                        &format!("post_comment: no text when loading {}: {}", &node_path, e),
                        &mut goose.request,
                        None,
                        None,
                    );
                }
            }
        }
        Err(e) => {
            // This will automatically get written to the error log if enabled, and will
            // be displayed to stdout if `-v` is enabled when running the load test.
            return user.set_failure(
                &format!(
                    "post_comment: no response when loading {}: {}",
                    &node_path, e
                ),
                &mut goose.request,
                None,
                None,
            );
        }
    }

    Ok(())
}

Umami Example

The examples/umami example load tests the Umami demonstration profile included with Drupal 9.

Overview

The Drupal Umami demonstration profile generates an attractive and realistic website simulating a food magazine, offering a practical example of what Drupal is capable of. The demo site is multi-lingual and has quite a bit of content, multiple taxonomies, and much of the rich functionality you'd expect from a real website, making it a good load test target.

The included example simulates three different types of users: an anonymous user browsing the site in English, an anonymous user browsing the site in Spanish, and an administrative user that logs into the site. The two anonymous users visit every page on the site. For example, the anonymous user browsing the site in English loads the front page, browses all the articles and the article listings, views all the recipes and recipe listings, accesses all nodes directly by node id, performs searches using terms pulled from actual site content, and fills out the site's contact form. With each action performed, Goose validates the HTTP response code and inspects the HTML returned to confirm that it contains the elements we expect.
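The validation pattern described above can be sketched with a small std-only check. The validate_page function below is hypothetical and shown only to illustrate the pattern; the real example validates live Goose responses, and the Goose Eggs library provides ready-made validation helpers.

```rust
// Hypothetical sketch of the validation pattern: confirm the HTTP status
// code and that the returned HTML contains an expected fragment.
fn validate_page(status: u16, html: &str, expected: &str) -> Result<(), String> {
    if status != 200 {
        return Err(format!("unexpected HTTP status code: {status}"));
    }
    if !html.contains(expected) {
        return Err(format!("expected fragment not found: {expected}"));
    }
    Ok(())
}

fn main() {
    let html = "<title>Umami Food Magazine</title>";
    assert!(validate_page(200, html, "<title>Umami Food Magazine").is_ok());
    assert!(validate_page(404, html, "<title>Umami Food Magazine").is_err());
}
```

In the actual load test, a failed check of this kind is reported through Goose's failure handling so it shows up in the request statistics and error log.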

Read the blog A Goose In The Clouds: Load Testing At Scale for a demonstration of using this example, and learn more about the testplan from the README.

Alternative

The Goose Eggs library contains a variation of the Umami example.

Complete Source Code

This example is more complex than the other examples, and is split into multiple files, all of which can be found within examples/umami.