To Write Reliable Tests, think FIRST

Fast, Independent, Repeatable, Self-Validating, Timely & Thorough

FIRST and foremost

As discussed in last week's article, Arrange-Act-Assert provides the foundation for our tests and Atomic test principles help us write focused tests. While FIRST testing principles are often applied to unit testing, end-to-end tests can also benefit from these principles with some slight adaptation.

To continue making our tests easier to use and maintain, we should make our tests:

  • Fast

  • Independent

  • Repeatable

  • Self-validating

  • Timely & Thorough

Fast

To make our tests fast, start with the simplest method. Run fewer tests! The fastest tests are the tests you don't run.

  • Does every regression test provide information related to the features you are currently testing? Find ways to reduce the number of tests you need to run to cover a particular feature or code change.

Unit tests are faster than integration tests, and both are faster than end-to-end tests. In general, your team should favor writing automated tests at the lowest level that can verify the behavior in question, since those tests run the fastest.

  • Since end-to-end tests are typically slower than unit tests or integration tests, we should make efforts to optimize the speed of our end-to-end tests. This might involve minimizing unnecessary interactions with external systems, using headless browsers to speed up UI interactions, or running tests in parallel.

Build awareness of the full scope of tests your team's developers, QAs, and other members are already doing. This will help you to reduce duplication, identify gaps, and work together to plan and implement tests at the appropriate level.

  • For example, boundary cases such as maximum and minimum values for form input are much faster to test as unit tests or API integration tests. At the end-to-end level, one might want to know that a form limits input to an acceptable range, but the boundary values that determine the acceptable range should be tested at a lower level of integration.

    Speaking of testing form inputs, can you play negative Powerball numbers? I don't think you can, but the Texas Lottery website lets you select them...

    Texas Lottery Powerball number check bug as of December 6, 2023
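Boundary values like these are cheap to pin down exhaustively at the unit level. A minimal sketch, assuming a hypothetical `validate_powerball_number` helper (Powerball's white balls run 1 through 69):

```python
# Hypothetical validator used for illustration only;
# Powerball white-ball numbers range from 1 to 69.
def validate_powerball_number(value: int) -> bool:
    """Return True if value is a playable Powerball number."""
    return 1 <= value <= 69

# Boundary cases are fast to cover exhaustively as unit tests.
assert validate_powerball_number(1) is True      # minimum
assert validate_powerball_number(69) is True     # maximum
assert validate_powerball_number(0) is False     # just below minimum
assert validate_powerball_number(70) is False    # just above maximum
assert validate_powerball_number(-7) is False    # negative input, like the bug above
```

An end-to-end test then only needs to confirm that the form rejects out-of-range input, leaving the exact boundary values to these faster checks.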

Independent

Independent tests:

  • can run in any order

  • do not set up test data or conditions that other tests rely on

To design independent tests, use the Arrange-Act-Assert pattern.

There are times that we may not want to run a test unless another test passes, but we should generally be able to run any test independent of the results of any other test. In that sense, every test in a group might rely on the same "Arrange" step to set up a certain condition, but after that, each test would have its own actions and assertions.

  • For example, we may only want to run tests to modify a user's account if we can log into the user's account. Once logged in, however, each change made to the user's account information (home address, phone number, etc.) should be independent of any other change.
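The pattern above can be sketched with Python's `unittest`, using a fake login session as a stand-in for a real one (the `FakeSession` class and `log_in` helper are illustrative, not a real API):

```python
import unittest

class FakeSession:
    """Stand-in for a logged-in user session; illustrative only."""
    def __init__(self, user: str):
        self.user = user
        self.account = {"home_address": "123 Main St", "phone": "555-0100"}

def log_in(user: str) -> FakeSession:
    # Shared "Arrange" step: every test below depends on a successful login.
    return FakeSession(user)

class TestAccountChanges(unittest.TestCase):
    def setUp(self):
        # A fresh session per test means the tests can run in any order.
        self.session = log_in("test-user")

    def test_update_home_address(self):
        self.session.account["home_address"] = "456 Oak Ave"
        self.assertEqual(self.session.account["home_address"], "456 Oak Ave")

    def test_update_phone_number(self):
        self.session.account["phone"] = "555-0199"
        self.assertEqual(self.session.account["phone"], "555-0199")

if __name__ == "__main__":
    unittest.main()
```

Because neither test reads state the other one wrote, they pass in either order, or when run alone.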

Repeatable

Repeatable tests are "deterministic," which is a formal way of saying that, given the same initial conditions, a test should always return the same result. Flaky tests violate this rule by returning inconsistent results from one run to the next.

The "Arrange" step of the Arrange-Act-Assert pattern is critical to making steps repeatable.

For example, suppose that your development team has several test environments that it uses to test code before releasing it to production. In this situation, common challenges to making tests repeatable might arise from:

  • maintaining consistent test data across all test environments

  • dealing with performance differences between test and production environments, such as longer wait times or slower performance under load

Working with developers and DevOps team members to create consistent test environments and test data is an investment that will help to avoid headaches in the long run.
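One concrete way the "Arrange" step supports repeatability is to create exactly the data a test needs instead of depending on whatever happens to exist in the environment. A small sketch (the `arrange_test_user` helper is hypothetical):

```python
import random

def arrange_test_user(seed: int = 42) -> dict:
    """Create a known test user rather than relying on pre-existing data."""
    rng = random.Random(seed)  # seeded RNG: produces the same values on every run
    return {
        "id": rng.randint(1000, 9999),  # reproducible "random" id
        "name": "Repeatable Test User",
    }

# Given the same initial conditions, we get the same result every time.
assert arrange_test_user() == arrange_test_user()
```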

Self-Validating

Have you ever seen an end-to-end test that you had to watch to know whether it passed or failed? Or reviewed a "test" that had no assertions in it? Perhaps a single automated test is cited as validating several previously manual test cases, but the test report doesn't make clear what happened, or when, to validate each one. Those are examples of what not to do!

A self-validating test:

  • has an assertion

  • passes or fails based on the result of that assertion

  • contains all of the information necessary to determine whether it passed or failed

    • It should be clear from reading the test report:

      • what was tested

      • if the test passed or failed

      • what caused the test to pass or fail

    • To make test results, particularly failures, easier to understand:

      • add screenshots to your test reports

      • consider catching exceptions during test steps that may fail so that you can add customized messages explaining the failure to the exception

Timely & Thorough

While writing tests before developers write the code can be a challenge for end-to-end tests, tests become more useful the more frequently they run and the faster their results reach the people who will act on them.

To make end-to-end tests timely, automate them as early as possible so the regression suite stays current. Once tests are automated, incorporating optimized end-to-end tests into CI/CD pipelines is an ideal way to provide fast feedback to developers and catch bugs earlier in the development cycle.

Compared to other types of testing, however, end-to-end tests shine in how thoroughly they can test the behavior of an application from a user's perspective. By reviewing logs and product analytics to understand the most visited workflows of an application, we can write tests for the application flows that users rely on the most.

Since ownership of logs and product analytics is often spread among engineers, designers, marketers, and product managers, bringing these groups together to help you write more valuable tests also helps to ignite interest in QA throughout your organization. Emerging tools such as ProdPerfect and Katalon TrueTest promise to make this process easier, but even common tools such as Google Analytics can help you get started!