Test automation is the practice of using software to automatically review and validate another software product, ensuring it meets predefined quality standards for code style, functionality, and user experience. Finding defects in software requires systematic testing with a variety of test cases to accurately assess the correctness of the observed behavior. Test automation manages much of this process, executing tests and overseeing test data to produce results that improve the final software quality. It is often treated as a quality assurance measure, but it involves the entire software production team.
Automated testing differs from manual testing. While manual testing (e.g., discovery testing, usability testing) can be valuable, other types of testing (e.g., regression testing, functional testing) involve so much repetition that they lend themselves to test automation. Manual testing requires a person's time and effort to ensure the software does everything it's supposed to, and manual testers also have to record their findings, which involves checking log files, external services, and the database for errors. Test automation uses tools to reduce the time developers spend verifying software functionality, freeing them for higher-value tasks such as exploratory testing.
For a test to be successfully automated, it needs to meet certain criteria:
The test must require repeated use, often with the following three steps (see the sketch after this list):
- Set up the test, including data and a consistent environment
- Execute the function and measure the result
- Clean up the data and environment
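As an illustration of this set-up/execute/clean-up cycle, here is a minimal sketch using pytest fixtures; the inventory table and the reservation query are hypothetical stand-ins for a real application function.

```python
import sqlite3

import pytest

@pytest.fixture
def inventory_db(tmp_path):
    # Set up: a fresh database in a temp directory with known test data
    conn = sqlite3.connect(tmp_path / "inventory.db")
    conn.execute("CREATE TABLE items (name TEXT, qty INTEGER)")
    conn.execute("INSERT INTO items VALUES ('widget', 5)")
    conn.commit()
    yield conn
    # Clean up: close the connection; pytest discards the temp directory
    conn.close()

def test_reserving_decrements_stock(inventory_db):
    # Execute the operation under test and measure the result
    inventory_db.execute("UPDATE items SET qty = qty - 1 WHERE name = 'widget'")
    qty = inventory_db.execute(
        "SELECT qty FROM items WHERE name = 'widget'"
    ).fetchone()[0]
    assert qty == 4
```

Because the fixture rebuilds the environment for every test, the three steps can repeat indefinitely with identical starting conditions.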
A deterministic function produces the same outcome every time it is given the same input. Software often uses a very large number of variable inputs or randomly generated values, making it difficult to reproduce the same result over time. A test harness can compensate for this by controlling the test inputs.
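One common way a harness restores determinism is to inject a seeded random number generator rather than letting the code draw from global randomness; the pick_discount function below is a hypothetical example.

```python
import random

def pick_discount(codes, rng=random):
    # Function under test: its behavior depends on a random choice
    return rng.choice(codes)

def test_pick_discount_is_repeatable():
    # The harness supplies a seeded generator, so every run sees the same input
    first = pick_discount(["A", "B", "C"], rng=random.Random(42))
    second = pick_discount(["A", "B", "C"], rng=random.Random(42))
    assert first == second
```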
Automated testing cannot account for subjective opinions. Usability or beta testing should be based on user feedback and not automated.
Software testing can be separated into various levels, including the widely used three-level model:
- Unit test level
- Functional test level (the service layer; non-UI)
- UI tests level
Unit testing, or module testing, is the lowest level, where the performance of individual components within the project is checked. This level often combines automated tests with custom tests written by the developers themselves, which helps testers verify the code. Unit testing early and throughout the project helps fix bugs before they grow into serious problems. In modern software development, almost all unit testing is fully automated.
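A unit test at this level might look like the following sketch, where apply_tax is a hypothetical component being checked in isolation:

```python
import unittest

def apply_tax(amount, rate):
    # Component under test: a single, self-contained unit of logic
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * (1 + rate), 2)

class ApplyTaxTest(unittest.TestCase):
    def test_adds_tax(self):
        self.assertEqual(apply_tax(100.0, 0.07), 107.0)

    def test_rejects_negative_amount(self):
        with self.assertRaises(ValueError):
            apply_tax(-1.0, 0.07)

if __name__ == "__main__":
    unittest.main()
```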
Not every part of an application can be tested through the UI, so testing teams also need direct access to the functional layer to test its business logic. This is also referred to as API testing.
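A functional-layer test typically drives the business logic over HTTP; the sketch below assumes a hypothetical orders endpoint on a local deployment and uses the requests library.

```python
import requests

BASE_URL = "http://localhost:8000"  # hypothetical deployment under test

def test_order_total_is_calculated():
    # Exercise the business logic through the service layer, with no UI involved
    resp = requests.post(
        f"{BASE_URL}/api/orders",
        json={"sku": "widget", "qty": 3, "unit_price": 2.50},
    )
    assert resp.status_code == 201
    assert resp.json()["total"] == 7.50
```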
UI tests allow both the user interface and the functionality behind it to be tested by performing operations that exercise the business logic of the app. These end-to-end tests cover more than the previous layer of automation because they do not just test functionality; they simulate the end user's behavior with the UI itself involved.
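Such tests are commonly scripted with a browser-automation tool such as Selenium; the page URL and element IDs below are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_reaches_dashboard():
    driver = webdriver.Chrome()
    try:
        # Simulate the end user: open the page and interact through the UI
        driver.get("http://localhost:8000/login")
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```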
There are many different types of code analysis tools, including both static analysis and dynamic analysis. These tests can look for various defects, such as security flaws or issues with the code's style and form. Generally, the developers configure rules and keep the tools up to date, with little test writing.
Unit tests are designed to test a single function, or unit, in isolation. Typically, unit tests run on a build server and they don’t depend on databases, external APIs, or file storage. They need to be fast and are designed to test the code only, not the external dependencies.
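To keep a unit test free of external dependencies, those dependencies can be replaced with test doubles; here unittest.mock stands in for a hypothetical rate-service client.

```python
from unittest.mock import Mock

def shipping_cost(order, rate_client):
    # Unit under test: relies on an external rate service via an injected client
    rate = rate_client.get_rate(order["weight_kg"])
    return round(rate * order["weight_kg"], 2)

def test_shipping_cost_uses_quoted_rate():
    fake_client = Mock()
    fake_client.get_rate.return_value = 2.5  # no network call, no real API
    assert shipping_cost({"weight_kg": 4}, fake_client) == 10.0
    fake_client.get_rate.assert_called_once_with(4)
```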
An integration test, sometimes called an end-to-end test, needs to interact with external dependencies, which makes it more complicated to set up. Often the best approach is to create fake external resources. For example, a test for a logistics app that depends on a web service from a vendor may fail unexpectedly if the vendor's service is down. When a test relies on external dependencies, the team does not control the entire test environment and cannot create each scenario explicitly on demand.
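A hand-rolled fake, like the hypothetical vendor stub below, removes the dependency on the vendor's uptime and lets the test construct any scenario it needs:

```python
class FakeVendorShipping:
    """Stands in for the vendor's web service, returning canned quotes."""

    def __init__(self, canned_quotes):
        self.canned_quotes = canned_quotes

    def quote(self, origin, destination):
        return self.canned_quotes[(origin, destination)]

def cheapest_origin(origins, destination, shipping_service):
    # Logistics logic under test: pick the origin with the lowest quote
    return min(origins, key=lambda o: shipping_service.quote(o, destination))

def test_cheapest_origin_with_fake_vendor():
    fake = FakeVendorShipping({("NYC", "LAX"): 120.0, ("CHI", "LAX"): 90.0})
    assert cheapest_origin(["NYC", "CHI"], "LAX", fake) == "CHI"
```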
Many teams today use automated acceptance tests (AATs), which are similar to behavior-driven development (BDD). Both follow the same practice of creating the acceptance test before the feature is developed. AATs run to determine whether the feature delivers what's been agreed upon, so it is critical that developers, the business, and QA write these tests together. They serve as regression tests in the future, and they ensure that the feature holds up to what's expected.
Without AATs in place, teams have to write regression tests after the fact. Both are forms of functional tests, but how they are written, when they are written, and by whom they are written are vastly different. Like AATs, they can be driven through an API by code or through a UI. Tools exist to write these tests using a GUI.
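In the BDD flavor of this practice, the agreed-upon behavior is written as a plain-language scenario and bound to step code; the sketch below assumes the Python behave framework and a hypothetical loyalty-discount feature.

```python
# features/discount.feature (written with the business before development):
#   Feature: Loyalty discount
#     Scenario: Repeat customer gets 10% off
#       Given a customer with 5 previous orders
#       When they place an order of $100
#       Then the charged total is $90

# features/steps/discount_steps.py
from behave import given, then, when

@given("a customer with {count:d} previous orders")
def step_customer(context, count):
    context.previous_orders = count

@when("they place an order of ${amount:d}")
def step_order(context, amount):
    # Hypothetical pricing rule standing in for the real feature code
    rate = 0.10 if context.previous_orders >= 5 else 0.0
    context.total = amount * (1 - rate)

@then("the charged total is ${expected:d}")
def step_check(context, expected):
    assert context.total == expected
```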
Various types of performance tests exist, but all of them examine some aspect of the application's performance. These tests sometimes require emulating a significant number of concurrent users. Cloud resources are available to help with this kind of testing, but it's possible to use on-premises resources as well.
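Load-testing frameworks make it straightforward to emulate many users; this sketch assumes the Python Locust framework and hypothetical catalog endpoints.

```python
from locust import HttpUser, between, task

class ShopperUser(HttpUser):
    # Each simulated user waits 1-3 seconds between requests
    wait_time = between(1, 3)

    @task(3)
    def browse_catalog(self):
        self.client.get("/products")  # hypothetical endpoint

    @task(1)
    def view_cart(self):
        self.client.get("/cart")
```

Running `locust -f loadtest.py --host https://staging.example.com` would then spawn as many ShopperUser instances as the test plan calls for, from local or cloud workers.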
A smoke test is a basic test that’s usually performed after a deployment or maintenance window. The purpose of a smoke test is to ensure all services and dependencies are up and running. A smoke test isn’t meant to be an all-out functional test. It can be run as part of an automated deployment or triggered through a manual step.
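A smoke test can be as simple as probing each service's health endpoint after a deployment; the URLs below are hypothetical.

```python
import requests

SERVICES = {
    "api": "https://staging.example.com/health",
    "auth": "https://staging.example.com/auth/health",
}

def test_services_are_up():
    # Not a functional test: only verify that each dependency responds
    for name, url in SERVICES.items():
        resp = requests.get(url, timeout=5)
        assert resp.status_code == 200, f"{name} is not healthy"
```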