Software Testing Types

After decades of working in I.T. across various quality assurance and test management roles, I can say that although the core concepts of software testing haven't changed hugely, the labels for types of testing have shifted alongside our understanding of development lifecycles.

When going into high-level discussions with stakeholders or business owners who wonder why something as "simple as testing" can take so long, I would not advise peppering the conversation with these technicalities. But many of us have had meetings where a term like "continuous testing" is suddenly tabled for discussion because somebody heard it mentioned on the project.

Therefore, ensuring that your testing teams share a common lexicon or knowledge base of testing types (and the names they use) within project or business work will be one massive step towards success.

Depending on your product or service, the testing job may require a variety of skills and functions. Some phases of development may be tested and managed in-house, some outsourced to development and testing teams.

Below is a simple list of some – not all – of the various types of testing that may form part of your quality assurance organisation's lexicon.

White Box Testing

Also known as Glass Box Testing, and typically done by developers or the programming team. Performing this type of testing requires knowledge of the internal software and code logic. Tests are based on coverage of code statements, branches, paths, conditions etc. Unit testing is one form of white box testing, as is branch or conditional testing. Automated scripts and peer checking can be used here.
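
A minimal sketch of the white box idea: the tests below are written with knowledge of the function's internal branches, so each code path gets exercised. The function, thresholds and values are illustrative assumptions, not taken from any real project.

```python
import unittest

def classify_discount(order_total):
    """Return a discount rate based on internal threshold logic."""
    if order_total >= 100:       # branch 1: large orders
        return 0.10
    elif order_total >= 50:      # branch 2: medium orders
        return 0.05
    return 0.0                   # branch 3: everything else

class TestClassifyDiscount(unittest.TestCase):
    # One test per internal branch -- the essence of white box coverage.
    def test_large_order_branch(self):
        self.assertEqual(classify_discount(150), 0.10)

    def test_medium_order_branch(self):
        self.assertEqual(classify_discount(75), 0.05)

    def test_default_branch(self):
        self.assertEqual(classify_discount(10), 0.0)

if __name__ == "__main__":
    unittest.main(exit=False)    # exit=False lets a larger script continue after the run
```

A black box tester, by contrast, would only know the required inputs and expected outputs, not which thresholds drive the branching.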

Black Box Testing

Several test types fall under Black Box Testing. Black Box means the internal system design is not considered in this testing (provided the system works as specified).
Tests are based on the requirements and functionality. System testers, and business or end users, can be involved in black box testing.

Gray Box Testing

Gray Box Testing is a combination of black and white box testing: testing a system against its specifications but with knowledge of its internals. As the I.T. development world matures, it shouldn't need pointing out that quality assurance should include both black and white box approaches, either of which can be applied by developers or system testers.

Functional Testing

This type of testing ignores the internal parts and focuses only on the output, checking whether it meets the requirement. It is a black box type of testing geared to the functional requirements of an application.


Non-functional Testing

Non-functional testing does not check functional requirements. It validates a system's technical qualities such as load handling or security. Some of it can be tested by technical teams, and some can be automated or outsourced. A project should still define non-functional system requirements to test against.


A to Z’s of Testing Types

Alpha Testing

Alpha testing is carried out at the end of the software development phase at the developer's site, in a QA or test environment that mimics the live environment. It is done before beta testing, typically by experienced system testers alongside business users representing customers and their requirements.

The aim of alpha testing is to find as many issues and defects as possible before releasing to live. The outcome may require changes to user requirements and redevelopment work.

Acceptance Testing

Also called User Acceptance Testing (UAT). Sitting as the last phase of testing, an acceptance test is performed by the business client and verifies whether the end to end flow of the system fits business requirements and meets the needs of the end user. If accepted, the software is delivered into production, where some more (confirmation) tests may be done.

Ad-hoc Testing

Ad-hoc testing is performed without a reference test case or plans or requirement specifications, with the objective of finding functional defects. It can be used as part of system testing, or by anyone involved with the product through development or live lifecycles.

Accessibility Testing

The aim of accessibility testing is to determine whether the software or application is accessible to disabled people. Disability here covers hearing, sight or cognitive disabilities, old age and other groups. Various checks are performed on the user interface, such as font size for the visually impaired and colour and contrast for colour blindness. Standards bodies such as the W3C publish accessibility guidelines world-wide. Accessibility falls under Compliance Testing.

Agile Testing

A software testing practice that follows the principles of the agile manifesto, emphasising testing from the perspective of the customers who will use the system.

In an Agile development environment, testing is an integral part of software development and is done along with coding. Agile testing allows incremental and iterative coding and testing. Testing can be quite technical, but QA testers must keep the perspective of customers who will use the system. To do this, iteratively, testers will often test directly from use-cases in context-driven or scenario-based testing once the front end or GUI is stable enough.

Automated Testing

This is a testing approach that uses testing tools and/or programming to run test cases, via software or custom-developed test utilities. Development has used automated tools for decades, but only in the past couple of decades has system test automation stabilised enough for easier use by technical testers or automation engineers.

Most automated tools provide capture-and-playback facilities; however, some tools require copious scripting or programming to automate test cases. These tools are also used extensively in performance testing, and can be expensive to own and maintain, hence the growing outsourcing of automation.

Test cases which have been automated may form a regression testing suite which should be treated as a project and customer asset, and maintained going forward.

Beta Testing

Beta Testing is a formal type of software testing which is carried out by the customer or end user. It is performed in a contained Live or real environment before releasing the product to the market for the actual end users.

Beta testing is carried out to ensure that there are no major failures in the software or product and it satisfies the business requirements from an end-user perspective. Beta testing is successful when the customer accepts the software.

Back-end Testing

Another name for back-end testing is database testing, however interfaces between various data points may also be included. The user interface or GUI (graphical user interface) is not involved. Testers are directly connected to the database and may run queries to identify data loss or corruption. For movement between databases tools may be used to drive data through for further tests. Back-end testing is typically performed by development and technical system testers.

Breadth Testing

Breadth testing uses a test suite of test cases to test across the breadth of an application, checking functionality but not into detail. Relevant to system or regression testing.

Browser Compatibility Testing

A type of Compatibility Testing performed by the testing team for web and/or cloud-based applications. Different combinations of browsers, computer units (PCs versus Macs) and operating systems must be tested to validate that the application will run for all potential customers.

Backward Compatibility Testing

Backward Compatibility Testing checks whether the new version of the software works properly with the older versions and formats of files, data tables and structure created by previous versions of the software. This testing is typically done by testers in upgrade projects. These are functional tests as a subset of regression testing to confirm the application upgrade does not regress the system.

Boundary Value Testing

Boundary Value Testing checks whether defects exist at the boundary values of numeric ranges. It should be performed in the unit and system testing phases.

E.g. if a test requires a range of numbers from 1 to 500, testing is performed on the values 0, 1, 2, 499, 500 and 501.
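
The 1-to-500 example above can be sketched in code. The validator function here is a hypothetical stand-in; a real system would have its own input-validation routine.

```python
def accepts(value, low=1, high=500):
    """Return True if value falls inside the valid range [low, high]."""
    return low <= value <= high

# Test each boundary and its immediate neighbours on both sides.
boundary_cases = {
    0: False,    # just below the lower boundary
    1: True,     # lower boundary
    2: True,     # just above the lower boundary
    499: True,   # just below the upper boundary
    500: True,   # upper boundary
    501: False,  # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"boundary failure at {value}"
```

Off-by-one errors cluster at exactly these edges, which is why the technique tests each boundary and its neighbours rather than values from the middle of the range.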

Branch Testing

A white box testing method carried out during unit testing, ensuring the code is tested thoroughly by traversing every branch.

Code-Driven Testing

Code-driven testing sits within unit testing and is done by development or programmers, using testing frameworks (like xUnit) to confirm units of code. It can be used in peer-checking of code.

Comparison Testing

Comparison of a product's strengths and weaknesses with its previous versions or other similar products is termed Comparison Testing. Typically this is organised by business or marketing (it can be called A/B Testing – or, in marketing parlance, Split Testing) using manually documented performance markers. Sometimes, where projects can run multiple versions of the product in concurrent full test environments, comparison testing may be an I.T. requirement.

Compatibility Testing

Compatibility Testing validates how software behaves and runs in different environments – storage configurations, database servers, web servers, hardware and networks, customer hardware, and web browsers and their versions – while meeting any compliance and governance rules such as accessibility. Compatibility testing is performed by the testing team, often as a non-functional requirement (which sits across all products and applications).

Compliance Testing

Compliance Testing tests that the system meets standards, procedures and guidelines set by the organisation and by specific external governance and legal bodies. For instance, banks and financial institutions have many compliance requirements regarding data storage and sharing. Most web-facing businesses must meet due diligence over privacy of client information. As another example, accessibility testing is required internationally for any front-facing and web-based applications.

Component Testing

Performed by developers after the completion of unit testing, Component Testing involves testing multiple functionalities as a single body of code once the individual code units are integrated.

Configuration Testing

Determines the minimal and optimal configuration of hardware and software platforms, and the effect of adding or modifying resources such as memory or CPU. This forms part of the performance test suites, and is tested by technical or functional testers in conjunction with architecture teams.

Context-driven Testing

An Agile Testing technique that advocates continuous and creative evaluation of testing opportunities in light of the delivery of new usecases or development. It is usually performed by Agile testing teams, but is difficult to define as it sits within the exploratory or less-documented area of testing, and is normally more successful with experienced testers working closely within development.

Continuous Testing

Continuous testing is a form of testing that tends to be automated. It sits well with development, but system test teams can also use automated test case packs to run regression testing or confirm sanity tests.

Domain Testing

A white box testing technique to confirm that an application only accepts valid input. Sometimes called, or associated with, Negative Testing and/or Error-handling Testing. Testers may use fault-injection techniques.

Dynamic Testing

The obvious opposite of static testing (which tests documents), dynamic testing is a phrase sometimes used to denote all types of testing that exercise the code, system or UI for functionality – black box, white box, unit, system, performance, the whole works.

End-to-End Testing

End-to-end testing involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. It can often be run concurrently in a SIT System Integration Testing phase.

Equivalence Partitioning

This is a testing technique used in Black Box Testing where a set of values is selected to generate minimal test cases and remove redundant ones covering the same data ranges. Trained system testers tend to create these.
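
A small sketch of the partitioning idea: rather than testing every possible value, pick one representative per equivalence class plus the invalid classes either side. The age validator and the partition boundaries are illustrative assumptions.

```python
def valid_age(age):
    """Hypothetical validator: ages 0-120 inclusive are acceptable."""
    return 0 <= age <= 120

# One representative value per partition -- any other value from the same
# partition is assumed to behave identically, so testing it adds nothing.
partitions = {
    "invalid_negative": (-5, False),   # any negative age behaves the same
    "valid_typical":    (35, True),    # any in-range age behaves the same
    "invalid_too_big":  (200, False),  # any age above 120 behaves the same
}

for name, (representative, expected) in partitions.items():
    assert valid_age(representative) == expected, f"partition failure: {name}"
```

Combined with boundary value testing on the partition edges (here -1, 0, 120, 121), this gives broad coverage from very few test cases.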

Endurance Testing

A type of testing that checks for memory leaks or other problems that may occur with prolonged execution. It is usually performed by performance engineers. It can be called, or tested alongside, Reliability Testing with different use cases. When long periods are combined with load, the term is Soak Testing.

Error-handling Testing

A software testing type which determines the ability of the system to properly process erroneous transactions. It is usually performed by the testing teams to confirm all front-end messages and data inputs. Sometimes called, or associated with, Negative Testing and/or Domain Testing.
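
The idea can be shown with a short negative-path check: feed erroneous input and confirm the system raises a clear error rather than failing silently. The `parse_amount` routine is a hypothetical stand-in for a real input handler.

```python
def parse_amount(text):
    """Hypothetical input routine: parse a monetary amount, rejecting
    non-numeric or negative input with a descriptive error."""
    try:
        value = float(text)
    except ValueError:
        raise ValueError(f"not a number: {text!r}")
    if value < 0:
        raise ValueError(f"negative amount: {value}")
    return value

# Each erroneous transaction must be rejected with a meaningful message.
for bad in ["abc", "", "-10"]:
    try:
        parse_amount(bad)
    except ValueError as err:
        print(f"rejected {bad!r}: {err}")
    else:
        raise AssertionError(f"bad input was accepted: {bad!r}")
```

Note the test fails if the bad input is *accepted* – the error path itself is the behaviour under test.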

Example Testing

Example Testing is real-time testing of the application. It can include following planned scenario test cases and/or scenarios or use cases based on the experience of the testers – which can include some ad-hoc testing.

Exploratory Testing

Exploratory Testing is informal testing performed by the testing team. The objective of this testing is to explore the application to find defects that exist in the application. It may be done at the start of an upgrade or redevelopment project to get an understanding of the current state of the application or during development. Like ad-hoc testing, no documentation or test cases are used at the start, however it’s recommended that some notes are made during exploratory testing for flow, and repeatability should defects be found.

Fuzz Testing

Fuzz Testing is both a white box mutation testing technique and critically related to security testing. It was originally developed by Barton Miller at the University of Wisconsin in 1989. Fuzz testing, or "fuzzing", is a software testing technique that uses automated or semi-automated methods to drive invalid input and random data into a system and monitor for system exceptions. This is similar to how many hackers probe for system attacks.
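
A toy fuzzing loop in the spirit described above: random strings are thrown at a parser, and any *unexpected* exception is recorded as a finding. The parser under test is a stand-in, not a real fuzz target, and the loop is far simpler than industrial fuzzers such as AFL or libFuzzer.

```python
import random
import string

def parse_record(data):
    """Hypothetical parser: expects 'key=value' text records."""
    key, _, value = data.partition("=")
    if not key:
        raise ValueError("missing key")   # a handled, expected rejection
    return key, value

def fuzz(iterations=1000, seed=42):
    rng = random.Random(seed)             # seeded so findings are reproducible
    findings = []
    for _ in range(iterations):
        length = rng.randint(0, 20)
        blob = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            parse_record(blob)
        except ValueError:
            pass                          # expected, handled rejection
        except Exception as exc:          # anything else is a defect candidate
            findings.append((blob, exc))
    return findings

print(f"{len(fuzz())} unexpected exceptions found")
```

Seeding the random generator matters in practice: a crash found by fuzzing is only useful if the exact input that triggered it can be replayed.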

Globalisation Testing

Globalisation Testing confirms that a product localised to one country can also work or convert data and information through to another country or more. This requires processes for dealing with conversion of currencies, numerals and other input values, and with other products and databases which integrate into the system.

With the globalisation of many organisations and products, rolling out systems across the world can also create the need for associated localisation testing at each country’s end.

With web-based systems, cloud apps etc., these are often held on centrally owned servers, but owners should include Internationalisation Testing, which considers language conversions, different character sets, keyboard configurations etc.

Gorilla/Guerrilla Testing

Gorilla Testing is a testing type performed by a tester and sometimes by developers. It's actually a misspelling of Guerrilla (as in guerrilla warfare). In Gorilla Testing, one module, or the functionality in that module, is tested thoroughly and heavily to check the robustness of the application.

Graphical User Interface (GUI) Testing

The objective of GUI testing is to validate that the GUI meets the business requirement. This testing is often done in phases – system testers and business may be led early on through UI design documents and GUI mock-up screens, while various testers test the developed interfaces in later stages.

Strictly speaking, GUI testing should look at button sizes, field characteristics, alignment, tables and content towards functionality. Often things like design, look and feel are also commented on in later more user or customer-orientated acceptance stages.

Happy Path Testing

The objective of Happy Path Testing (sometimes Golden Path) is to test an application successfully on a positive flow. It does not look for negative or error conditions but confirms that new functionality operates sufficiently to produce the expected output.

Incremental Integration Testing

Incremental Integration Testing is a Bottom-up approach for testing i.e continuous testing of an application when new functionality is added. Application functionality and modules should be independent enough to test separately. This is done by programmers or by testers.

Install/Uninstall Testing

Installation (and un-installation) testing is done on full, partial, or upgraded install/uninstall processes on different operating systems, hardware and software environments.

Integration Testing

Testing of all integrated modules to verify the combined functionality after integration is termed as Integration Testing. This type of testing is especially relevant to client/server and distributed systems.

There are different approaches to Integration Testing; namely, Top-down integration testing, Bottom-up integration testing, and a combination of the two known as Sandwich testing.

Top Down Integration Testing: a testing technique that involves starting at the top of the system hierarchy, at the user interface, and using stubs to test from the top down until the entire system has been implemented. It is conducted by the testing teams.

Incremental Integration Testing is a bottom-up approach offering continuous testing as new functionality is added. The concept of incremental testing is often applied to integration but is relevant to many types of testing, and is particularly found in agile testing approaches.

Finally, Integration Testing is often confused with SIT (System Integration Testing). The former tests at smaller connection levels, and can be done concurrently with other testing such as unit and system test cases. The latter is a larger end-to-end test phase which generally runs after the full system is delivered and confirmed, in a full supporting test environment.

Interface Testing

Testing conducted to evaluate whether systems or components pass data and control correctly to one another. It is usually performed by both testing and development teams.

Load Testing

A non-functional testing type, load testing checks how much load, or what maximum workload, a system can handle without performance degradation. It is particularly needed where web applications are opened up for one-time events like sales or form entries, where all customers need to access the application in a short time-span.
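
The shape of the idea can be sketched with a concurrent driver: fire many simultaneous requests at a function and check everything completes within a budget. The handler, worker counts and budget are illustrative; real load testing uses dedicated tooling against a deployed environment.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    """Stand-in for the system under test."""
    time.sleep(0.01)             # simulate a small amount of work
    return n * 2

def load_test(workers=50, requests=200, budget_seconds=5.0):
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(handle_request, range(requests)))
    elapsed = time.monotonic() - start
    assert len(results) == requests, "some requests were lost under load"
    assert elapsed < budget_seconds, f"load run took {elapsed:.2f}s"
    return elapsed

print(f"completed in {load_test():.2f}s")
```

In a real exercise the workload, ramp-up profile and pass/fail thresholds would come from the non-functional requirements, not be hard-coded like this.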

Load testing is performed using automated tools by technical testers working with architecture teams. Another aspect of these Performance Tests is Scalability.

Localisation Testing

With the internationalisation of software development nowadays, many applications are rolled out from development elsewhere. Localisation Testing confirms that the new release works with current systems and has been localised (in back end, UI and data) for local requirements, e.g. currency, date formats, local milestone dates (such as end-of-year tax dates) etc. This can be a big project to run and requires system testing and user acceptance testing as a minimum, especially if the rollout also includes globalisation testing requirements.

Model-based Testing

The application of Model based design for designing and executing the necessary artifacts to perform software testing. Can be done off-line (with the generation of test suites before testing) or on-line (on-the-fly). Testers involved in MBT should have knowledge in UML (Unified Modelling Language), state charts and finite state machines amongst other concepts.

Monkey Testing

Monkey testing is carried out by a tester entering random inputs and values, as a monkey might, without any knowledge or understanding of the application. It checks that random inputs will not crash the system.

Mutation Testing

Mutation Testing is a type of white box testing in which the source code of the program is deliberately changed, and testing verifies whether the existing test cases can identify the defects introduced into the system. Basically, a test of the tests. An associated security testing technique called Fuzz Testing also qualifies the credibility of the code against hacking access.

Negative Testing

Also known as “test to fail”. A negative testing technique is performed using incorrect data, invalid data or input to validate that the system behaves as expected and notifies an error of invalid input. May also be considered or associated to Domain Testing and/or Error-handling testing. Testers may use fault-injection testing techniques.

Pair/Peer Testing

Buddy system of testing – one person inputs tests while the other analyses results. Pairs can be a mix of any developers, testers, business or customers working side-by-side.

Parallel Testing

A testing technique whose purpose is to ensure that a new application which has replaced its older version has been installed and is running correctly. Parallel is exactly that – both the old and new applications are run against the same test cases to confirm. Requires separate test environments.

Passive Testing

Testing technique of monitoring the results of a running system without introducing any special test data.

Penetration Testing

Security Testing method which evaluates the security of a computer system or network by simulating an attack from a malicious source. Usually they are conducted by specialised penetration testing companies.  A less commonly used name for this is Vulnerability Testing.

Performance Testing

This term is often used interchangeably with ‘stress’ and ‘load’ testing. Performance Testing is done to check whether the system meets the performance requirements. Different performance and load tools are used to do this testing.

Recovery Testing

A type of testing which validates how well the application or system recovers from crashes or disasters. It can be as simple as unplugging a network cable when in the development environment, or be used within a set of testcases for Disaster Recovery and Operations exercises.

Regression Testing

Regression testing takes place when functionality is modified, to check that the system still runs as expected. It is difficult to cover the whole system in regression testing, so automation tools are typically used.

Reliability Testing

Reliability testing combines some performance testing usecases (like stress and load testing) with some functional testing to confirm the system is error free over a period of time. This should be tested to meet service level agreements requirements of the customer. Endurance testing can be associated, where prolonged system use is checked for memory leaks etc.

Requirements Testing

A static testing technique which validates that the requirements are correct, complete, unambiguous, and logical to allow for the designing of test cases from those requirements. It is performed by QA teams.

Risk-Based Testing (RBT)

In Risk Based Testing, the functionalities or requirements are tested based on their priority. Risk-based testing includes testing of highly critical functionality, which has the highest impact on business and in which the probability of failure is very high.

The priority decision is based on the business need, so once priority is set for all functionalities then high priority functionality or test cases are executed first followed by medium and then low priority functionalities. Defects found in testing phases may also be progressed in a risk meeting or scrum meeting involving business stakeholders.

Risk-based testing is carried out typically in projects running as agile or rapid projects, where the business accepts the risk of bugs not being found if there is insufficient time available to test the entire software before delivery.

Sanity Testing

Sanity Testing is done to determine whether a new software version is performing well enough out of the development environment to accept it for a major testing effort. If an application crashes in the initial tests, the system is not stable enough for further testing and the build is returned to development to be fixed.

Note that sanity and smoke testing terms are often used interchangeably. Both are part of the system acceptance test of a new software version coming out from development, to confirm the new build is testable. In my thinking, a smoke test is a quick check of required (new) functionality, while a sanity test is a regression pass through of major previously tested functionality.

Scalability Testing

Part of the battery of non-functional tests which tests a software application for measuring its capability to scale up – be it the user load supported, the number of transactions, the data volume etc. It is conducted by the performance engineer.

Scenario Testing

Testing activity that uses scenarios based on a hypothetical story to help a person think through a complex problem or system through the testing environment.  Can also be associated to use-case testing models found in more rapid or agile development approaches. Other associated terms: work-flow testing and happy-path testing.

Security Testing

Security testing is performed by a specialist team of testers who will be trained in various methods for hacking or internal or external threats, and how to prevent these. Authorisation and authentication processes will be tested as well as internal data security and processes assessed. Some of this may be inhouse, but Penetration Testing of hacker access can often be outsourced to specialised penetration testing agencies.

An associated Mutation testing technique called Fuzz Testing also qualifies the credibility of the code towards hacking access.

Smoke Testing

Smoke Testing checks that no show-stopper defect exists in a new build out of development which would prevent the testing team from testing the application in detail.

As noted under Sanity Testing, the two terms are often used interchangeably. Both are part of the system acceptance test of a new software version coming out of development, to confirm the new build is testable. In my thinking, a smoke test is a quick check of required (new) functionality, while a sanity test is a regression pass through major previously tested functionality.

Static Testing

Static Testing is a type of testing on the documentation, not the code. It involves reviews, walk-throughs, and inspections of the deliverables of the project. Naming conventions, requirements etc can be static tested as a project team – this can involve everyone.

Static testing is also applicable for test cases, test plans, and design documents. These can be peer or project reviewed. Another subset of this type of static document testing is Requirements Testing.

Stress Testing

This testing is done to check performance and error handling when a system is stressed beyond its specifications. Large heavy data loads are input, complex database queries are run and continuous input made to the system. This non-functional testing is often undertaken by technical testers or outsourced for automation.

System Testing

System Testing is normally Black-box type testing based on overall requirement specifications and covers all the combined parts of a system. There are many system, integration and regression testing techniques. The system “testing” phrase can also include many aspects of functional, non-functional, static and technical testing use-cases depending on project and team requirements and processes.

System Integration Testing (SIT)

SIT (System Integration Testing) is sometimes confused with Integration Testing. The latter tests at smaller connection levels, and can be done concurrently with other testing such as unit and system test cases.

SIT is a larger end-to-end test phase which generally runs after the full system is delivered and confirmed in a full supporting test environment.
As the name suggests, the focus of System integration testing is to test for errors related to integration among different applications, services, third party vendor applications etc. Often the full test team will be involved.

Unit Testing

Testing of an individual software component or module is termed as Unit Testing. It is typically done by the programmer and not by testers, as it requires a detailed knowledge of the internal program design and code. It may also require developing test driver modules or test harnesses and stubs.
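
The test harnesses and stubs mentioned above can be sketched briefly: the unit under test normally depends on an external service, so the test injects a stub to keep it isolated and deterministic. The rate service, currencies and values are all illustrative assumptions.

```python
class StubRateService:
    """Test stub standing in for a real exchange-rate dependency."""
    def get_rate(self, currency):
        # Fixed, predictable rates -- no network, no flakiness.
        return {"EUR": 0.9, "GBP": 0.8}[currency]

def convert(amount, currency, rate_service):
    """Unit under test: converts an amount via an injected rate service."""
    return round(amount * rate_service.get_rate(currency), 2)

# The unit is exercised against the stub, not the live service.
stub = StubRateService()
assert convert(100, "EUR", stub) == 90.0
assert convert(10, "GBP", stub) == 8.0
```

Because the dependency is injected rather than hard-wired, the same `convert` unit runs against the real service in production and against the stub in tests.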

Usability Testing

Under Usability Testing, user-friendliness checks are done. This may form part of system testing, or be controlled by business or customers. During the process of discovering the system, often user help documentation will also be created.

Vulnerability Testing

Vulnerability testing is a subset of security testing. Its focus is on how easily a system can be hacked into, or degraded by malicious software, viruses or other external intrusions. It may also be termed Penetration Testing, and may be outsourced.

Volume Testing

Volume testing is a type of non-functional testing performed by the performance testing team. It sits as a subset under stress and load testing.

Figure: The Software Testing Timeline by Joris Meerts, licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
