Acceptance Test Engineering Guide

Why acceptance testing?

Patterns & practices has produced only a few guides related to testing (including performance testing, security testing of web apps, and testing of .NET application blocks). However, we receive many requests from our customers for guidance on testing and test strategy in general, as well as on nearly every type of testing you can think of. Based on this customer feedback, and a review of the guidance already available, we determined that acceptance testing was the next area to invest in.

What is acceptance testing?

Working definitions for a number of terms are available in our Draft Glossary. The current definition that is framing our work and discussions is:

  • Acceptance Testing: Planned evaluation of a system by customers/customer proxies to assess to what degree it satisfies their expectations.

We are open to suggestions on this and other terms. Please leave comments on the Draft Glossary page.

What are we producing?

This guide is the first in a series of three dedicated to acceptance testing and requirements engineering:

  • Acceptance test engineering guide
  • Acceptance test automation guide
  • Tool support for acceptance test-driven development.

The first guide will cover the discipline of acceptance testing from several perspectives and contexts. It will provide models, heuristics, and a set of actionable job aids rooted in a sample application. The focus is on:

  • How to Plan for Acceptance Testing
  • What Kinds of Acceptance Tests to Run
  • How to Create and Run Acceptance Tests
  • Defining What “Done” Means
  • How to Justify Your Approach

What can you learn from the guide?

If any of the following goals apply to you, you will want to check out the guide.

You can download the Acceptance Test Engineering Guide from here.


MSDN: Unit Testing Web Applications


Unit testing, sometimes referred to as developer testing, focuses on testing small pieces of code, such as a class, that a developer is writing. These tests are critical for helping you ensure that the pieces you build work as expected and will operate correctly when combined with other parts of the application. Such testing helps support management of the application over time by ensuring that changes you make don’t inadvertently affect other parts of the system.

This chapter shows you how to get started unit testing JavaScript as well as server-side code, but does not cover all aspects of unit testing. References to more detailed discussions about unit testing can be found at the end of this chapter. While the unit tests for the Mileage Stats application were written using Test-First Development (or Test-Driven Development), this chapter will only cover the test-oriented aspects of unit testing, not the design aspects.

This chapter does not cover other important aspects of testing, such as performance, stress, security, automation, deployment, localization, and globalization. Nor does it discuss other important aspects to consider when testing the client side, such as cross-browser compatibility or usability. However, these areas are important for you to consider when testing your web application.

In this chapter you will learn:

  • How to get started unit testing your JavaScript and ASP.NET MVC code.
  • The arrange-act-assert unit test structure.
  • Techniques to isolate your tests and components.
  • Things you should consider when testing your jQuery UI widgets.

The technologies discussed in this chapter are QUnit, used to test your JavaScript and jQuery client-side code, and Moq, used to test your server-side ASP.NET MVC code.
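The arrange-act-assert structure mentioned above can be sketched in a few lines. The `Calculator` class below is a hypothetical stand-in for the code under test, not part of the Mileage Stats application, and the example uses plain Python rather than QUnit or Moq purely to keep it self-contained:

```python
class Calculator:
    """Hypothetical class under test (not from the Mileage Stats app)."""
    def add(self, a, b):
        return a + b

def test_add_returns_sum():
    # Arrange: create the object under test and its inputs.
    calc = Calculator()
    # Act: exercise the behavior being tested.
    result = calc.add(2, 3)
    # Assert: verify the outcome matches the expectation.
    assert result == 5

test_add_returns_sum()
```

The same three-phase shape applies whatever the framework: setup first, a single action in the middle, and verification at the end, which keeps each test focused on one behavior.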

Error Guessing in Software Testing

Error guessing in software testing can find a number of faults that systematic techniques may fail to catch. Test cases are derived from experience of where defects have occurred in the past, or from the tester's intuition about where defects are likely to occur in the future.

Error guessing should be used as a ‘mopping-up’ technique or as a supplement to systematic techniques, not as the first-choice approach.

Some testers have a genius for detecting defects, thanks to intuition and experience. There is no defined procedure for such a highly intuitive, ad hoc process.

Main ideas:

  • make a list of likely defects or error-prone situations, and derive test cases from it
  • identify test cases for assumptions that may have been made when the specification was written

See an example: “Binary search”

  • only one entry is in table
  • table size is power of 2
  • table size is (power of 2) +/- 1
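Those table-size guesses are classic boundary cases for a binary search over a sorted table. As a sketch of how the guesses become concrete test cases (the search implementation below is assumed, since the article gives none):

```python
def binary_search(table, key):
    """Return the index of key in a sorted table, or -1 if absent."""
    lo, hi = 0, len(table) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if table[mid] == key:
            return mid
        if table[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# Error-guessed cases: a one-entry table, and table sizes of
# 2**k - 1, 2**k, and 2**k + 1 (here 7, 8, and 9).
assert binary_search([7], 7) == 0            # only one entry in the table
for size in (7, 8, 9):
    table = list(range(size))
    assert all(binary_search(table, k) == k for k in table)
    assert binary_search(table, size) == -1  # key not present
```

The point of the boundary sizes is that off-by-one mistakes in the midpoint or range updates tend to surface exactly when the table size sits on either side of a power of 2.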

See another example: “Sorting subroutine”

  • input list is empty
  • single entry in input list
  • all entries have the same value
  • entries have already been sorted
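The four guesses above translate directly into assertions. The insertion sort here is only a stand-in for the sorting subroutine, which the article does not name:

```python
def insertion_sort(items):
    """A stand-in sorting subroutine (the article names no specific one)."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

# Error-guessed cases from the list above:
assert insertion_sort([]) == []                # input list is empty
assert insertion_sort([4]) == [4]              # single entry in input list
assert insertion_sort([2, 2, 2]) == [2, 2, 2]  # all entries of equal value
assert insertion_sort([1, 2, 3]) == [1, 2, 3]  # entries already sorted
```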

A special form:

Data-structure-based testing: look at the data structures used and consider cases related to those structures.

See an example: “Linked list”

  • zero components
  • only one component
  • one less than max quantity of components
  • max quantity of components
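A capacity-bounded list makes those four cases testable. The `BoundedLinkedList` below is a minimal, hypothetical structure invented for illustration:

```python
class BoundedLinkedList:
    """Minimal singly linked list with a fixed capacity (illustrative only)."""
    class _Node:
        def __init__(self, value, link=None):
            self.value, self.link = value, link

    def __init__(self, capacity):
        self.capacity = capacity
        self.head = None
        self.size = 0

    def push(self, value):
        if self.size == self.capacity:
            raise OverflowError("list is full")
        self.head = self._Node(value, self.head)
        self.size += 1

# Error-guessed cases: zero, one, capacity - 1, and capacity components.
lst = BoundedLinkedList(capacity=3)
assert lst.size == 0        # zero components
lst.push("a")
assert lst.size == 1        # only one component
lst.push("b")
assert lst.size == 2        # one less than the maximum
lst.push("c")
assert lst.size == 3        # maximum quantity of components
try:
    lst.push("d")           # pushing past capacity should fail cleanly
    assert False, "expected OverflowError"
except OverflowError:
    pass
```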

Software Installation / Uninstallation Testing

Have you performed software installation testing? How was the experience? Installation testing (also called implementation testing) is quite an interesting part of the software testing life cycle.

Installation testing is like introducing a guest to your home: the new guest should be properly introduced to all the family members so that they feel comfortable. Installing new software on a system is much the same.

If your installation succeeds on the new system, the customer will certainly be happy. But what if things go completely the other way? If installation fails, the program will not work on that system; worse, it can leave the user’s system badly damaged, and the user might need to reinstall the entire operating system.

In that case, will you make a good impression on the user? Definitely not! Your first chance to win a loyal customer is ruined by incomplete installation testing. What do you need to do to make a good first impression? Test the installer thoroughly, using a combination of manual and automated processes, on different machines with different configurations. A major concern in installation testing is time: executing even a single test case takes a long time. If you are testing a large application installer, think about the time required to run that many test cases across different configurations.

We will look at different methods for performing manual installer testing, along with some basic guidelines for automating the installation process.

To start installation testing, first decide how many different system configurations you want to test the installation on. Prepare one basic hard disk drive: format it with the most common or default file system, install the most common operating system (Windows), and install any basic required components. Create an image of this base HDD, and build the other configurations on top of the base drive, imaging each one. Make one set per configuration (operating system plus file system) to be used for further testing.
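Enumerating the configuration matrix up front makes the imaging effort predictable. A small sketch, where the operating system and file-system values are placeholders rather than a recommended matrix:

```python
from itertools import product

# Placeholder values; substitute the OSes and file systems you must support.
operating_systems = ["Windows XP", "Windows Vista", "Windows 7"]
file_systems = ["NTFS", "FAT32"]

# Each (OS, file system) pair is one base image to prepare.
configurations = list(product(operating_systems, file_systems))
for os_name, fs in configurations:
    print(f"base image: {os_name} on {fs}")

# 3 operating systems x 2 file systems = 6 base images.
assert len(configurations) == 6
```

The cross-product grows quickly as you add dimensions (service packs, locales, disk layouts), which is exactly why imaging each base configuration once, rather than reinstalling, pays off.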

How can we use automation in this process? Dedicate some systems to creating base images of each basic configuration (use software like Norton Ghost to create exact images of an operating system quickly). This will save you tremendous time on every test case. For example, if installing one OS with the basic configuration takes, say, 1 hour, then each test case on a fresh OS will require 1+ hours. But restoring the OS from an image takes barely 5 to 10 minutes, so you save approximately 40 to 50 minutes per test case!

You can use one operating system for multiple attempts at running the installer, uninstalling the application each time and restoring the base state for the next test case. Be careful here: your uninstallation program should itself have been tested beforehand and be known to work correctly.

Software Testing Types

Just as a reminder:

Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

White box testing – This testing is based on knowledge of the internal logic of an application’s code. Also known as glass box testing. The internal workings of the software and code must be known for this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.

Unit testing – Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. It may require developing test driver modules or test harnesses.

Incremental integration testing – A bottom-up approach to testing, i.e., continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.

Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per the requirements. Black-box-type testing geared to the functional requirements of an application.

System testing – The entire system is tested against the requirements. Black-box-type testing based on the overall requirements specification; it covers all combined parts of the system.

End-to-end testing – Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing – Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes on initial use, the system is not stable enough for further testing, and the build or application is sent back to be fixed.

Regression testing – Testing the application as a whole after a modification to any module or functionality. It is difficult to cover the entire system in regression testing, so automation tools are typically used for this type of testing.

Acceptance testing – Normally this type of testing is done to verify whether the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.

Load testing – A performance test to check system behavior under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.

Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as input exceeding storage capacity, complex database queries, or continuous input to the system or database.

Performance testing – A term often used interchangeably with ‘stress’ and ‘load’ testing; checks whether the system meets its performance requirements. Various performance and load tools are used for this.

Usability testing – A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever a user might get stuck? Basically, system navigation is checked in this testing.

Install/uninstall testing – Testing of full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.

Recovery testing – Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing – Can the system be penetrated by any hacking technique? Testing how well the system protects against unauthorized internal or external access, and checking whether the system and database are safe from external attacks.

Compatibility testing – Testing how well software performs in a particular hardware/software/operating system/network environment, and in different combinations of the above.

Comparison testing – Comparison of product strengths and weaknesses with previous versions or other similar products.

Alpha testing – An in-house virtual user environment can be created for this type of testing. It is done toward the end of development; minor design changes may still be made as a result of such testing.

Beta testing – Testing typically done by end users or others; the final testing before releasing the application commercially.