Software Testing: A Brief Primer

What is Software Testing?

There are many published definitions of software testing; however, all of these definitions
boil down to essentially the same thing: software testing is the process of executing
software in a controlled manner, in order to answer the question "Does the software
behave as specified?"
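
To make this concrete, here is a minimal sketch in Python. The function add and its "specification" are hypothetical stand-ins for the software under test: a small unittest suite executes the software in a controlled manner and compares the observed behavior against the specified behavior.

```python
import unittest

# Hypothetical unit under test; its "specification" says that
# add(a, b) must return the arithmetic sum of its arguments.
def add(a, b):
    return a + b

class AddSpecTest(unittest.TestCase):
    """Executes the software in a controlled manner and checks the
    observed behavior against the specified behavior."""

    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

# Run the suite programmatically and collect the verdict.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddSpecTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test case is one controlled execution; the suite's verdict is the answer to the question above for the inputs chosen.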

Software testing is often used in association with the terms verification and validation.
Verification is the checking or testing of items, including software, for conformance and
consistency with an associated specification. Software testing is just one kind of
verification, which also uses techniques such as reviews, analysis, inspections and
walkthroughs. Validation is the process of checking that what has been specified is what the
user actually wanted.

· Validation: Are we doing the right job?

· Verification: Are we doing the job right?

The term bug is often used to refer to a problem or fault in a computer. There are software
bugs and hardware bugs. The term originated in the United States, at the time when
pioneering computers were built out of valves, when a series of previously inexplicable
faults was eventually traced to moths flying about inside the computer.

Software testing should not be confused with debugging. Debugging is the process of
analyzing and locating bugs when software does not behave as expected. Although the
identification of some bugs will be obvious from playing with the software, a methodical
approach to software testing is a much more thorough means of identifying bugs.

Debugging is therefore an activity which supports testing, but cannot replace testing.
However, no amount of testing can be guaranteed to discover all bugs.

Other activities which are often associated with software testing are static analysis and
dynamic analysis. Static analysis investigates the source code of software, looking for
problems and gathering metrics without actually executing the code. Dynamic analysis
looks at the behavior of software while it is executing, to provide information such as
execution traces, timing profiles, and test coverage information.
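
As an illustrative sketch (not a production tool), the kind of information dynamic analysis gathers can be approximated in Python with sys.settrace, which records which lines of a hypothetical function actually execute for a given test input:

```python
import sys

# Hypothetical unit under test, with two branches.
def absolute(x):
    if x < 0:
        return -x
    return x

executed_lines = set()

def tracer(frame, event, arg):
    # Record every source line of absolute() as it executes.
    if frame.f_code.co_name == "absolute" and event == "line":
        executed_lines.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
absolute(5)          # exercises only the non-negative branch
sys.settrace(None)

# executed_lines now shows which lines this single input covered;
# a real coverage tool reports the same idea across a whole test suite.
```

Running the function with only a non-negative input leaves the negative branch uncovered, which is exactly the gap a coverage report would flag.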

4.2 Outline

A test plan will have the following structure:

a) Test plan identifier;

b) Introduction;

c) Test items;

d) Features to be tested;

e) Features not to be tested;

f) Approach;

g) Item pass / fail criteria;

h) Suspension criteria and resumption requirements;

i) Test deliverables;

j) Testing tasks;

k) Environmental needs;

l) Responsibilities;

m) Staffing and training needs;

n) Schedule;

o) Risks and contingencies;

p) Approval.

The sections will be ordered in the specified sequence.
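
The outline above can double as a completeness checklist. The sketch below is an illustrative helper, not part of any standard tooling: it encodes the sixteen sections and reports which ones a draft plan is still missing.

```python
# The sixteen test plan sections, used as a completeness checklist.
REQUIRED_SECTIONS = [
    "Test plan identifier", "Introduction", "Test items",
    "Features to be tested", "Features not to be tested", "Approach",
    "Item pass/fail criteria",
    "Suspension criteria and resumption requirements",
    "Test deliverables", "Testing tasks", "Environmental needs",
    "Responsibilities", "Staffing and training needs", "Schedule",
    "Risks and contingencies", "Approvals",
]

def missing_sections(draft_headings):
    """Return the required sections absent from a draft plan's headings."""
    present = {h.strip().lower() for h in draft_headings}
    return [s for s in REQUIRED_SECTIONS if s.lower() not in present]

# A draft with only two sections written is missing the other fourteen.
print(len(missing_sections(["Introduction", "Approach"])))  # → 14
```
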

Test items

Identify the test items, including their version / revision level. Also specify characteristics of their
transmittal media that impact hardware requirements or indicate the need for logical or physical
transformations before testing can begin (e.g., programs must be transferred from tape to disk).

Supply references to the following test item documentation, if it exists:

Requirements specification;

Design specification;

User's guide;

Operations guide;

Installation guide.

Features to be tested

Identify all software features and combinations of software features to be tested. Identify the test design
specifications associated with each feature and each combination of features.

Features not to be tested

Identify all features and significant combinations of features that will not be tested and the reasons.

What does it take to build the best Test Organization?



· A killer instinct to dig out and deliver;

· Working for passion, not money;

· Working towards technology, sharing and learning;

· The power of ethics.

What we do:

· Building silicon with xyz architecture;

· Putting e-linux on it, building an image, and then building on top of it;

· Wireless network support, followed by release.

Some fun time:

1. Reporting all passes and sending the report without actually executing the tests. The promises made to the customer then backfire on the product. The industry does not spare mistakes, and this one can be among the worst.



Test Plan / Test Case

Priority and severity: their states and the trade-offs between them, mapped to our jargon of Blocker and Crasher.

Release blockers: lowest severity but 1st priority / BLOCKER (from our perspective):

Examples of Extreme Cases:

Has anyone come across a Microsoft product which says "Win" instead of "Windows"? You will not be able to find one. Why? Because, as a tester, you might log it with the lowest severity, but for the vendor / Microsoft it becomes priority 1 / a BLOCKER.

Test blockers: the typical case in which you log a crash bug (a Blocker), but management treats it as the lowest priority. Why?

In one instance, a vendor released a version of an OS knowing that, after installing the OS on a new machine, pulling out the cable to the HDD would crash the OS, leave it completely unrecoverable, and require the entire OS to be re-installed. The vendor still released it. Why? Because the vendor did not expect the end user to do that.

Examples of extreme cases: severity 1 but last priority: a Crasher.
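
The severity / priority trade-off in these examples can be summarised in a tiny triage sketch. The rules and labels below are illustrative, matching the jargon above, not a standard scheme:

```python
# Severity describes technical impact (1 = worst); priority describes
# how urgently the fix is needed (1 = most urgent). They are independent.
def triage(severity, priority):
    if priority == 1:
        return "release blocker"  # fix before shipping, however mild the symptom
    if severity == 1:
        return "crasher"          # severe, but may be deferred if unlikely in the field
    return "normal queue"

print(triage(severity=4, priority=1))  # the "Win" vs "Windows" typo case
print(triage(severity=1, priority=4))  # the pull-the-HDD-cable crash case
```

The point of the sketch is that neither axis implies the other: a cosmetic bug can block a release, and a crash can sit at the bottom of the queue.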

Effective Execution and Reporting:

Importance of Logs

The importance of logging, compared with not logging at all.
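
A minimal sketch of the difference logging makes, using Python's standard logging module (the step runner and its names are hypothetical): without the log, only the final verdict survives; with it, the steps that led to a failure do too.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("testrun")

def run_step(name, action):
    """Execute one test step, logging its start and outcome."""
    log.info("starting step: %s", name)
    try:
        action()
        log.info("step passed: %s", name)
        return True
    except AssertionError as exc:
        log.error("step failed: %s (%s)", name, exc)
        return False

run_step("sanity check", lambda: None)
```

When a step fails weeks later on another machine, the log is often the only record of what the run actually did.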

Automation: what it takes to implement.
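
At its core, implementing automation means turning manual checks into a data-driven suite that a machine can execute and judge. A minimal sketch, with a hypothetical unit under test:

```python
# Hypothetical unit under test.
def square(x):
    return x * x

# Expected behavior, derived from the specification: (input, expected) pairs.
CASES = [(0, 0), (3, 9), (-3, 9)]

def run_suite():
    """Execute every case; return the inputs that failed."""
    return [x for x, expected in CASES if square(x) != expected]

print(run_suite())  # → [] (an empty list means the suite passed)
```

Everything beyond this in a real framework, such as reporting, setup/teardown, and scheduling, is elaboration of the same loop.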

The Road Ahead:

From writing Java files in Notepad to code-generating wizards. The importance of testing.

A couple of URLs that could come in handy:

Source by Abhinav Vaid