
Thursday, 17 May 2012

Ground of Testing

What is Software Testing?
Software testing is the process of executing
software in a controlled manner, in order to answer the question "Does the software
behave as specified?".
Software testing is often used in association with the terms verification and validation.
Verification is the checking or testing of items, including software, for conformance and
consistency with an associated specification. Software testing is just one kind of
verification, which also uses techniques such as reviews, analysis, inspections and
walkthroughs.
Validation is the process of checking that what has been specified is what the
user actually wanted.

· Validation: Are we doing the right job?
· Verification: Are we doing the job right?

The term bug is often used to refer to a problem or fault in a computer. There are software
bugs and hardware bugs. The term originated in the United States, at the time when
pioneering computers were built out of valves, when a series of previously inexplicable
faults were eventually traced to moths flying about inside the computer.

Software testing should not be confused with debugging. Debugging is the process of
analyzing and locating bugs when software does not behave as expected. Although the
identification of some bugs will be obvious from playing with the software, a methodical
approach to software testing is a much more thorough means of identifying bugs.

Debugging is therefore an activity which supports testing, but cannot replace testing.
However, no amount of testing can be guaranteed to discover all bugs.

Other activities which are often associated with software testing are static analysis and
dynamic analysis. Static analysis investigates the source code of software, looking for
problems and gathering metrics without actually executing the code. Dynamic analysis
looks at the behaviour of software while it is executing, to provide information such as
execution traces, timing profiles, and test coverage.
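To make the opening definition concrete, here is a minimal sketch of executing software in a controlled manner and checking it against its specification, written with JUnit 4 as an assumed framework; the Calculator class and its specified behaviour are invented purely for illustration.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test. The assumed specification: divide(a, b)
// returns the integer quotient and rejects division by zero.
class Calculator {
    static int divide(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("division by zero");
        }
        return a / b;
    }
}

public class CalculatorTest {

    @Test
    public void divideReturnsQuotientAsSpecified() {
        // Controlled input; the expected output comes from the specification.
        assertEquals(3, Calculator.divide(10, 3));
    }

    @Test(expected = IllegalArgumentException.class)
    public void divideByZeroIsRejectedAsSpecified() {
        Calculator.divide(10, 0);
    }
}

Each test either passes or fails against the specification, which is the question "Does the software behave as specified?" asked in executable form.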

Test Plan Outline
A test plan shall have the following structure:
a) Test plan identifier;
b) Introduction;
c) Test items;
d) Features to be tested;
e) Features not to be tested;
f) Approach;
g) Item pass/fail criteria;
h) Suspension criteria and resumption requirements;
i) Test deliverables;
j) Testing tasks;
k) Environmental needs;
l) Responsibilities;
m) Staffing and training needs;
n) Schedule;
o) Risks and contingencies;
p) Approvals.
The sections shall be ordered in the specified sequence.

Test items
Identify the test items including their version/revision level. Also specify characteristics of their transmittal
media that impact hardware requirements or indicate the need for logical or physical transformations before
testing can begin (e.g., programs must be transferred from tape to disk).
Supply references to the following test item documentation, if it exists:
Requirements specification;
Design specification;
Users guide;
Operations guide;
Installation guide.

Features to be tested
Identify all software features and combinations of software features to be tested. Identify the test design
specification associated with each feature and each combination of features.

Features not to be tested
Identify all features and significant combinations of features that will not be tested and the reasons.

What does it take to build the best Test Organization?

Attitude
Conviction
Killer instinct to dig out and deliver

Culture
Work towards passion and not money
Work towards technology, sharing and learning
Power of Ethics

What we do:

Building silicon with xyz architecture.
Putting e-Linux on it, building an image and then loading it on top.
Adding wireless network support, followed by release.

Some fun time:

1. Reporting all tests as passed and sending out the report without actually executing them, and then having the product backfire at the customer's premises. The industry does not spare mistakes, and this one can be the worst.

2.

Templates:

Test Plan/ Test Case
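As a rough sketch of the fields a test case template might capture (the field names are assumptions, loosely following the test plan outline above), a simple Java class could look like this:

// A minimal test case record; the fields are an assumed subset of a
// typical test case specification, not a mandated format.
public class TestCaseRecord {
    String id;              // e.g. "TC-042"
    String testItem;        // item under test and its version
    String feature;         // feature (or feature combination) covered
    String preconditions;   // environment and setup needed
    String steps;           // inputs and actions, in order
    String expectedResult;  // pass/fail criterion
    String severity;        // impact if the test fails (e.g. Crasher)
    String priority;        // urgency of fixing (e.g. Blocker)
}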

Priority and severity: their states, the trade-offs between them, and how they map to our jargon of Blocker and Crasher.

Release Blockers: lowest severity but first priority/BLOCKER (from our perspective):

Examples of Extreme Cases:

Has anyone come across a Microsoft product which says "Win" instead of "Windows"? You won't be able to find one. Why? Because as a tester you might log it at the lowest severity, but for the vendor/Microsoft it becomes priority 1/BLOCKER.

Test Blockers: a typical case in which you log a crash bug (Blocker), but management treats it as the lowest priority. Why?

In one instance, a vendor released a version of an OS knowing that, after installing the OS on a new machine, pulling out the cable to the HDD would crash the OS and leave it completely unrecoverable, requiring the entire OS to be re-installed. The vendor still released it. Why? Because the vendor did not expect the end user to do that.

Examples of Extreme Cases: Severity 1 but last priority: Crash.
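A small sketch of how severity and priority remain independent axes; the enum values and the two example bugs below are assumptions made to mirror the two extreme cases described above.

// Severity measures impact; priority measures urgency. The two extreme
// cases above sit in opposite corners of this grid.
enum Severity { LOW, MEDIUM, HIGH, CRASHER }
enum Priority { LOW, MEDIUM, HIGH, BLOCKER }

class Bug {
    final String summary;
    final Severity severity;
    final Priority priority;

    Bug(String summary, Severity severity, Priority priority) {
        this.summary = summary;
        this.severity = severity;
        this.priority = priority;
    }
}

class TriageExamples {
    public static void main(String[] args) {
        // Cosmetic defect, yet a release blocker for the vendor's branding.
        Bug branding = new Bug("Dialog says \"Win\" instead of \"Windows\"",
                Severity.LOW, Priority.BLOCKER);
        // Unrecoverable crash, yet deferred because no user is expected to hit it.
        Bug hddPull = new Bug("OS unrecoverable after HDD cable pulled post-install",
                Severity.CRASHER, Priority.LOW);
        System.out.println(branding.summary + " -> " + branding.priority);
        System.out.println(hddPull.summary + " -> " + hddPull.priority);
    }
}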





Effective Execution and Reporting:

Importance of Logs

The importance of logging, compared with not logging at all.
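A minimal sketch of what logging during a test run might look like, using java.util.logging; the test identifier, the wireless scenario and the log file name are invented for illustration.

import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class TestRunLogging {
    private static final Logger LOG = Logger.getLogger("testrun");

    public static void main(String[] args) throws Exception {
        // Write the run log to a file so a failure can be analysed later;
        // without it, a failed run leaves nothing behind to investigate.
        FileHandler handler = new FileHandler("testrun.log");
        handler.setFormatter(new SimpleFormatter());
        LOG.addHandler(handler);

        LOG.info("TC-042 started: wireless association");
        boolean associated = false; // placeholder for the real check
        if (associated) {
            LOG.info("TC-042 passed");
        } else {
            LOG.severe("TC-042 failed: association timed out");
        }
    }
}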


Automation: What it takes to implement.
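As a rough sketch of a first step towards automation, the JUnit 4 test class from the earlier sketch can be run unattended and its results summarised programmatically; the NightlyRun name and the reuse of CalculatorTest are assumptions.

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class NightlyRun {
    public static void main(String[] args) {
        // Execute the test class without human interaction and summarise the outcome.
        Result result = JUnitCore.runClasses(CalculatorTest.class);
        for (Failure failure : result.getFailures()) {
            System.out.println("FAIL: " + failure);
        }
        System.out.println(result.getRunCount() + " run, "
                + result.getFailureCount() + " failed");
    }
}

Scheduling such a run (for example nightly) and keeping its logs is where the execution, reporting and logging points above come together.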


The Road Ahead:

From writing Java files in Notepad to code-generating wizards; the importance of testing remains.

A couple of URLs that could come in handy:

http://en.wikipedia.org/wiki/Software_testing

http://en.wikipedia.org/wiki/Scenario_test

http://en.wikipedia.org/wiki/Test_suite

http://en.wikipedia.org/wiki/Software_engineering

http://en.wikipedia.org/wiki/Test_script

http://en.wikipedia.org/wiki/Regression_testing

stickyminds.com

whatistesting.com

scriptinganswers.com

perlmonks.com

sqa-tester.com

indiantestingboard.org
