
Software Testing - Chapter 1


Introduction

This chapter covers black-box and white-box testing; unit, component, and integration testing; load testing and stress testing; the difference between QA and testing; the tester-to-developer ratio; Software Quality Assurance; and software bugs and why they occur.

Details

1. What is black-box/white-box testing?

Black-box and white-box are test design methods.  Black-box test design treats the system as a “black-box”, so it doesn’t explicitly use knowledge of the internal structure.  Black-box test design is usually described as focusing on testing functional requirements.  Synonyms for black-box include:  behavioral, functional, opaque-box, and closed-box.  White-box test design allows one to peek inside the “box”, and it focuses specifically on using internal knowledge of the software to guide the selection of test data.  Synonyms for white-box include: structural, glass-box and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural".  Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged.  In practice, it hasn't proven useful to use a single test design method.  One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one.  Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether.

It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they're implemented.  Note that any level of testing (unit testing, system testing, etc.) can use any test design methods.  Unit testing is usually associated with structural test design, but this is because testers usually don't have well-defined requirements at the unit level to validate.
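To make the distinction concrete, here is a minimal sketch in Python using a hypothetical shipping_fee function: the black-box tests are derived only from the stated requirement, while the white-box tests are chosen by reading the code and making sure both branches are exercised.

    # Hypothetical unit whose requirement is: orders of 50.00 or more ship free;
    # otherwise the fee is 4.99.
    def shipping_fee(order_total: float) -> float:
        if order_total >= 50.0:
            return 0.0
        return 4.99

    # Black-box (behavioral/functional): inputs chosen from the requirement alone,
    # including typical and boundary values.
    assert shipping_fee(100.0) == 0.0
    assert shipping_fee(50.0) == 0.0
    assert shipping_fee(10.0) == 4.99

    # White-box (structural): inputs chosen by inspecting the code so that both
    # branches of the 'if' statement are executed.
    assert shipping_fee(49.99) == 4.99   # false branch, just under the boundary
    assert shipping_fee(50.01) == 0.0    # true branch, just over the boundary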

2. What are unit, component and integration testing?

Note that the definitions of unit, unit testing, component, component testing, and integration testing are recursive:

  • Unit: The smallest compilable component. A unit typically is the work of one programmer (at least in principle). As defined, it does not include any called sub-components (for procedural languages) or communicating components in general.
  • Unit testing: In unit testing, called components (or communicating components) are replaced with stubs, simulators, or trusted components. Calling components are replaced with drivers or trusted super-components. The unit is tested in isolation (see the sketch after this list).
  • Component: A unit is a component. The integration of one or more components is a component. Note: the reason for "one or more", as contrasted with "two or more", is to allow for components that call themselves recursively.
  • Component testing: The same as unit testing, except that all stubs and simulators are replaced with the real thing.
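As a minimal sketch of unit testing in isolation (the component names here are hypothetical), the called component below is replaced with a stub and the test case plays the role of the driver; in component testing the stub would be replaced with the real thing.

    import unittest
    from unittest import mock

    def price_with_tax(amount: float, tax_service) -> float:
        # The unit under test; in production it calls a real tax_service component.
        return amount + tax_service.tax_for(amount)

    class PriceWithTaxTest(unittest.TestCase):        # the test case acts as the driver
        def test_adds_tax_from_called_component(self):
            stub_tax_service = mock.Mock()            # stub replaces the called component
            stub_tax_service.tax_for.return_value = 2.0
            self.assertEqual(price_with_tax(10.0, stub_tax_service), 12.0)

    if __name__ == "__main__":
        unittest.main()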

    Two components (actually one or more) are said to be integrated when:

  1. They have been compiled, linked, and loaded together.
  2. They have successfully passed the integration tests at the interface between them.

Thus, components A and B are integrated to create a new, larger, component (A,B). Note that this does not conflict with the idea of incremental integration—it just means that A is a big component and B, the component added, is a small one.

  • Integration testing: carrying out integration tests. The following definitions of integration tests (after Leung and White) are stated for procedural languages; they generalize easily to OO languages by using the equivalent constructs for message passing. In what follows, the word "call" is to be understood in the most general sense of a data flow and is not restricted to formal subroutine calls and returns; it includes, for example, passage of data through global data structures and/or the use of pointers.

        Let A and B be two components in which A calls B.

        Let Ta be the component-level tests of A.

        Let Tb be the component-level tests of B.

        Let Tab be the tests in A's suite that cause A to call B.

        Let Tbsa be the tests in B's suite for which it is possible to sensitize A -- the inputs are to A, not B.

        Then Tbsa + Tab is the integration test suite (+ = union).

    Note: Sensitize is a technical term.  It means inputs that will cause a routine to go down a specified path.  The inputs are to A.  Not every input to A will cause A to traverse a path in which B is called.  Tbsa is the set of tests which do cause A to follow a path in which B is called.  The outcome of the test of B may or may not be affected.
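A minimal sketch of the suite construction just described (the test names are hypothetical) treats the suites as sets and forms the union of Tab and Tbsa:

    # A's component-level tests (Ta) and the subset that causes A to call B (Tab).
    Ta = {"a_parses_input", "a_handles_empty", "a_delegates_to_b"}
    Tab = {"a_delegates_to_b"}

    # Tests derived from B's suite that can be sensitized through inputs to A (Tbsa).
    Tbsa = {"a_feeds_b_boundary_value", "a_feeds_b_invalid_record"}

    # The '+' in the definition above means set union.
    integration_suite = Tbsa | Tab
    print(sorted(integration_suite))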

    There have been variations on these definitions, but the key point is that it is pretty darn formal and there's a goodly hunk of testing theory, especially as concerns integration testing, OO testing, and regression testing, based on them.

    As for the difference between integration testing and system testing: system testing specifically goes after behaviors and bugs that are properties of the entire system, as distinct from properties attributable to components (unless, of course, the component in question is the entire system). Examples of system testing issues: resource loss bugs, throughput bugs, performance, security, recovery, and transaction synchronization bugs (often misnamed "timing bugs").

3. What's the difference between load and stress testing?

One of the most common, but unfortunate, misuses of terminology is treating "load testing" and "stress testing" as synonymous. The consequence of this semantic abuse is usually that the system is neither properly load tested nor subjected to a meaningful stress test.

  • Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, MIPS, interrupts, etc.) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired, depending on the application, the failure mode, the consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.
  • Load testing is subjecting a system to a statistically representative (usually) load. The two main reasons for using such loads are in support of software reliability testing and in performance testing. The term "load testing" by itself is too vague and imprecise to warrant use; for example, do you mean "representative load," "overload," or "high load"? In performance testing, load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay.

    A third use of the term is as a test whose objective is to determine the maximum sustainable load the system can handle. In this usage, "load testing" is merely testing at the highest transaction arrival rate in performance testing.
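A minimal, deliberately naive sketch of that usage (the endpoint and delay budget below are hypothetical): step the transaction arrival rate up from zero and report the last rate at which no transaction suffered excessive delay. A real load test would generate transactions concurrently and record full latency distributions, but the ramp-until-degradation structure is the same.

    import time
    import urllib.request

    TARGET = "http://localhost:8080/health"   # hypothetical endpoint under test
    MAX_DELAY = 0.5                            # application-specific delay budget, in seconds

    def one_transaction() -> float:
        """Issue one request and return its response time in seconds."""
        start = time.perf_counter()
        urllib.request.urlopen(TARGET, timeout=5).read()
        return time.perf_counter() - start

    sustainable = 0
    for rate in range(1, 101):                 # intended transactions per second
        delays = [one_transaction() for _ in range(rate)]
        if max(delays) > MAX_DELAY:
            break                              # transactions now suffer excessive delay
        sustainable = rate
        time.sleep(max(0.0, 1.0 - sum(delays)))  # crude pacing of the one-second window

    print(f"maximum sustainable load: about {sustainable} transactions/second")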

4. What's the difference between QA and testing?

QA is more of a preventive activity: it is about ensuring quality in the company's processes, and therefore in the product, rather than just testing the product for software bugs.

TESTING means "quality control"

QUALITY CONTROL measures the quality of a product

QUALITY ASSURANCE measures the quality of processes used to create a quality product.

5. What is the best tester to developer ratio?

Reported tester-to-developer ratios range from 10:1 to 1:10.

There's no simple answer. It depends on many things: the amount of reused code, the number and type of interfaces, the platform, quality goals, etc.

It can also depend on the development model. The more detailed the specifications, the fewer testers needed. Roles can also play a big part: does QA own the beta? Do you include process auditors or planning activities?

These figures can all vary very widely depending on how you define "tester" and "developer".  In some organizations, a "tester" is anyone who happens to be testing software at the time -- such as their own.  In other organizations, a "tester" is only a member of an independent test group.

It is better to ask about the test labor content than about the tester/developer ratio. The test labor content across most applications is generally accepted as about 50% when people do honest accounting. For life-critical software, this can go up to 80%.

6. What is Software Quality Assurance?

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

7. What is Software Testing?

Testing involves operating a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, in order to determine whether things happen when they shouldn't or don't happen when they should. It is oriented to 'detection'.
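As a minimal sketch of those controlled conditions (the withdraw function is hypothetical), the first test checks that D happens when the user does C, and the second intentionally tries to make things go wrong:

    import unittest

    def withdraw(balance: float, amount: float) -> float:
        if amount <= 0 or amount > balance:
            raise ValueError("invalid withdrawal")
        return balance - amount

    class WithdrawTest(unittest.TestCase):
        def test_normal_condition(self):
            # Normal condition: a valid withdrawal reduces the balance.
            self.assertEqual(withdraw(100.0, 40.0), 60.0)

        def test_abnormal_condition(self):
            # Detection-oriented: deliberately attempt an invalid operation.
            with self.assertRaises(ValueError):
                withdraw(100.0, 200.0)

    if __name__ == "__main__":
        unittest.main()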

Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.

8. What are some recent major computer system failures caused by Software bugs?

  • In March of 2002 it was reported that software bugs in Britain's national tax system resulted in more than 100,000 erroneous tax overcharges. The problem was partly attributed to the difficulty of testing the integration of multiple systems.
  • A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf software that had long been used in systems for tracking certain U.S. nuclear materials. The same software had been recently donated to another country to be used in tracking their own nuclear materials, and it was not until scientists in that country discovered the problem, and shared the information, that U.S. officials became aware of the problems.
  • According to newspaper stories in mid-2001, a major systems development contractor was fired and sued over problems with a large retirement plan management system. According to the reports, the client claimed that system deliveries were late, the software had excessive defects, and it caused other systems to crash.
  • In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date '31/12/2000'; the trains were started by altering the control system's date settings.
  • News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn't work.
  • In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district's CIO was fired. The school district decided to reinstate its original 25-year-old system for at least a year until the bugs were worked out of the new system by the software vendors.
  • In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.
  • Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages.
  • In April of 1999 a software bug caused the failure of a $1.2 billion military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S. space launch programs, including software integration and testing processes. Congressional oversight hearings were requested.
  • A small town in Illinois received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be due to bugs in new software that had been purchased by the local power company to deal with Y2K software issues.
  • In early 1999 a major computer game company recalled all copies of a popular new product due to software problems. The company made a public apology for releasing a product before it was ready.
  • The computer system of a major online U.S. stock trading service failed during trading hours several times over a period of days in February of 1999 according to nationwide news reports. The problem was reportedly due to bugs in a software upgrade intended to speed online trade confirmations.
  • In April of 1998 a major U.S. data communications network failed for 24 hours, crippling a large part of some U.S. credit card transaction authorization systems as well as other large U.S. bank, retail, and government data systems. The cause was eventually traced to a software bug.
  • January 1998 news reports told of software problems at a major U.S. telecommunications company that resulted in no charges for long distance calls for a month for 400,000 customers. The problem went undetected until customers called up with questions about their bills.
  • In November of 1997 the stock of a major health industry company dropped 60% due to reports of failures in computer billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by government agencies.
  • A retail store chain filed suit in August of 1997 against a transaction processing system vendor (not a credit card company) due to the software's inability to handle credit cards with year 2000 expiration dates.
  • In August of 1997 one of the leading consumer credit reporting companies reportedly shut down their new public web site after less than two days of operation due to software problems. The new site allowed web site visitors instant access, for a small fee, to their personal credit reports. However, a number of initial users ended up viewing each others' reports instead of their own, resulting in irate customers and nationwide publicity. The problem was attributed to "...unexpectedly high demand from consumers and faulty software that routed the files to the wrong computers."
  • In November of 1996, newspapers reported that software bugs caused the 411 telephone information system of one of the U.S. RBOCs to fail for most of a day. Most of the 2,000 operators had to search through phone books instead of using their 13,000,000-listing database. The bugs were introduced by new software modifications, and the problem software had been installed on both the production and backup systems. A spokesman for the software vendor reportedly stated that 'It had nothing to do with the integrity of the software. It was human error.'
  • On June 4, 1996 the first flight of the European Space Agency's new Ariane 5 rocket failed shortly after launching, resulting in an estimated uninsured loss of a half billion dollars. It was reportedly due to the lack of exception handling for a floating-point error in a conversion from a 64-bit floating-point value to a 16-bit signed integer (see the sketch after this list).
  • Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered.
  • Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear war in 1983, according to news reports in early 1999. The software was supposed to filter out false missile detections caused by Soviet satellites picking up sunlight reflections off cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on what he said was a '...funny feeling in my gut', decided the apparent missile attack was a false alarm. The filtering software code was rewritten.
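As a minimal illustration of the kind of narrowing conversion blamed for the Ariane 5 failure (the variable name and value here are hypothetical), packing an out-of-range value into a 16-bit signed integer raises an error that, left unhandled, would abort the program:

    import struct

    horizontal_bias = 40000.0                   # hypothetical value; exceeds the 16-bit limit of 32767

    try:
        struct.pack("h", int(horizontal_bias))  # "h" is a 16-bit signed integer
    except struct.error as exc:
        # With no handler at this point, the original guidance software shut down.
        print(f"operand error: {exc}")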

9. Why is it often hard for management to get serious about quality assurance?

Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable:

In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which member of his family was the most skillful healer. He replied, "I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords."

"My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors."

"My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home."

10. Why does Software have bugs?

  • Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).
  • Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well-engineered.
  • Programming errors - programmers, like anyone else, can make mistakes.
  • Changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.
  • Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

  • Egos - people prefer to say things like:

  • 'No problem'
  • 'Piece of cake'
  • 'I can whip that out in a few hours'
  • 'It should be easy to update that old code'

     Instead of:

  • 'That adds a lot of complexity and we could end up making a lot of mistakes'
  • 'We have no idea if we can do that; we'll wing it'
  • 'I can't estimate how long it will take, until I take a close look at it'
  • 'We can't figure out what that old spaghetti code did in the first place'

     If there are too many unrealistic 'no problems', the result is bugs.

  • Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').
  • Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

11. How can new Software QA processes be introduced in an existing organization?

  • A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.
  • Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.
  • For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.
  • In all cases the most value for effort will be in requirements management processes, with a goal of clear, complete, testable requirement specifications or expectations.
