JabRef References output


Beizer, B. Black-Box Testing: Techniques for Functional Testing of Software and Systems 1995   book  
Review: A general classification of test types, applicable in any context, is the distinction between dirty and clean tests.

-Dirty test (also negative test): a test whose primary purpose is falsification; that is, a test designed to break the software.

-Clean test (also positive test): a test whose primary purpose is validation; that is, a test designed to demonstrate the software's correct working.
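As a minimal sketch of the two kinds (the `divide` function here is invented purely for illustration, not taken from the book):

```python
# Hypothetical SUT used only to illustrate clean vs. dirty tests.
def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("b must be nonzero")
    return a / b

def clean_test():
    # Clean (positive) test: validates correct behaviour on valid input.
    assert divide(10, 2) == 5

def dirty_test():
    # Dirty (negative) test: tries to break the software with invalid
    # input, passing only if the failure is handled as specified.
    try:
        divide(1, 0)
    except ZeroDivisionError:
        return True
    return False

clean_test()
assert dirty_test()
```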

A test strategy or technique is a systematic method used to select and/or generate the tests to be included in a test suite. In general there are two main approaches or strategies:

-Behavioral testing: strategies based on the requirements. This is the same as black-box testing, and functional testing also falls under it. (e.g.: execute all the dirty tests implied by the requirements.)

-Structural testing: strategies derived from the structure of the tested object (e.g.: execute every statement at least once). The same as white-box and glass-box testing.

-Hybrid test strategies combine both; each strategy is useful in different situations.
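The structural criterion "execute every statement at least once" can be sketched as follows (the `classify` function is a made-up example):

```python
# Hypothetical function with one branch; statement coverage requires at
# least one test through each branch so that every statement executes.
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

# Structural tests chosen from the code's shape, not from the requirements:
assert classify(-1) == "negative"      # exercises the if-branch statement
assert classify(3) == "non-negative"   # exercises the fall-through statement
```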

The book also identifies several bug types: unit/component bugs, integration bugs, and system bugs. Such bugs are discovered more efficiently by one technique than another, or depending on the hybrid being used.

Black-box techniques generate test cases using graph models (e.g.: transaction flow, finite state, data flow, timing models). These models are based on the behavior and not on the code; they are used to prove that exactly the expected nodes exist and that the links between them are also exactly as expected. Since they are entirely based on the requirements, they are black-box. This is similar in a sense to building models of the system, rather than code, to generate tests. (They could be used to test transformations too.)
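A minimal sketch of model-based test generation from a finite-state graph (the states, events, and login scenario are invented, not from the book): one test is derived per modelled link, checking that exactly the expected nodes and transitions exist.

```python
# Requirements-level finite-state model (hypothetical): maps
# (state, event) links to the expected next state.
model = {
    ("logged_out", "login"): "logged_in",
    ("logged_in", "view"): "logged_in",
    ("logged_in", "logout"): "logged_out",
}

def sut_next_state(state, event):
    # Stand-in for the real system under test; its internals are opaque
    # to the tests, which come from the model alone (black-box).
    transitions = {
        ("logged_out", "login"): "logged_in",
        ("logged_in", "view"): "logged_in",
        ("logged_in", "logout"): "logged_out",
    }
    return transitions.get((state, event), "error")

# Generated tests: every modelled link must behave exactly as expected.
for (state, event), expected in model.items():
    assert sut_next_state(state, event) == expected
```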

Domain testing: only useful when the requirements explicitly specify boundary and/or numeric conditions and constraints on some part of the system. By carefully exploring the domain with which the SUT should interact, several effective test cases can be generated.
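One common way to explore such a domain is to test on and just off each boundary; a sketch (the age constraint and `accepts_age` SUT are hypothetical examples, not from the book):

```python
# Generate on-boundary and just-off-boundary test points for a numeric
# constraint taken from the requirements, e.g. "age must be in [18, 65]".
def boundary_points(lo, hi):
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def accepts_age(age):
    # Hypothetical SUT enforcing the specified domain.
    return 18 <= age <= 65

results = {age: accepts_age(age) for age in boundary_points(18, 65)}
assert results[17] is False and results[18] is True
assert results[65] is True and results[66] is False
```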

Syntax testing: mostly useful in command-driven applications (e.g. MS-DOS) and protocols. This technique is not useful for modern compilers, since it repeats work already done; it could, however, be applied when dealing with SQL, for instance.
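Syntax testing typically derives inputs from the input language's grammar; a toy sketch (the command grammar below is invented): valid inputs are generated from the grammar, and dirty inputs by mutating them.

```python
import random

# Toy grammar for a hypothetical command language. Nonterminals map to
# lists of productions; anything not in the grammar is a terminal.
grammar = {
    "cmd": [["verb", " ", "arg"]],
    "verb": [["COPY"], ["DEL"]],
    "arg": [["FILE1"], ["FILE2"]],
}

def generate(symbol, rng):
    if symbol not in grammar:
        return symbol
    production = rng.choice(grammar[symbol])
    return "".join(generate(s, rng) for s in production)

rng = random.Random(0)
valid = generate("cmd", rng)     # a syntactically valid command
invalid = valid[::-1]            # crude mutation: a "dirty" input
assert valid.split()[0] in ("COPY", "DEL")
```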

Finite-state testing (e.g. in menu-based applications).

Test automation is important, as manual testing has limitations (e.g.: human errors; manual execution leads to unrepeatable results if not done carefully). It also limits the number of tests that can be executed, which makes it impossible to use for stress testing, or even to run a scenario for reliability testing.

Classification of testing tools:

Coverage tools, based on structural coverage. The book mentions references to tools guides, and divides coverage tools into three categories:

1. Control-flow coverage tools: typically modify the source code of the system, adding statements whose execution reveals which segments of the code were exercised. This approach imposes some potential testing limitations, for example with regard to using environments different from the production environment.

2. Profilers: help in measuring object-code coverage over the whole system (which can't be done with a unit test except at a very small scale). They provide different coverage modes (deterministic and statistical).

3. Data-flow and other coverage tools: measure all uses, all definitions, all paths, and, for example, call-tree coverage, which is highly important for integration testing.
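The source-modification approach of the control-flow tools in category 1 can be sketched as follows (the probe mechanism, function, and segment names here are all invented for illustration):

```python
# Sketch of what a control-flow coverage tool's instrumentation does:
# probe statements are added to the source so that each executed code
# segment is recorded.
executed_segments = set()

def probe(segment_id):
    executed_segments.add(segment_id)

def sign_of(x):
    # Instrumented version of a hypothetical SUT.
    probe("entry")
    if x > 0:
        probe("then-branch")
        return "pos"
    probe("else-branch")
    return "non-pos"

sign_of(1)
# The branch the test never took shows up as a missing probe record:
missed = {"entry", "then-branch", "else-branch"} - executed_segments
assert missed == {"else-branch"}
```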

Test Execution Automation:

Execution automation should have priority, since any test design automation that generates a large number of tests is useless without it. The main ways to achieve it (the most popular form of test driver being the capture/playback tool):

1. Writing code to test code might not be the best way, since it introduces the dilemma of testing the test code, then testing the testing of the test code, and so on.

2. Test drivers (has references): appropriate structural tools can be part of the package. Drivers proceed in three stages:

a. Setup phase: loads initial prerequisites and other hardware or software elements; also initializes instrumentation and coverage tools.

b. Execution phase: performs re-initialization as necessary for each test, evaluates assertions, and captures output; resets instrumentation for every test.

c. Postmortem phase: performs proper test verification against a criterion. Reports failures by exception. Compares actual to predicted outcomes using smart comparison methods (e.g. allowing you to specify what should and shouldn't be included in the comparison, and with what tolerance). Passes execution data to the coverage tool. Confirms the path. Checks for residues and by-products. Passes control to a debug package on test failure.
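The three phases can be sketched in a toy driver (class, method names, and the tolerance-based comparison shown are illustrative assumptions, not the book's design):

```python
import math

class Driver:
    def setup(self):
        # Setup phase: load prerequisites, initialise instrumentation.
        self.coverage = []
        self.failures = []

    def execute(self, test_fn, test_input):
        # Execution phase: run one test, recording what ran and
        # capturing its output.
        self.coverage.append(test_fn.__name__)
        return test_fn(test_input)

    def postmortem(self, actual, predicted, tolerance=1e-6):
        # Postmortem phase: smart comparison with a tolerance;
        # record a failure when actual and predicted diverge.
        if not math.isclose(actual, predicted, abs_tol=tolerance):
            self.failures.append((actual, predicted))

driver = Driver()
driver.setup()
out = driver.execute(lambda x: x * 2.0, 1.5)
driver.postmortem(out, 3.0)
assert driver.failures == []
```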

3. Capture/playback: a fundamental tool in achieving the transition from manual to automated testing; could be used in both test design and execution automation.
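The capture/playback idea can be sketched as recording a manual session once, then replaying it as an automated test (the `sut` function and session are invented examples):

```python
# Capture/playback sketch: record inputs and observed outputs from a
# manual session, then replay the inputs and compare against the record.
captured = []

def sut(command):
    # Hypothetical system under test.
    return command.upper()

def capture(command):
    result = sut(command)
    captured.append((command, result))   # record input and output
    return result

# Manual session being captured:
capture("hello")
capture("quit")

# Playback: rerun recorded inputs, flagging any divergence from the
# recorded outputs.
for command, expected in captured:
    assert sut(command) == expected
```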

Test Design Automation:

  author = {Boris Beizer},
  title = {Black-Box Testing: Techniques for Functional Testing of Software and Systems},
  publisher = {John Wiley & Sons},
  year = {1995}
Bertolino, A. Software Testing Research: Achievements, Challenges, Dreams 2007 FOSE '07: 2007 Future of Software Engineering   inproceedings DOI  
Abstract: Software engineering comprehends several disciplines devoted to prevent and remedy malfunctions and to warrant adequate behaviour. Testing, the subject of this paper, is a widespread validation approach in industry, but it is still largely ad hoc, expensive, and unpredictably effective. Indeed, software testing is a broad term encompassing a variety of activities along the development cycle and beyond, aimed at different goals. Hence, software testing research faces a collection of challenges. A consistent roadmap of the most relevant challenges to be addressed is here proposed. In it, the starting point is constituted by some important past achievements, while the destination consists of four identified goals to which research ultimately tends, but which remain as unreachable as dreams. The routes from the achievements to the dreams are paved by the outstanding research challenges, which are discussed in the paper along with interesting ongoing work.
  author = {Bertolino, Antonia},
  title = {Software Testing Research: Achievements, Challenges, Dreams},
  booktitle = {FOSE '07: 2007 Future of Software Engineering},
  publisher = {IEEE Computer Society},
  year = {2007},
  pages = {85--103},
  doi = {}
Harrold, M. J. Testing: a roadmap 2000 ICSE '00: Proceedings of the Conference on The Future of Software Engineering   inproceedings DOI  
Abstract: Testing is an important process that is performed to support quality assurance. Testing activities support quality assurance by gathering information about the nature of the software being studied. These activities consist of designing test cases, executing the software with those test cases, and examining the results produced by those executions. Studies indicate that more than fifty percent of the cost of software development is devoted to testing, with the percentage for testing critical software being even higher. As software becomes more pervasive and is used more often to perform critical tasks, it will be required to be of higher quality.

Unless we can find efficient ways to perform effective testing, the percentage of development costs devoted to testing will increase significantly. This report briefly assesses the state of the art in software testing, outlines some future directions in software testing, and gives some pointers to software testing resources.

Review: Quality assurance is of escalating importance, and software testing is one activity that supports it: it executes the software being studied and gathers information about it.

Output data produced by the execution of the program with a particular test case provides a specification of the actual behavior. (Has a reference: strategic directions in software quality.)

Testing is dynamic analysis, as opposed to static analysis (e.g. model checking), which doesn't require execution. Automation and ease of use make testing a much more powerful technique than static analysis. (The software is executed in its expected environment, which gives confidence.)

Testing also has several limitations: it can't show the absence of faults, only their presence; it can't prove any qualities of the system; and results obtained from test cases can't be generalized.

Fundamental research:

- Testing Component-Based Systems:

Systems are increasingly complex and built from components, which encapsulate data and functionality and are configured at runtime. Testing can be looked at from either of two views: that of the providers or that of the users of the component (context independence vs. context dependence). An important issue affecting tools and the testing of COTS is the availability of source code. The paper presents references to several solutions for component providers, and also testing techniques for the users of components.

Research is needed to develop effective techniques for testing various aspects of components (security, dependability, etc.) and to identify the types of testing information the component user will need (e.g. coverage of the component for a particular use). This information needs to be modeled within the component and be accessible.

- Testing Based on Precode Artifacts:

The use of requirements, design, or architectural specifications can help the testing process. The paper discusses architectural specification as a precode artifact, with references to work on using it for assessing testability, and on how it can be used in integration and unit testing. More research in this area is still required.

- Testing Evolving Software:

Regression testing attempts to validate modified software and ensure that no new errors are introduced into previously tested code. There is a discussion of research on selecting the test cases to execute, but more techniques are needed to help prioritize test cases so as to maximize or minimize certain aspects such as coverage, cost, or running time, and also techniques allowing us to assess the testability of both software and test cases. Using precode artifacts we can choose the design that best allows for testability; and test cases that validate individual requirements may be more efficient to use in regression testing than a single test case that validates many requirements.
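A common coverage-maximizing prioritization is the greedy "additional coverage" scheme: repeatedly pick the test adding the most not-yet-covered statements. A sketch (test names and coverage sets below are made up):

```python
# Each test maps to the set of statement IDs its stored trace covers.
suite = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {5},
    "t4": {1, 2, 3, 4},
}

def prioritize(suite):
    remaining, covered, order = dict(suite), set(), []
    while remaining:
        # Greedy step: the test contributing the most new coverage next.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        covered |= remaining.pop(best)
        order.append(best)
    return order

assert prioritize(suite)[0] == "t4"   # broadest coverage runs first
```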

- Demonstrating the Effectiveness of Testing Techniques:

Most existing techniques focus on test case selection (behavior- or code-based). Needed: techniques to identify the classes of faults for which a particular criterion is effective, and to determine how combining two techniques could increase effectiveness.

- Establishing Effective Processes for testing:

- Using Testing Artifacts:

Artifacts include the execution traces of the software run with the test cases (which statements were executed, and the test results, for example); they can be stored and used when re-executing the tests. There are techniques based on investigating the traces to derive dynamic program slices and identify potentially faulty code; traces are also used to identify program invariants. Other techniques use coverage information to select test cases from a test suite for regression testing.
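A minimal sketch of trace-based regression test selection (the trace data and statement IDs below are invented): keep only the tests whose stored trace touches a modified statement.

```python
# Stored execution traces: test name -> set of executed statement IDs.
traces = {
    "t1": {"f.py:10", "f.py:11"},
    "t2": {"g.py:5"},
    "t3": {"f.py:11", "g.py:7"},
}
modified = {"f.py:11"}   # statements changed in the new version

# Select tests whose trace intersects the modified statements.
selected = sorted(t for t, trace in traces.items() if trace & modified)
assert selected == ["t1", "t3"]
```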

Visualization techniques for coverage exist to help maintenance.

- Other Testing techniques:

There have been a lot of techniques for generating test data automatically, but these techniques don't scale beyond the unit-testing level. There is a need to automatically generate test data for whole systems, using the code or a precode representation of the system.


Techniques need to be implemented as tools that can scale to large systems, and they should account for computational trade-offs. Automatically creating the tool seems a promising approach (similar to generating compilers).

  author = {Harrold, Mary Jean},
  title = {Testing: a roadmap},
  booktitle = {ICSE '00: Proceedings of the Conference on The Future of Software Engineering},
  publisher = {ACM},
  year = {2000},
  pages = {61--72},
  doi = {}

Created by JabRef on 07/02/2009.

Maintained by Amr Al Mallah. Last Modified: 2009/02/07 20:58:49.