15 minutes before the interview

What is software testing?
Software testing is more than just error detection.
Testing software means operating the software under controlled conditions to (1) verify that it behaves “as specified”, (2) detect errors, and (3) validate that what has been specified is what the user actually wanted.
1. Verification is the checking or testing of items, including software, for conformance and consistency by evaluating the results against pre-specified requirements. [Verification: Are we building the system right?]
2. Error detection: testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn’t or fail to happen when they should.
3. Validation looks at system correctness, i.e. the process of checking that what has been specified is what the user actually wanted. [Validation: Are we building the right system?]

Testing is the process of analyzing a software item to detect the differences between existing and required conditions (defects/errors/bugs) and to evaluate the features of the software item.

Note: The purpose of testing is verification, validation, and error detection in order to find problems, and the purpose of finding those problems is to get them fixed.

Why can’t testing ensure quality?

Testing in itself cannot ensure the quality of software; it can only give a certain level of assurance (confidence) in the software. On its own, the only thing testing proves is that, under specific controlled conditions, the software functioned as expected by the test cases executed.

Quality: Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. Each type of ‘customer’ will have their own view of ‘quality’: the accounting department might define quality in terms of profits, while an end user might define quality as user-friendly and bug-free.


Software Quality Assurance is the process of monitoring and improving all activities associated with software development, from requirements gathering and design through coding, review, testing, and implementation. It involves the entire software development process. Unlike testing, which is mainly a ‘detection’ process, QA is ‘preventative’: it aims to ensure quality in the methods and processes, and therefore to reduce the prevalence of errors in the software.

Quality Assurance and Development of a product are parallel activities.
The role of quality assurance is a superset of testing. Its mission is to minimize the risk of project failure.

Difference between QA and Testing

Testing means quality control.
Quality control measures the quality of a product.
Quality Assurance measures the quality of processes used to create a quality product.

Mission of Testing

The mission of the testing team is not merely to perform testing, but to help minimize the risk of product failure.
Testers look for manifest problems in the product, potential problems, and the absence of problems.

It is important to recognize that testers are not out to “break the code”. We are not out to embarrass or complain, just to inform. We are human meters of product quality.

Testing Scope
Functional Testing
Interface Testing
Acceptance Testing
Final Acceptance Testing

Usability testing is defined as “in system testing, testing which attempts to find any human-factor problems”.
More simply, usability testing is testing the software from the user’s point of view.

Why should usability testing be included in the testing cycle?
QA has a certain responsibility for usability testing. The main reason is ‘perspective differences’: the various teams involved in developing the software each see it from a different viewpoint.
Ex: When a totally new application is being developed, how many members of the test team have first-hand experience with, or expert knowledge of, the underlying business logic and processes? Usually very few.

Even if the testers are indeed experts in their area, they may miss the big picture, so usability testing is a sub-specialty that is often best not left to the average tester. Specific personnel should be responsible for usability testing.

How to approach and implement usability testing?
The best way to implement usability testing is twofold:
first from a design and development perspective, and second from a testing perspective.

From a testing viewpoint, usability testing should be added to the testing cycle by including a formal “User Acceptance Test” (UAT): when the software is ready for release, actual users sit down with it and attempt to perform “normal” working tasks. User testers must always take the customer’s point of view in their testing. UAT is an excellent exercise because it gives you the users’ initial impression of the system, tells you how readily they will take to it, and shows whether the end product matches their expectations, so there are fewer surprises.

Benefits of Usability Testing
It makes the software more user-friendly. The end result will be:
Better quality software.
Software is easier to use.
Users more readily accept software.
Shortens the learning curve for new users.

Classification of Errors by Severity
The definitions of the severity levels themselves change depending on the type of system.
Ex: A catastrophic defect in a nuclear system means the fault can result in death or environmental harm. A catastrophic defect in a database system means the fault can cause loss of valuable data.
Therefore the system itself determines the severity of a defect based on the context for which the defect applies.

Five-Level Error Classification Method

Catastrophic: Defects that could cause disastrous consequences for the system in question. Ex: Critical loss of data, system availability, security, safety, etc.

Severe: Defects that could cause very serious consequences for the system in question. Ex: A function is severely broken, cannot be used and there is no workaround.

Major: Defects that could cause significant consequences for the system in question; a defect that needs to be fixed, but a workaround exists.
Ex: Losing data from a serial device during heavy loads.
A function is badly broken but a workaround exists.

Minor: Defects that could cause small or negligible consequences for the system in question; easy to recover from, and a workaround exists.
Ex: Misleading error messages.
Displaying output in a font or format other than what the customer desired.

No Effect: Trivial defects that cause no negative consequences for the system in question. Such defects normally produce no erroneous outputs.
Ex: Simple typos in documentation.
Bad layout or misspellings on the screen.
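
A classification like this is often encoded directly in a defect tracker. A minimal sketch in Python (the level names, values, and wording are illustrative, not tied to any particular tool):

from enum import IntEnum

class Severity(IntEnum):
    CATASTROPHIC = 1  # disastrous consequences: critical loss of data, availability, security, safety
    SEVERE = 2        # function severely broken, cannot be used, no workaround
    MAJOR = 3         # significant consequences; must be fixed, but a workaround exists
    MINOR = 4         # small or negligible consequences; easy to recover, workaround exists
    NO_EFFECT = 5     # trivial; no erroneous outputs (typos, layout issues)

print(Severity.MAJOR.name, Severity.MAJOR.value)  # prints: MAJOR 3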

Regression Testing
Regression testing is selective retesting of the system, with the objective of ensuring that bug fixes work and that those fixes have not caused any unintended effects elsewhere in the system.

Types of Regression Testing
There are two types of regression testing: “final regression testing”, done to validate the gold master builds, and “regression testing”, done between system test cycles to validate the product and retest failed test cases.
Normal regression testing can use builds for exactly the period needed to execute the test cases; however, an unchanged build is highly recommended for each cycle of regression testing.

What kinds of testing should be considered?
Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.
Unit testing - the most 'micro' scale of testing: to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional testing - black-box type testing geared to functional requirements of an application. Testers should do this type of testing. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).
System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

End-to-End testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Regression testing - retesting after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.

Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
User acceptance testing - determining if software is satisfactory to an end-user or customer.

Comparison testing - comparing software weaknesses and strengths to competing products.

Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
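
As an illustrative sketch of the mutation-testing idea (hypothetical functions, not a real mutation tool): deliberately change a copy of the code and check whether the existing test data detects the change.

def is_adult(age):
    # Original implementation under test.
    return age >= 18

def is_adult_mutant(age):
    # Mutant: '>=' deliberately changed to '>'.
    return age > 18

def run_tests(func):
    # Existing test data/cases, re-run unchanged against the original and the mutant.
    cases = [(17, False), (18, True), (30, True)]
    return all(func(age) == expected for age, expected in cases)

assert run_tests(is_adult)             # the tests pass on the original code
assert not run_tests(is_adult_mutant)  # the tests detect ('kill') the mutant, so the test set is useful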

Objectives of testing:
Testing is the process of executing a program with the intent of finding an error.
1) A good test case is one that has a high probability of finding an as yet undiscovered error.

2)A successful test is one that uncovers an as yet undiscovered error.

3)Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum effort.

A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications.
The data collected through testing can also provide an indication of the software’s reliability and quality.

White Box Testing
White Box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that:
1) Guarantee that all independent paths within a module have been exercised at least once.
2) Exercise all logical decisions on their true and false sides.
3) Execute all loops at their boundaries and within their operational bounds.
4) Exercise internal data structures to ensure their validity.

Nature of Software defects
Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed.
General processing tends to be well understood while special case processing tends to be prone to errors.
Our unconscious assumptions about control flow and data lead to design errors that can only be detected by path testing.
Typographical errors are random.

Basis path testing
This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths.
Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

Flow Graphs
Flow Graphs can be used to represent control flow in a program and can help in the derivation of the basis set.
Node represents one or more procedural statements.
Edges between the nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements.
A region in a flow graph is an area bounded by edges and nodes.
Each node that contains a condition is called a predicate node.

Cyclomatic Complexity is a metric that provides a quantitative measure of the logical complexity of a program.
It defines the number of independent paths in the basis set and thus provides an upper bound for the number of tests that must be performed.

Cyclomatic Complexity V(G) for a flow graph G is equal to:
1) The number of regions in the flow graph.
2) V(G) = E - N + 2, where E is the number of edges and N is the number of nodes.
3) V(G) = P + 1, where P is the number of predicate nodes.
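
For instance, consider this small illustrative function (not from the source) with two decisions; its flow graph gives the same V(G) by all three formulas:

def classify(x):
    if x < 0:            # predicate node 1
        sign = "negative"
    else:
        sign = "non-negative"
    if x % 2 == 0:       # predicate node 2
        parity = "even"
    else:
        parity = "odd"
    return sign, parity

# Flow graph for classify(): N = 7 nodes, E = 8 edges, P = 2 predicate nodes, 3 regions.
# V(G) = number of regions       = 3
# V(G) = E - N + 2 = 8 - 7 + 2   = 3
# V(G) = P + 1     = 2 + 1       = 3
# So a basis set for classify() contains at most 3 independent paths.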

Basis Set
An independent path is any path through a program that introduces at least one new set of processing statements (must move along at least one new edge in the path). The basis set is not unique. Any number of different basis sets can be derived for a procedural design.

Deriving Test Cases
From the design or source code, derive a flow graph.
Determine the Cyclomatic Complexity of this flow graph.
Note: even without the flow graph, V(G) can be determined by counting the number of conditional statements in the code.
Determine a basis set of linearly independent paths.
Note: predicate nodes are useful for determining the necessary paths.
Prepare test cases that will force execution of each path in the basis set.
Note: each test case is executed and its actual result compared to the expected result.
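
Continuing the illustrative classify() example from the cyclomatic complexity sketch above, one possible basis set and the test cases that force each path might look like this (a sketch only; the basis set is not unique):

# Condensed version of classify() from the earlier sketch (same two decisions).
def classify(x):
    sign = "negative" if x < 0 else "non-negative"   # decision 1
    parity = "even" if x % 2 == 0 else "odd"         # decision 2
    return sign, parity

# Three linearly independent paths (V(G) = 3) and a test case forcing each one:
def test_path_negative_even():
    assert classify(-2) == ("negative", "even")      # baseline path

def test_path_nonnegative_even():
    assert classify(4) == ("non-negative", "even")   # adds the 'else' edge of the first decision

def test_path_negative_odd():
    assert classify(-3) == ("negative", "odd")       # adds the 'else' edge of the second decision

# Each test case is executed and its actual result compared to the expected result.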

Automating Basis Set Derivation
The derivation of the flow graph and the set of basis paths are amenable to automation. A software tool to do this can be developed using a data structure called a graph matrix.
A graph matrix is a square matrix whose size is equivalent to the number of nodes in the flow graph.
Each row and column corresponds to a particular node, and the matrix entries correspond to connections (edges) between the nodes.
By adding link weight to each matrix entry, more information about the control flow can be captured. In its simplest form a link weight is 1 if an edge exists and 0 if an edge does not exist.
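
For the classify() flow graph sketched earlier, a graph matrix with simple 0/1 link weights could be built like this (illustrative only):

# Graph matrix for the classify() flow graph (7 nodes, numbered 1..7).
# matrix[i][j] = 1 if an edge runs from node i+1 to node j+1, else 0 (the simplest link weight).
N = 7
matrix = [[0] * N for _ in range(N)]
for src, dst in [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (4, 6), (5, 7), (6, 7)]:
    matrix[src - 1][dst - 1] = 1

edges = sum(sum(row) for row in matrix)                 # E = 8
predicates = sum(1 for row in matrix if sum(row) > 1)   # nodes with more than one outgoing edge = 2
print("V(G) =", edges - N + 2, "=", predicates + 1)     # both formulas give 3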

Other types of link weights can be represented as:
1) The probability that an edge will be executed.
2) The processing time expended during link traversal.
3) The memory required during link traversal.
4) The resources required during link traversal.

Loop Testing
This white box technique focuses exclusively on the validity of loop constructs.
1) Simple loops
2) Nested loops
3) Concatenated loops
4) Unstructured loops

Simple loops
The following tests should be applied to simple loops where n is the maximum no of allowable passes.
1) Skip the loop entirely.
2) Only pass once through the loop.
3) m passes through the loop where m < n.
4) n-1, n, n+1 passes through the loop.

Nested loops
Testing nested loops cannot simply extend the simple-loop technique, since this would result in a geometrically increasing number of test cases.
One approach for nested loops:
1) Start at the innermost loop; set all other loops to minimum values.
2) Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3) Work outward, conducting tests for the next loop while keeping all other outer loops at their minimums and other nested loops at typical values.
4) Continue until all loops have been tested.

Concatenated loops
Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (loop counter for one is loop counter for another) then the nested approach can be used.

Unstructured loops
This type of loop should be redesigned, not tested!
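
Applying the simple-loop schedule above to a hypothetical loop with a maximum of n allowable passes, the pass counts to exercise might be sketched as:

n = 100   # hypothetical maximum number of allowable passes through the loop
m = 7     # some typical value with m < n

pass_counts = [
    0,                 # skip the loop entirely
    1,                 # only one pass through the loop
    m,                 # m passes, where m < n
    n - 1, n, n + 1,   # exercise the loop at and just around its operational bound
]

def process(items):
    # Hypothetical loop under test: sums at most n items.
    total = 0
    for count, item in enumerate(items):
        if count >= n:
            break
        total += item
    return total

for passes in pass_counts:
    assert process([1] * passes) == min(passes, n)   # the n+1 case must not exceed n passes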

Other white box techniques
Condition testing - exercises the logical conditions in a program.
Data flow testing - selects test paths according to the locations of definitions and uses of variables in a program.

Note: White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages.

Black Box Testing
Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing.

This type of testing attempts to find errors in the following categories:
1) Incorrect or missing functions.
2) Interface errors.
3) Errors in data structures or external database access.
4) Performance errors.
5) Initialization and termination errors.

Tests are designed to answer the following questions.
1) How is the function’s validity tested?
2) What classes of input will make good test cases?
3) Is the system particularly sensitive to certain input values?
4) How are the boundaries of a data class isolated?
5) What data rates and data volume can the system tolerate?
6) What effect will specific combinations of data have on system operation?

Test Cases should be derived which:
1) Reduce the number of additional test cases that must be designed to achieve reasonable testing.
2) Tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.

Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived.
Equivalence Partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.

Equivalence classes may be defined according to the following guidelines:
1) If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2) If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3) If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4) If an input condition is Boolean, one valid and one invalid equivalence class are defined.
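
For example, assume a hypothetical input field that accepts an age in the range 18 to 60; guideline 1 then gives one valid and two invalid equivalence classes, each represented by one test value (a sketch, with an invented is_valid_age routine):

def is_valid_age(age):
    # Hypothetical validation routine under test: accepts ages 18..60.
    return 18 <= age <= 60

equivalence_classes = {
    "valid: 18 <= age <= 60": (35, True),    # one representative value per class
    "invalid: age < 18": (10, False),
    "invalid: age > 60": (75, False),
}

for description, (representative, expected) in equivalence_classes.items():
    assert is_valid_age(representative) == expected, description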

Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values.
It complements equivalence partitioning: rather than focusing solely on input conditions, it selects test cases at the edges of a class. BVA also derives test cases from the output domain.

Boundary Value Analysis guidelines include:
1) For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
2) If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and just below these limits.
3) Apply the above two guidelines for output.
4) If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundaries.
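
Continuing the hypothetical age field (range 18 to 60), boundary value analysis adds test values at and just beyond each edge of the class:

def is_valid_age(age):
    # Same hypothetical validation routine as in the previous sketch (accepts 18..60).
    return 18 <= age <= 60

a, b = 18, 60
bva_values = [a - 1, a, a + 1, b - 1, b, b + 1]   # just below, at, and just above each boundary

for age in bva_values:
    expected = a <= age <= b
    assert is_valid_age(age) == expected, age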

Cause Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions.

There are four Steps:
1) Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2) A cause effect graph is developed.
3) The graph is converted to a decision table.
4) Decision table rules are converted into test cases.
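
A tiny hypothetical illustration: two causes (valid user ID, valid password) and two effects (grant access, show error) reduce to a four-rule decision table, and each rule becomes a test case. The login() function below is invented for the example:

def login(valid_id, valid_password):
    # Hypothetical module under test.
    return "grant access" if valid_id and valid_password else "show error"

decision_table = [
    # (cause C1: valid user ID, cause C2: valid password, expected effect)
    (True,  True,  "grant access"),
    (True,  False, "show error"),
    (False, True,  "show error"),
    (False, False, "show error"),
]

for valid_id, valid_password, expected in decision_table:
    assert login(valid_id, valid_password) == expected   # each rule becomes one test case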

What's a 'test case'?
1) A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
2) Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
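
As a sketch only, the particulars listed above could be captured in a simple record like this (field names and values are illustrative):

test_case = {
    "id": "TC-001",
    "name": "Login with valid credentials",
    "objective": "Verify that a registered user can log in",
    "conditions_setup": "A registered user account exists and the application is reachable",
    "input_data": {"user_id": "demo_user", "password": "valid-password"},
    "steps": [
        "Open the login page",
        "Enter the user ID and password",
        "Click the Login button",
    ],
    "expected_result": "The user is taken to the home page",
}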

What is a ‘test script’?
A test script contains the detailed sequence of manual and automated actions, as well as the setup information to execute a test case.
A test script contains sections for:
1) Test setup
2) Actions or Procedures to complete
3) Expected results
4) Actual results
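
A minimal sketch of those four sections as an automated script, written pytest-style with invented names (assuming pytest is available):

import pytest

class Calculator:
    # Stand-in for the application under test (illustrative only).
    def add(self, a, b):
        return a + b

@pytest.fixture
def calculator():
    # Test setup
    return Calculator()

def test_addition(calculator):
    # Actions or procedures to complete
    actual_result = calculator.add(2, 3)
    # Expected result, compared against the actual result
    expected_result = 5
    assert actual_result == expected_result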

What if the software is so buggy it can’t really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.) managers should be notified, and provided with some documentation as evidence of the problem.

Why does software have bugs?
Miscommunication or no communication: unclear or missing specifics of what an application should or shouldn’t do (the application’s requirements).
Programming errors: programmers, like anyone else, can make mistakes.
Poorly documented code: It’s tough to maintain and modify code that is badly written or poorly documented. The result is bugs.
Changing requirements: The customer may not understand the effects of changes, or may understand and request them anyway – redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc.
Software development tools: visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop testing are:
1) Deadlines (release deadlines, testing deadlines, etc.).
2) Test cases completed with certain percentage passed.
3) Test budget depleted.
4) Coverage of code / functionality / requirements reaches a specified point.
5) Bug rate falls below a certain level.
6) Beta or alpha testing period ends.

What if there is not enough time for thorough testing?
Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. Considerations can include:
1) Which functionality is most important to the project’s intended purpose?
2) Which functionality is most visible to the user?
3) Which functionality has the largest safety impact?
4) Which functionality has the largest financial impact on users?
5) Which aspect of the application is most important to the customer?
6) Which aspects of the application can be tested early in the development cycle?
7) Which parts of the code are most complex, and thus most subject to errors?
8) Which parts of the application were developed in rush or panic mode?
9) Which aspects of similar / related previous projects caused problems?
10) Which aspects of similar / related previous projects had large maintenance expenses?
11) What kinds of tests could easily cover multiple functionalities?

Ten commandments for software testing:
1) Test early and test often.
2) Integrate the application development and testing life cycles. You will get better results and you won’t have to mediate between two armed camps in your IT shop.
3) Formalize a testing methodology; you will test everything the same way and get uniform results.
4) Develop a comprehensive test plan. It forms the basis for the testing methodology.
5) Use both static and dynamic testing.
6) Define your expected results.
7) Understand the business reason behind the application. You will write a better application and better testing scripts.
8) Use multiple levels and types of testing (regression, systems, integration, stress and load).
9) Review and inspect the work; it will lower costs.
10) Don’t let programmers check their own work; they’ll miss their own errors.

What are the five common problems in the software development process?
Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
Unrealistic schedule - if too much work is crammed into too little time, problems are inevitable.
Inadequate testing - no one will know whether the program is any good until the customer complains or the system crashes.
Featuritis - requests to pile on new features after development is underway; extremely common.
Miscommunication - if developers don’t know what’s needed or customers have erroneous expectations, problems are guaranteed.

What are the five common solutions to software development problems?
Solid requirements
Realistic schedule
Adequate testing
Stick to initial requirements as much as possible
Communication
