
Monday, October 6, 2008

Software Testing Interview Questions 4

How do you ensure that everything was covered in your test cases?

Given the following users, groups and tasks:

User 1, Group 1: Task 1, Task 2
User 2, Group 2: Task 3, Task 4
User 3, Group 3: Task 5, Task 6

Write the top 5 security test cases for the above.

How do you manage your test cases?

What do you do if you find a bug when the product release is one hour away?

What is your approach when you find 10 Sev-1 bugs in 50 test cases?

What do you do if you have 10 Sev-1 bugs, 20 Sev-2 bugs and 5 Sev-3 bugs in 50 test cases?

Write a test plan / test cases for the Hotmail login screen.

If MySpace asks you to carry out performance testing for its website, what inputs do you ask for?

Can you combine 10 GUI Map files into a single GUI Map file in WinRunner?

User A is changing a GUI Map file. Can User B see the modifications made by User A?

Does the RapidTest Script Wizard work for Web testing?

How do you start performance testing?

What are the outputs of performance testing?

What are the top 4 challenges you have faced in your career?

What is delegation?

Write a test strategy for a ballpoint pen.

Write the top 5 test cases for a digital watch.

Write the top 5 test cases for an elevator.

What are the challenges for a tester?

What is exception handling in WinRunner?

How does the Recovery Manager work in WinRunner?

How do you maintain scripts in WinRunner when your application changes repeatedly?

A LoadRunner transaction has a rendezvous point for 100 users, and 50 users have not yet reached it. What do the 50 users who have already reached the rendezvous point do in the meantime?

What are the components of LoadRunner?

What are the contents of a test plan?

What are Sev-1 bugs in a calculator?

What are Sev-4 bugs in a calculator?

What is a traceability matrix?

What do you do if you are not able to connect to SQL Server?

What are the entry criteria for testing?

What are the exit criteria?


Software Testing Interview Questions 5

1. What is the Windows Registry and what is its purpose?
2. What is the command to open the Windows Registry?
3. What is IIS?
4. What is XML?
5. What is the difference between well-formed XML and XML?
6. Automation tools: WinRunner
7. How can you carry out load and stress testing on a single ATM machine?
8. What are the ACID properties?
9. What is replication? What are the various kinds of replication?
10. Write a SQL query to get the last day of the previous month.
11. How would you test a calculator?
12. How would you test a Coke machine?
13. What are the test deliverables?
14. What are the different components or folders in the Registry (regedit), and what are their purposes and usage?
15. Explain web servers, web services, IIS and their security constraints.
16. What is the log file in SQL Server?
17. What are the tuning techniques for SQL Server?
18. What is the use of the Query Profiler?
19. When testing a form that saves data to the database, how can you verify correctness without any SRS or client specifications?
20. What is the difference between Remoting and Web services? Describe each briefly.
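The list above gives no answers, but the date arithmetic behind question 10 can be sketched in Python (a SQL Server answer would use functions such as DATEADD; this is just the underlying logic):

```python
from datetime import date, timedelta

def last_day_of_previous_month(today: date) -> date:
    # The day before the first of the current month is,
    # by definition, the last day of the previous month.
    return today.replace(day=1) - timedelta(days=1)

print(last_day_of_previous_month(date(2008, 10, 6)))  # 2008-09-30
```

The same trick handles leap years for free: from any day in March 2008 it returns 2008-02-29.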

Software Testing Interview Questions 6

  1. Explain SDLC and STLC.

  2. When will you start the testing process?

  3. What are a test plan and a test case?

  4. What is the difference between integration testing and system testing?

  5. If an ATM machine is given to you for testing, how will you approach testing it, and how will you verify that the right information is updated in the database?

  6. If you are given a screen whose entered data is saved into one table, what types of test cases will you write, and how will you verify in the database that the entered data is updated properly?

  7. What are performance testing, stress testing and load testing?

  8. What is the hardest test case you have written?

  9. What are the top-priority bugs you found in your previous projects?

  10. Explain functional testing.

  11. What are testing methodologies?

Software Testing Interview Questions 7

Q. What is impact analysis? How do you perform impact analysis in a project?

A: Impact analysis means that while performing regression testing, we inspect not only that the bug fixes work correctly, but also that the other components affected by those fixes still work as per their requirements and have not been disturbed.

Q. How do you test a web application by manual testing?

A: Web testing.
When testing a website, the following scenarios should be considered.

Functionality
Performance
Usability
Server side interface
Client side compatibility
Security

Functionality:
Functional testing of a website should cover the following.

Links
Internal links
External links
Mail links
Broken links
Forms
Field validation
Functional chart
Error messages for wrong input
Optional and mandatory fields
Database
Test database integrity.
Cookies
Test on the client side, in the temporary internet files.

Performance:

Performance testing can be carried out to assess the website's scalability, or to evaluate performance in an environment of third-party products such as servers and middleware being considered for purchase.

Connection speed:
Test over various networks such as dial-up, ISDN, etc.

Load

What is the number of users per unit time? How many users can the website handle at a single time?
Check peak loads and how the system behaves.
Check large amounts of data accessed by a user.

Stress

Continuous load
Performance of memory, CPU, file handling, etc.

Usability:

Usability testing is the process by which the human-computer interaction characteristics of a system are measured and flaws are identified for correction. Usability can be defined as the degree to which a given piece of software assists the person sitting at the keyboard in accomplishing a task, as opposed to becoming an additional impediment to such success. The broad goal of usable systems is often assessed using several criteria:

Ease of learning
Navigation
Subjective user satisfaction
General appearance

Server side interface:
In web testing, the server-side interface should be tested.
This is done by verifying that communication takes place properly.
Compatibility of the server with software, hardware, network and database should be tested.
Client-side compatibility is also tested on various platforms, using various browsers, etc.

Security:

The main reason for testing the security of a web application is to identify potential vulnerabilities and then repair them.
The following types of testing are described in this section:
Network Scanning
Vulnerability Scanning
Password Cracking
Log Review
Integrity Checkers
Virus Detection

Performance Testing

Performance testing is a rigorous evaluation of a working system under realistic conditions, measuring values such as success rate, task time and user satisfaction against requirements. The objective of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing.

To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough that this process can proceed smoothly. A clearly defined set of expectations is essential for meaningful performance testing.
For example, for a Web application, you need to know at least two things:
expected load in terms of concurrent users or HTTP connections
acceptable response time
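Those two expectations can be sketched as a tiny load driver. The snippet below is a simplified stand-in, not a real tool: `handle_request` is a hypothetical workload substituting for an HTTP call, and the thread pool is sized to the expected concurrent-user load so measured response times can be compared against the acceptable threshold:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> float:
    """Hypothetical unit of work standing in for one HTTP request."""
    start = time.perf_counter()
    sum(i * i for i in range(10_000))  # simulated server-side work
    return time.perf_counter() - start

def run_load_test(concurrent_users: int, requests_per_user: int) -> list:
    # Fire all requests from a pool sized to the expected concurrent load.
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(handle_request)
                   for _ in range(concurrent_users * requests_per_user)]
        return [f.result() for f in futures]

latencies = run_load_test(concurrent_users=10, requests_per_user=5)
p95 = sorted(latencies)[int(len(latencies) * 0.95)]
print(f"95th percentile response time: {p95:.4f}s")
```

A real run would replace `handle_request` with an HTTP client call and compare `p95` against the agreed acceptable response time.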

Load testing:

Load testing is generally defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing, or longevity/endurance testing.
Examples of volume testing:
testing a word processor by editing a very large document
testing a printer by sending it a very large job
testing a mail server with thousands of users' mailboxes
Examples of longevity/endurance testing:
testing a client-server application by running the client in a loop against the server over an extended period of time

Goals of load testing:

Expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc. Ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.
Although performance testing and load testing can seem similar, their objectives are different. Performance testing uses load testing techniques and tools for measurement and benchmarking purposes and uses a variety of load levels, whereas load testing operates at a predefined load level, the highest load the system can accept while still functioning correctly.

Stress testing:

Stress testing is used to determine the stability of a given system or entity. It tests the software under abnormal conditions and attempts to find the limits at which the system fails, through abnormal quantity or frequency of inputs.

Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing).
The main idea is to make sure that the system fails and recovers gracefully, a quality known as recoverability.
The goal is not simply to break the system, but to observe how it reacts to failure. Stress testing checks the following:
Does it save its state or does it crash suddenly?
Does it just hang and freeze or does it fail gracefully?
Is it able to recover from the last good state on restart?

Compatibility Testing

Testing to ensure compatibility of an application or website with multiple browsers, operating systems and hardware platforms. Different versions, configurations, display resolutions and Internet connection speeds can all affect the behavior of the product and introduce costly and embarrassing bugs. We test for compatibility using real test environments.

That is, testing how well the system performs in a particular software, hardware or network environment. Compatibility testing can be carried out manually or driven by an automated functional test suite. The idea of compatibility testing is to uncover issues in the product's interaction with other software as well as hardware.

The product's compatibility is evaluated by first identifying the hardware/software/browser components that the product is designed to support. Then a hardware/software/browser matrix is designed that indicates the configurations on which the product will be tested. Then, with input from the client, a testing script is designed that will be sufficient to evaluate compatibility between the product and the matrix. Finally, the script is executed against the matrix, and any anomalies are investigated to determine exactly where the incompatibility lies.


Some typical compatibility tests include testing your application:
On various client hardware configurations
Using different memory sizes and hard drive space
On various Operating Systems
In different network environments
With different printers and peripherals (e.g. Zip drives, USB devices, etc.)

Q. Which comes first: test strategy or test plan?

A: The test strategy comes first; it is the high-level document, prepared in advance. Testing starts from the test strategy, and based on it the test lead prepares the test plan.

Q. From a tester's point of view, what is the difference between a web-based application and a client-server application?

A: From a tester's point of view:
1) A web-based application (WBA) is a 3-tier application: browser, server and back end. A client-server application (CSA) is a 2-tier application: front end and back end.
2) In a WBA the tester tests for script errors shown on the page, such as JavaScript or VBScript errors. In a CSA the tester does not test for any script errors.
3) In a WBA, a change made once is reflected on every machine, so the tester has less to test. In a CSA, the application must be installed on every machine each time, so some machines may have problems that require hardware testing as well as software testing.

Q. What is the significance of doing regression testing?

A: To check the bug fixes, and to confirm that those fixes have not disturbed other functionality.

Regression testing guarantees that newly added functionality, modified existing functionality or a developer's bug fix does not introduce any new bug or side effect, and ensures that already-passed test cases do not start failing.

Q. What are the different ways to check a date field on a website?

A: There are different checks, such as:
1) Check the field width for minimum and maximum length.
2) If the field takes only numeric values, check that it accepts numeric input and nothing else.
3) If it takes a date or time, check other formats as well.
4) In the same way, check it for character and alphanumeric input.
5) Most significantly, clicking and hitting the Enter key may sometimes produce a JavaScript error on the page, which is a major fault.
6) Check the field for a null value.
Etc.

The date field can be checked in different ways.

Positive testing: enter the date in the given format.

Negative testing: enter the date in an unacceptable format. For example, a date like 30/02/2006 should produce an error message. Also check numeric versus text input.
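A minimal sketch of both cases, using Python's `datetime.strptime` and assuming the field expects DD/MM/YYYY format:

```python
from datetime import datetime

def is_valid_date(value: str, fmt: str = "%d/%m/%Y") -> bool:
    """Return True if the string parses as a real calendar date."""
    try:
        datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False

# Positive test: a date in the given format is accepted.
assert is_valid_date("28/02/2006")
# Negative test: 30/02/2006 is not a real date and must be rejected.
assert not is_valid_date("30/02/2006")
# Negative test: text input is rejected.
assert not is_valid_date("abc")
```

The same validator covers both positive and negative test inputs from a single point of truth.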

Software Testing Interview Questions 8

Q. What is a high-severity, low-priority bug?

A: A page that is rarely accessed, or an activity that is executed rarely, but that outputs some essential data wrongly or corrupts the data, would be a bug of high severity and low priority.

Q. If the project wants to release in 3 months, what type of risk analysis do you do in the test plan?

A: Use risk analysis to decide where testing should be focused. Since it is rarely possible to test every portion of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. It involves judgment skills, common sense and experience.

Considerations can include:

• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
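One common way to turn considerations like these into a test ordering is a likelihood-times-impact score per feature area. The 1-5 scale and the sample entries below are illustrative assumptions, not from the original post:

```python
# Each feature is scored 1-5 for likelihood of failure and impact of failure.
features = {
    "payment processing": {"likelihood": 3, "impact": 5},
    "user login":         {"likelihood": 2, "impact": 4},
    "help pages":         {"likelihood": 2, "impact": 1},
}

# Risk exposure = likelihood x impact; test the riskiest areas first.
ranked = sorted(
    features,
    key=lambda f: features[f]["likelihood"] * features[f]["impact"],
    reverse=True,
)
print(ranked)  # ['payment processing', 'user login', 'help pages']
```

The scores themselves come from the judgment, common sense and experience the answer mentions; the arithmetic only makes the ordering explicit.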

Q. Test cases for IE 6.0 (Internet Explorer 6.0)?

A: Test cases for Internet Explorer 6.0:
1) First, the installation part: does it work with all versions of Windows, alongside Netscape and other software? In other words, IE must be checked against all relevant hardware and software.
2) Second, the text part: all text should appear in a regular, smooth manner.
3) Third, the images part: all images should appear in a regular, smooth manner.
4) URLs must load properly.
5) If another language is used, the URL should accept characters other than the normal character set.
6) Does it work with cookies consistently or not?
7) Does it work with different scripts such as JScript and VBScript?
8) Does HTML code render on it or not?
9) Does troubleshooting work or not?
10) Do all the toolbars work with it or not?
11) If a page has links, what are the maximum and minimum limits for them?
12) Test installing Internet Explorer 6 with the Norton Protected Recycle Bin enabled.
13) Does the uninstallation process work?
14) Last but not least, test the security features of IE 6.0.

Q. Where are you involved in the testing life cycle, and what types of tests do you perform?

A: Normally test engineers are involved in the entire test life cycle, i.e. test planning, test case preparation, execution and reporting. Typical test types are system testing, regression testing, ad-hoc testing, etc.

Q. What is the testing environment in your company? In other words, how does the testing process start?

A: The testing process flows as follows:
Quality assurance unit
Quality assurance manager
Test lead
Test engineer

Q. Who prepares the use cases?

A: In most companies, apart from small ones, the business analyst prepares the use cases. In a small company, the business analyst prepares them along with the team lead.

Q. What methods have you used to develop test cases?

A: Test engineers usually use four types of methodologies:
1. Boundary value analysis
2. Equivalence partitioning
3. Error guessing
4. Cause-effect graphing
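The first two techniques are mechanical enough to sketch in code. Assuming a field that accepts integers from 1 to 100 (an invented example), boundary value analysis picks values at and around each boundary, and equivalence partitioning picks one representative per class:

```python
def boundary_values(lo: int, hi: int) -> list:
    """Classic BVA picks: just below, at, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_reps(lo: int, hi: int) -> list:
    """One representative each for below-range, in-range, above-range."""
    return [lo - 10, (lo + hi) // 2, hi + 10]

print(boundary_values(1, 100))   # [0, 1, 2, 99, 100, 101]
print(equivalence_reps(1, 100))  # [-9, 50, 110]
```

Error guessing and cause-effect graphing, by contrast, depend on tester experience and specification analysis rather than a formula.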

Q. Why do we call it a regression test rather than a retest?

A: Testing whether a defect is closed or not is retesting. Here we also check the impact of the fix on the rest of the application; "regression" implies testing repeated across builds.

Q. Is automated testing better than manual testing? If so, why?

A: Automated testing and manual testing have their own advantages as well as disadvantages.

Automation advantages: it boosts the efficiency and speed of the testing process, and it is reliable and flexible.

Automation disadvantages: the tools must be compatible with your development or deployment tools; a lot of time is needed at first; and if the requirements change endlessly, automation is not suitable.

Manual testing is suitable when the requirements change endlessly; only once the build is stable under manual testing do we go for automation.

Manual disadvantages:
Time consuming
Some types of testing cannot be done manually, e.g. performance testing.

Q. What is the exact difference between a product and a project? Give an example.

A: A project is developed for a specific client, and its requirements are defined by the client. A product is developed for the market, and its requirements are defined by the company itself by conducting a market survey.

Example:
Project: a shirt we have stitched by a tailor to our own specifications.
Product: a ready-made shirt, where the company assumes particular measurements and manufactures the product accordingly.
A mainframe is an example of a product.
A product has many more versions, whereas a project has fewer versions, depending on change requests and enhancements.

Q. Define brainstorming and cause-effect graphing, with examples.

A: Brainstorming:
A learning technique involving open group discussion intended to expand the range of available ideas.
OR
A meeting to generate creative ideas. At PEPSI Advertising, daily, weekly and bi-monthly brainstorming sessions are held by various work groups within the firm. Our monthly I-Power brainstorming meeting is attended by the entire agency staff.
OR
Brainstorming is a highly structured process to help generate ideas. It is based on the principle that you cannot generate and evaluate ideas at the same time. To use brainstorming, you must first gain agreement from the group to try brainstorming for a fixed interval (e.g. six minutes).

Cause-effect graphing (CEG):
A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases by logically relating causes to effects. It has a beneficial side effect of pointing out incompleteness and ambiguities in specifications.

Q. Since severity tells you which bug needs to be solved, what is the need for priority?

A: Severity reflects the seriousness of the bug, whereas priority refers to which bug should be rectified first. Normally, if the severity is high, the priority is high as well, but not always.

Severity is decided by the tester, whereas priority is decided by the developers. Which bug to address first is known through priority, not severity; how serious the bug is, is known through severity.

Severity is the impact of the bug on the application; priority is the weight given to resolving it. A high-severity bug does not always have high priority, and a high-priority bug does not always have high severity, so we need both severity and priority.

Q. What do you do if a bug you found is not accepted by the developer, who says it is not reproducible? (Note: the developer is at the onsite location.)

A: We check the condition again with all its causes, attach screenshots with strong reasons, clarify the matter with the project manager, and also clarify it with the client if they contact us.

Sometimes a bug is not reproducible because of a different environment: the development team may be using one environment while you are using a different one. In this situation, check the environment specified in the baseline documents (the functional documents). If the environment you are using is correct, raise it as a defect, take screenshots, and send them along with the test procedure.

Q. What is the difference between a three-tier and a two-tier application?

A: A client-server application is a 2-tier application. In this architecture, the front end (client) is connected to the database server through a Data Source Name; the front end is the monitoring level.

A web-based architecture is a 3-tier application. In this architecture, the browser is connected to the web server through TCP/IP, and the web server is connected to the database server; the browser is the monitoring level. In general, black-box testers concentrate on the monitoring level of any type of application.

All client-server applications are 2-tier architectures. Here, all the business logic is stored in the clients and the data is stored in the servers. If a user requests anything, the business logic is executed at the client and the data is retrieved from the server (the DB server). The problem is that if any business logic changes, we need to change the logic at each and every client. The best example is a supermarket chain with branches across the city. Each branch has clients, so the business logic is stored in the clients, but the actual data is stored in the servers. If I want to give a discount on some items, I need to change the business logic, which means going to each branch and changing it at each client. This is the disadvantage of client/server architecture.

So the 3-tier architecture came into the picture:

Here the business logic is stored on one server, and all the clients are dumb terminals. If a user requests anything, the request is first sent to the server; the server brings the data from the DB server and sends it to the clients. This is the flow of the 3-tier architecture.

In the example above, if I want to give a discount, all my business logic is on the server, so I need to change it in one place, not at each client. This is the main advantage of the 3-tier architecture.

Software Testing Interview Questions 9

Q. If we have no SRS or BRS but we have test cases, do you execute the test cases blindly or do you follow another process?
A: The test cases should have detailed steps describing what the application is supposed to do. So:

1) The functionality of the application is known.

2) In addition, you can refer to the back end, i.e. look into the database, to gain more knowledge of the application.

Q. How do you execute a test case?
A: There are two ways to execute a test case:
1. The Manual Runner tool, for manual execution and updating of test status.
2. Automated test case execution, by specifying the host name and other automation-related details.

Q. What is the difference between retesting and regression testing?

A: Retesting:

Re-execution of test cases on the same build with different input values is retesting.

Regression testing:

Re-execution of test cases on an updated build is called regression testing.

Q. What is the difference between a bug log and defect tracking?
A: A bug log is a document that keeps the information about bugs, whereas bug tracking is the process.

Q. Who changes the bug status to Deferred?
A: A bug is in the Open state while the developer is working on it, and Fixed after the developer completes the work. If it is not fixed correctly, the tester puts it in Reopen; after the bug is fixed properly, it moves to the Closed state.

Q. What are smoke testing and user interface testing?

A: Smoke testing:
Smoke testing is non-exhaustive software testing that checks that the most critical functions of a program work, without bothering with finer details. The term comes to software testing from a similarly basic type of hardware testing.

User interface testing:
Some say it is nothing but usability testing: testing to determine the ease with which a user can learn to operate, provide input to, and interpret the output of a system or component.

Smoke testing verifies whether the essential functionality of the build is stable or not; for example, if it possesses 70% of the functionality, we say the build is stable.
User interface testing: we verify that all the fields exist as per the format, and we check spelling, graphics, font sizes, and that everything in the window is present.

Q. What are a bug, a defect, an issue and an error?

A: Bug: a bug is identified by the tester.
Defect: a defect often comes with the project itself, e.g. a requirement neglected or misunderstood when the project is received for the analysis phase.
Issue: most of the time, a client-site error.
Error: when something goes wrong in the project on the development side; most of the time it is found by the developer.

More generally:

Bug: a fault or defect in a system or machine.

Defect: an imperfection in a device or machine.

Issue: a major problem that will slow down the development of the project and cannot be resolved by the project manager and project team without outside help.

Error: the deviation of a measurement, observation or calculation from the truth.

Q. What is the difference between functional testing and integration testing?
A: Functional testing is testing the complete functionality of the system or application, checking whether it meets the functional requirements.

Integration testing means testing the functionality of integrated modules when two different modules are combined; for this we use the top-down approach and the bottom-up approach.
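A tiny sketch of the top-down approach: the upper module is tested against a stub standing in for the lower module that is not yet integrated. The module names here are invented for illustration:

```python
def real_tax_service(amount: float) -> float:
    """Lower-level module (not yet integrated in top-down testing)."""
    return amount * 0.08

def stub_tax_service(amount: float) -> float:
    """Stub replacing the lower module: returns a fixed, known value."""
    return 5.0

def checkout_total(amount: float, tax_service) -> float:
    """Upper-level module under test; the dependency is injected."""
    return amount + tax_service(amount)

# Top-down integration test: exercise checkout_total with the stub,
# so the upper module can be verified before the lower one exists.
assert checkout_total(100.0, stub_tax_service) == 105.0
```

The bottom-up approach is the mirror image: the lower module (`real_tax_service` here) is tested first, with a driver calling it in place of the not-yet-written upper module.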

Q. What types of testing do you perform in your organization during system testing? Explain clearly.

A: Functional testing
User interface testing
Usability testing
Compatibility testing
Model based testing
Error exit testing
User help testing
Security testing
Capacity testing
Performance testing
Sanity testing
Regression testing
Reliability testing
Recovery testing
Installation testing
Maintenance testing
Accessibility testing, including compliance with:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)

Q. What is the main use of preparing a traceability matrix? Explain its real-time usage.

A: A traceability matrix is formed by associating requirements with the work products that satisfy them. Tests are linked with the requirements on which they are based and with the product tested against those requirements.

A traceability matrix is a report from the requirements database or repository.
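In its simplest form the matrix is a mapping from requirement IDs to the test cases that cover them; the IDs below are illustrative. Its day-to-day use is spotting requirements with no covering test:

```python
# Requirement -> test cases that cover it (sample data for illustration).
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],          # no test case yet: a coverage gap
}

# Real-time usage: flag requirements with no linked tests.
uncovered = [req for req, tests in traceability.items() if not tests]
print(uncovered)  # ['REQ-003']
```

The same mapping works in reverse: when a requirement changes, the linked test cases tell you exactly which tests must be re-run.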

Q. How do you do the following: 1) usability testing, 2) scalability testing?

A:
Usability testing:
Testing the ease with which users can learn and use a product.

Scalability testing:
Web testing that assesses a website's ability to handle growth in load.

Portability testing:
Testing to find out whether the system/software meets the specified portability requirements.

Q. What do you mean by positive and negative testing, and what is the difference between them? Can you explain with an example?

A: Positive testing: testing the application's functionality with valid inputs and confirming that the output is correct.

Negative testing: testing the application's functionality with invalid inputs and verifying the output.

The difference lies in how the application behaves when invalid input is entered: if it accepts invalid input, the application's functionality is wrong.

Positive testing is meant to show that the software works with valid inputs; it is also called "test to pass". Negative testing aims to show how the software handles invalid inputs, which is also known as "test to fail". Boundary value analysis is the best example of negative testing.

Q. What is a change request, and how do you use it?

A: A change request is an attribute or part of the defect life cycle.

When you as a tester find a defect and report it to your lead, he in turn informs the development team. If the development team says it is not a defect but an extra implementation, or not part of the requirements, the customer has to pay for it. In that case, the status in your defect report becomes Change Request.

Change requests are controlled by a change control board (CCB). If any changes are required by the client after the project starts, they have to come through the CCB, which must approve them. The CCB has full rights to accept or reject changes based on the project schedule and cost.

Q. What is risk analysis? What type of risk analysis did you do in your project?

A: Risk analysis:
A systematic use of available information to determine how often specified and unspecified events may happen and the magnitude of their likely consequences.

Software Testing Interview Questions

Note: Click on Link to open/visit that Page



Software Testing Interview Question

Software Testing Interview Question 1

Software Testing Interview Question 2

Software Testing Interview Question 3

Software Testing Interview Question 4

Software Testing Interview Question 5

Software Testing Interview Question 6

Software Testing Interview Question 7

Software Testing Interview Question 8

Software Testing Interview Question 9

Software Testing Interview Question 10

Software Testing Interview Question 11

Winrunner Interview Question




Software Testing Interview Question 10

Q. Explain the bug life cycle?
A: —

New: When the tester reports a defect.
Open: When the developer admits that it is a bug; if the developer rejects the defect, the status is updated to "Rejected".
Fixed: When the developer makes modifications to the code to correct the bug.
Closed/Reopen: When the tester tests it again. If the expected result shows up, the status is changed to "Closed"; if the problem still exists, it is "Reopen".
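The status flow described above can be sketched as a small state machine; the status names follow the answer, while the `move` helper is purely illustrative:

```python
# A minimal sketch of the bug life cycle, modeled as allowed status
# transitions (statuses as in the answer: New, Open, Rejected, Fixed,
# Reopen, Closed).
TRANSITIONS = {
    "New":    {"Open", "Rejected"},   # developer accepts or rejects
    "Open":   {"Fixed"},              # developer corrects the code
    "Fixed":  {"Closed", "Reopen"},   # tester verifies the fix
    "Reopen": {"Fixed"},              # developer fixes it again
}

def move(status, new_status):
    """Return the new status if the transition is allowed, else raise."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

status = move("New", "Open")     # developer admits the bug
status = move(status, "Fixed")   # developer corrects it
status = move(status, "Reopen")  # tester finds it still fails
status = move(status, "Fixed")   # developer fixes again
status = move(status, "Closed")  # expected result shows up
```

Modeling the cycle this way makes it explicit that, for example, a bug cannot jump from New straight to Closed.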

Q. What is deferred status in defect life cycle?
A: — Deferred status means the developer accepted the bug, but the fix is planned for the next build.

Q. What is a smoke test?
A: — Testing whether the application performs its fundamental functionality correctly, so that the test team can go forward with the application.

Q. Do you use any automation tool for smoke testing?
A: — Yes, automation tools can certainly be used for smoke testing.
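A minimal sketch of what an automated smoke check might look like, using a stand-in `start_app`/`login` pair invented for the example; a real suite would drive the actual build, for instance through a UI or API automation tool:

```python
# Hypothetical stand-ins for the application under test; in practice
# these would start and exercise the real build.
def start_app():
    return {"running": True, "users": {"admin": "secret"}}

def login(app, user, password):
    return app["users"].get(user) == password

def smoke_test():
    """Verify only the fundamental functions so testing can proceed."""
    app = start_app()
    assert app["running"], "application failed to start"
    assert login(app, "admin", "secret"), "basic login failed"
    return "smoke passed"

print(smoke_test())  # -> smoke passed
```

The point is breadth over depth: a handful of checks on critical paths, run on every new build before detailed testing begins.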

Q. What is Verification and validation?
A: — Verification is static. No code is executed. Say, analysis of requirements etc. Validation is dynamic. Code is executed with scenarios present in test cases.

Q. Explain the test plan and its contents?
A: — A test plan is a document which contains the scope for testing the application: what is to be tested, when it is to be tested, and who will test it.

Q. Advantages of automation over manual testing?
A: — Savings in time, resources, and money, especially when the same tests must be run repeatedly.

Q. What is Ad-hoc testing?
A: — Testing performed without any planning or documentation.

Q. What is meant by release notes?
A: — It is a document released along with the product which gives details about the product. It also lists the bugs that are in deferred status.

Q. Scalability testing comes under which type of testing?
A: — Scalability testing comes under performance testing. Load testing and scalability testing are closely related.

Q. What is the difference between Bug and Defect?
A: — Bug: a deviation from the expected result.

Defect: a problem in the algorithm that leads to failure.

A mistake in the code is called an Error.

When, due to an error in coding, test engineers get a mismatch in the application, it is called a Defect.

If the defect is accepted by the development team to be solved, it is called a Bug.

Q. What is a hot fix?
A: — A hot fix is a single, cumulative package that contains one or more files used to address a problem in a software product. Usually, hot fixes are made to address a specific customer situation and may not be distributed outside the customer organization.

In short: a bug found at the customer site which has high priority.

Q. What is the difference between functional test cases and compatibility test cases?
A: — In compatibility testing we have no separate test cases as such; rather, we run the application on different hardware and software configurations.

Q. What is ACID Testing?
A: — ACID testing is related to testing a transaction:
A - Atomicity
C - Consistency
I - Isolation
D - Durability

Mostly this will be done in database testing.
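As a sketch of how the atomicity property might be checked in database transaction testing, the following uses an in-memory SQLite database; the accounts table and amounts are made up for the example:

```python
# Atomicity check: a failed transfer must leave no partial update behind.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('A', 100), ('B', 0)")
conn.commit()

try:
    with conn:  # one transaction: all statements commit, or none do
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'A'")
        raise RuntimeError("simulated failure mid-transfer")
except RuntimeError:
    pass  # the context manager rolled the transaction back

# Atomicity: the partial debit must have been undone.
balance_a = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'A'").fetchone()[0]
assert balance_a == 100
```

Similar tests can be written for the other properties, e.g. running two concurrent transactions to probe isolation.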

Q. What is the main use of setting up a traceability matrix?
A: — To cross-verify the prepared test cases and test scripts against the user requirements.

To track the changes and enhancements that occurred during the development of the project.

A traceability matrix is prepared in order to cross-check the test cases designed against each requirement, hence giving an opportunity to confirm that all the requirements are covered in testing the application.
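A toy traceability matrix can be built by mapping each requirement to the test cases that cover it; the requirement and test-case IDs below are invented for the illustration:

```python
# Build a requirement -> covering-test-cases mapping, then flag any
# requirement that no test case covers.
requirements = ["REQ-1 login", "REQ-2 logout", "REQ-3 password reset"]
test_cases = {
    "TC-01": ["REQ-1 login"],
    "TC-02": ["REQ-1 login", "REQ-2 logout"],
}

matrix = {req: [] for req in requirements}
for tc, covered in test_cases.items():
    for req in covered:
        matrix[req].append(tc)

uncovered = [req for req, tcs in matrix.items() if not tcs]
print(uncovered)  # -> ['REQ-3 password reset']
```

The gap report is exactly the "confirm that all the requirements are covered" use described above.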

Software Testing Interview Questions 9

Q. If we have no SRS or BRS but we have test cases, do you execute the test cases blindly or do you follow any other process?
A: — A test case should have detailed steps describing what the application is supposed to do. So:

1) The functionality of the application is known from the test cases themselves.

2) In addition, you can refer to the backend, i.e. look into the database, to gain more information about the application.

Q. How do you execute test cases?
A: — There are two ways to execute test cases:
1. A manual runner tool for manual execution and updating of test status.
2. Automated test case execution by specifying the host name and other automation-related details.

Q. What is the difference between retesting and regression testing?

A: — Retesting:

Re-execution of test cases on the same build with different input values is retesting.

Regression Testing:

Re-execution of test cases on a modified build is called regression testing.

Q. What is the difference between a bug log and defect tracking?
A: — A bug log is a document which keeps the information about the bugs, whereas defect tracking is the process.

Q. Who will change the bug status to Deferred?
A: — The developer (or the project lead) changes the status to Deferred when the fix is postponed to a later build. Otherwise, a bug is Open while the developer is working on it, Fixed after the developer completes his work, Reopened by the tester if it is not fixed correctly, and Closed once it is fixed properly.

Q. What is smoke testing and user interface testing?

A: — ST:
Smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, without bothering with finer details. The term comes to software testing from a similarly basic type of hardware testing.

UIT:
Some say it is nothing but usability testing: testing to determine the ease with which a user can learn to operate, provide input to, and interpret the output of a system or component.

Smoke testing makes sure the essential functionality of the build is stable; i.e., if it possesses around 70% of the functionality, we say the build is stable.
User interface testing: we verify that all the fields exist as per the specified format, and we check spellings, graphics, font sizes, and that everything required is present in the window.

Q. What is a bug, defect, issue, and error?

A: — Bug: a bug is identified by the tester.
Defect: often the defect comes with the project itself, when some requirement was missed or misunderstood during the analysis phase.
Issue: most of the time, a problem found at the client site.
Error: when something goes wrong in the project on the development side, it is called an error; most of the time this is known by the developer.

Bug: a fault or defect in a system or machine.

Defect: an imperfection in a device or machine.

Issue: an issue is a major problem that will slow down the development of the project and cannot be resolved by the project manager and project team without outside help.

Error:
Error is the deviation of a measurement, observation, or calculation from the truth.

Q. What is the difference between functional testing and integration testing?
A: — Functional testing is testing the complete functionality of the system or application, checking whether it meets the functional requirements.

Integration testing means testing the functionality of integrated modules when two different modules are combined; for this we use the top-down approach and the bottom-up approach.
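As an illustration of the top-down approach, the higher-level module can be tested against a stub that stands in for a lower-level module not yet integrated; all names here are hypothetical:

```python
# Higher-level module under test: computes a cart total and hands it
# to whatever payment gateway it is given.
def checkout(cart, payment_gateway):
    total = sum(cart.values())
    return payment_gateway(total)

def stub_gateway(amount):
    """Stub standing in for the real payment module: always approves."""
    return {"approved": True, "charged": amount}

# Integration test of checkout together with the (stubbed) payment
# interface, exercising the interaction between the two modules.
result = checkout({"book": 20, "pen": 5}, stub_gateway)
assert result == {"approved": True, "charged": 25}
```

In the bottom-up approach the roles are reversed: the real lower-level module is tested first, driven by a temporary "driver" instead of the not-yet-ready higher module.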

Q. What sort of testing you make in organization while you do System Testing, give clearly?

A: — Functional testing
User interface testing
Usability testing
Compatibility testing
Model based testing
Error exit testing
User help testing
Security testing
Capacity testing
Performance testing
Sanity testing
Regression testing
Reliability testing
Recovery testing
Installation testing
Maintenance testing
Accessibility testing, including compliance with:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)

Q. What is the major use of preparing a traceability matrix, and what is its real-time usage?

A: — A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and with the product tested to meet the requirement.

A traceability matrix is a report from the requirements database or repository.


Software Testing Interview Question 11

Q. Can you provide me the correct answer for "Test Bug"?

The idea of testing is to expose defects/bugs. A defect/bug is a deviation from a desired product characteristic. Two categories of defects/bugs are: variance from product specifications, and variance from customer/user expectations.

Q. What is the ONE major element of a test case?

A test case holds elements like: test case number, test case description, expected result, actual result, status, and remarks. In my view, the one key element is the actual result.

Q. Does the cost of fixing a bug from the requirements phase to the testing phase increase slowly, decrease, increase steeply, or remain constant?

The cost of fixing a bug increases steeply from the requirements phase to the testing phase.

Q. What is the Bug Tracking Process: reporting, retesting, and debugging?

First step: bug finding

Second step: bug reporting

Third step: bug debugging (fixing)

Fourth step: re-verification of the reported bug (regression)

This is the process of the bug cycle.

Q. What are the management tools we have in testing?

We have management tools like TestDirector, TeamTrack, Rational ClearQuest, Bugzilla, etc.

Q. What are the test cases prepared by the testing team?

1. Functional Unit Test cases (FUT), 2. Integration Test cases (IT), 3. System Test cases (ST), 4. User Interface Test cases, 5. Validations

Q. Can we write functional test cases based only on the BRD or only on a use case?

We can write the test cases using the BRD, but we may not obtain the full flow information and exact functionality of the business from it. In my view, we can start writing the functional test cases using the BRD, but we cannot baseline the test cases on the basis of the BRD alone.

Q. In an application, if I press the delete button it should give the error message "Are u sure u want to delete", but the application gives the message "Are u sure". Is it a bug? If it is, how would you rate its severity?

It is a bug; its severity should be minor.

Q. In which phase is it best to fix defects: requirements, planning, design, coding, or testing?

The best phase to fix a defect is the requirements phase.

Q. At the start of the project, how will the company come to a conclusion on whether a tool is required for testing or not?

There are several things which decide the automation tool at the time of project initialization: 1. Requirements 2. Project budget 3. Project size 4. Head count 5. Time

Q. Tell the difference between GUI testing and black box testing?

GUI testing falls under black box testing. In GUI testing we make sure that the objects are aligned and placed correctly, covering look and feel; we can also call this cosmetic testing.

Black box testing is testing the overall functionality of a system without any knowledge of the internal code of the application. Here you carry out usability, GUI, functionality, validation, security, system, performance, and user acceptance testing.

Thursday, April 17, 2008

Testing Interview Questions

Software QA and Testing Frequently-Asked-Questions,

What is ‘Software Quality Assurance’?
What is ‘Software Testing’?
What are some recent major computer system failures caused by software bugs?
Why is it often hard for management to get serious about quality assurance?
Why does software have bugs?
How can new Software QA processes be introduced in an existing organization?
What is verification? Validation?
What is a ‘walkthrough’?
What is an ‘inspection’?
What kinds of testing should be considered?
What are 5 common problems in the software development process?
What are 5 common solutions to software development problems?
What is software ‘quality’?
What is ‘good code’?
What is ‘good design’?
What is SEI? CMM? ISO? Will it help?
What is the ‘software life cycle’?
Will automated testing tools make testing easier?

What is ‘Software Quality Assurance’?
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to ‘prevention’.

What is ‘Software Testing’?
Testing involves operation of a system or application under controlled conditions and evaluating the results (eg, ‘if the user is in interface A of the application while using hardware B, and does C, then D should happen’). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn’t or things don’t happen when they should. It is oriented to ‘detection’. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they’re the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization’s size and business structure.

What are some recent major computer system failures caused by software bugs?

· In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage company could proceed; the lawsuit reportedly involved claims that the company was not fixing system problems that sometimes resulted in failed stock trades, based on the experiences of 4 plaintiffs during an 8-month period. A previous lower court’s ruling that “…six miscues out of more than 400 trades does not indicate negligence.” was invalidated.

· In April of 2003 it was announced that the largest student loan company in the U.S. made a software error in calculating the monthly payments on 800,000 loans. Although borrowers were to be notified of an increase in their required payments, the company will still reportedly lose $8 million in interest. The error was uncovered when borrowers began reporting inconsistencies in their bills.

· News reports in February of 2003 revealed that the U.S. Treasury Department mailed 50,000 Social Security checks without any beneficiary names. A spokesperson indicated that the missing names were due to an error in a software change. Replacement checks were subsequently mailed out with the problem corrected, and recipients were then able to cash their Social Security checks.

· In March of 2002 it was reported that software bugs in Britain’s national tax system resulted in more than 100,000 erroneous tax overcharges. The problem was partly attributed to the difficulty of testing the integration of multiple systems.

· A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf software that had long been used in systems for tracking certain U.S. nuclear materials. The same software had been recently donated to another country to be used in tracking their own nuclear materials, and it was not until scientists in that country discovered the problem, and shared the information, that U.S. officials became aware of the problems.

· According to newspaper stories in mid-2001, a major systems development contractor was fired and sued over problems with a large retirement plan management system. According to the reports, the client claimed that system deliveries were late, the software had excessive defects, and it caused other systems to crash.

· In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date ‘31/12/2000′; the trains were started by altering the control system’s date settings.

· News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn’t work.

· In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district’s CIO was fired. The school district decided to reinstate its original 25-year-old system for at least a year until the bugs were worked out of the new system by the software vendors.

· In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.

· Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages.

· In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S. space launch programs, including software integration and testing processes. Congressional oversight hearings were requested.

· A small town in Illinois in the U.S. received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be due to bugs in new software that had been purchased by the local power company to deal with Y2K software issues.

· In early 1999 a major computer game company recalled all copies of a popular new product due to software problems. The company made a public apology for releasing a product before it was ready.

· The computer system of a major online U.S. stock trading service failed during trading hours several times over a period of days in February of 1999 according to nationwide news reports. The problem was reportedly due to bugs in a software upgrade intended to speed online trade confirmations.

· In April of 1998 a major U.S. data communications network failed for 24 hours, crippling a large part of some U.S. credit card transaction authorization systems as well as other large U.S. bank, retail, and government data systems. The cause was eventually traced to a software bug.

· January 1998 news reports told of software problems at a major U.S. telecommunications company that resulted in no charges for long distance calls for a month for 400,000 customers. The problem went undetected until customers called up with questions about their bills.

· In November of 1997 the stock of a major health industry company dropped 60% due to reports of failures in computer billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by government agencies.

· A retail store chain filed suit in August of 1997 against a transaction processing system vendor (not a credit card company) due to the software’s inability to handle credit cards with year 2000 expiration dates.

· In August of 1997 one of the leading consumer credit reporting companies reportedly shut down their new public web site after less than two days of operation due to software problems. The new site allowed web site visitors instant access, for a small fee, to their personal credit reports. However, a number of initial users ended up viewing each other’s reports instead of their own, resulting in irate customers and nationwide publicity. The problem was attributed to “…unexpectedly high demand from consumers and faulty software that routed the files to the wrong computers.”

· In November of 1996, newspapers reported that software bugs caused the 411-telephone information system of one of the U.S. RBOC’s to fail for most of a day. Most of the 2000 operators had to search through phone books instead of using their 13,000,000-listing database. The bugs were introduced by new software modifications and the problem software had been installed on both the production and backup systems. A spokesman for the software vendor reportedly stated that ‘It had nothing to do with the integrity of the software. It was human error.’

· On June 4 1996 the first flight of the European Space Agency’s new Ariane 5 rocket failed shortly after launching, resulting in an estimated uninsured loss of a half billion dollars. It was reportedly due to the lack of exception handling of a floating-point error in a conversion from a 64-bit integer to a 16-bit signed integer.
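The kind of conversion error described above can be reproduced in a few lines; this is a simplified model of a 64-bit value being forced into a 16-bit signed integer, not the actual Ariane 5 code:

```python
# Demonstrates how narrowing a large value into a 16-bit signed integer
# silently corrupts it once the value exceeds 32767.
import struct

def to_int16(value):
    # Keep only the low 16 bits, then reinterpret those bits as signed.
    low16 = value & 0xFFFF
    return struct.unpack("<h", struct.pack("<H", low16))[0]

assert to_int16(30000) == 30000    # fits: conversion is faithful
assert to_int16(40000) == -25536   # overflow: value silently corrupted
```

Without an exception or range check at the conversion point, downstream code receives a plausible-looking but wrong number, which is exactly why this class of bug is so dangerous.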

· Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered.

· Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear war in 1983, according to news reports in early 1999. The software was supposed to filter out false missile detections caused by Soviet satellites picking up sunlight reflections off cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on what he said was a ‘…funny feeling in my gut’, decided the apparent missile attack was a false alarm. The filtering software code was rewritten.

Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied,
“I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords.”
“My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors.”
“My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home.”

Why does software have bugs?

· Miscommunication or no communication - as to specifics of what an application should or shouldn’t do (the application’s requirements).

· Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well engineered.

· Programming errors - programmers, like anyone else, can make mistakes.

· Changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.

· Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

· Egos - people prefer to say things like:

· ‘no problem’
· ‘piece of cake’
· ‘I can whip that out in a few hours’
· ‘it should be easy to update that old code’

instead of:

· ‘that adds a lot of complexity and we could end up making a lot of mistakes’
· ‘we have no idea if we can do that; we’ll wing it’
· ‘I can’t estimate how long it will take, until I take a close look at it’
· ‘we can’t figure out what that old spaghetti code did in the first place’

If there are too many unrealistic ‘no problem’s’, the result is bugs.

· Poorly documented code - it’s tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it’s usually the opposite: they get points mostly for quickly turning out code, and there’s job security if nobody else can understand it (’if it was hard to write, it should be hard to read’).

· Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

How can new Software QA processes be introduced in an existing organization?

· A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.

· Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.

· For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.

· In all cases the most value for effort will be in requirements management processes, with a goal of clear, complete, testable requirement specifications or expectations.

What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term ‘IV & V’ refers to Independent Verification and Validation.

What is a ‘walkthrough’?
A ‘walkthrough’ is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

What’s an ‘inspection’?
An inspection is more formalized than a ‘walkthrough’, typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what’s missing, not to fix anything. Attendees should prepare for this type of meeting by reading thru the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but is one of the most cost effective methods of ensuring quality. Employees who are most skilled at inspections are like the ‘eldest brother’ in the parable in ‘Why is it often hard for management to get serious about quality assurance?’. Their skill may have low visibility but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.

What kinds of testing should be considered?

· Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

· White box testing - based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths, conditions.

· Unit testing - the most ‘micro’ scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
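As a sketch of what a unit test looks like in practice, here is a minimal Python example using the standard `unittest` module; the `apply_discount` function and its rules are invented purely for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Each test method exercises one behavior of one function in isolation, which is what makes unit tests cheap to run and easy to diagnose when they fail.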

· Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

· Integration testing - testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

· Functional testing - black box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn’t mean that the programmers shouldn’t check that their code works before releasing it (which of course applies to any stage of testing.)

· System testing - black box type testing that is based on overall requirements specifications; covers all combined parts of a system.

· End-to-end testing - similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

· Sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a ’sane’ enough condition to warrant further testing in its current state.

· Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
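One common way to automate a regression check is to compare current output against a “golden” result captured from a previously accepted build. The sketch below uses a hypothetical `format_report` function; real suites would store the golden output in a file and cover many such cases:

```python
def format_report(items):
    """Hypothetical function whose output we want to keep stable across releases."""
    return "\n".join(f"{name}: {qty}" for name, qty in sorted(items.items()))

# Output recorded from the last known-good release.
GOLDEN = "apples: 3\npears: 7"

def test_report_unchanged():
    # If a later code change alters the output, this check flags the regression.
    assert format_report({"pears": 7, "apples": 3}) == GOLDEN

test_report_unchanged()
print("regression check passed")
```

The value of such checks grows as fixes accumulate: any modification that accidentally changes previously correct behavior is caught immediately.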

· Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

· Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system’s response time degrades or fails.
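The core idea can be sketched in a few lines: ramp up the number of concurrent “users” and watch how response times change. This toy example simulates the workload in-process with threads; a real load test would drive the deployed system with a dedicated tool such as LoadRunner or JMeter:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    """Stand-in for the operation under load (e.g. one HTTP request)."""
    start = time.perf_counter()
    sum(i * i for i in range(10_000))  # simulated server-side work
    return time.perf_counter() - start

# Increase concurrency step by step and report latency at each level.
for users in (1, 5, 20):
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(handle_request, range(users * 10)))
    print(f"{users:2d} users: median {statistics.median(latencies) * 1000:.2f} ms, "
          f"max {max(latencies) * 1000:.2f} ms")
```

The point of interest is the trend, not the absolute numbers: the load level at which median or maximum latency starts to degrade sharply marks the system’s practical capacity.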

· Stress testing - term often used interchangeably with ‘load’ and ‘performance’ testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

· Performance testing - term often used interchangeably with ’stress’ and ‘load’ testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements documentation or QA or Test Plans.

· Usability testing - testing for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

· Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

· Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

· Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.

· Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

· Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

· Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

· User acceptance testing - determining if software is satisfactory to an end-user or customer.

· Comparison testing - comparing software weaknesses and strengths to competing products.

· Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

· Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

· Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (’bugs’) and retesting with the original test data/cases to determine if the ‘bugs’ are detected. Proper implementation requires large computational resources.
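A toy illustration of the idea: flip one operator in the code under test and check whether the existing test cases detect (“kill”) the mutant. The `add` function and its thin test suite below are invented for demonstration; real mutation-testing tools generate many mutants automatically:

```python
ORIGINAL = "def add(a, b):\n    return a + b\n"

def run_tests(namespace):
    """A deliberately thin test suite: returns True if all cases pass."""
    add = namespace["add"]
    return add(2, 3) == 5 and add(0, 0) == 0

# The original code should pass its own tests.
scope = {}
exec(ORIGINAL, scope)
assert run_tests(scope)

# Introduce a mutant: replace '+' with '-'.
mutant_scope = {}
exec(ORIGINAL.replace("a + b", "a - b"), mutant_scope)
killed = not run_tests(mutant_scope)
print("mutant killed" if killed else "mutant survived - tests are too weak")
```

Note that the case `add(0, 0) == 0` alone would pass even for the mutant; it is the `add(2, 3)` case that kills it, which is exactly the kind of gap mutation testing is designed to expose.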

What are 5 common problems in the software development process?

· Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.

· Unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.

· Inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.

· Featuritis - requests to pile on new features after development is underway; extremely common.

· Miscommunication - if developers don’t know what’s needed or customers have erroneous expectations, problems are guaranteed.