There are four levels of testing:
- Unit Testing
- Integration Testing
- System Testing
- User Acceptance Testing
There are four types of testing:
- Sanity Testing: a quick check of whether a build is stable enough to be tested.
- Unit Testing, Integration Testing, and System Testing
- Regression Testing
- Final Regression Testing, also called Postmortem Testing
There are six main stages of the SDLC:
Requirement: In this phase the technical team gathers requirements from the customer through meetings, to learn what they actually want or need in their product.
Analysis: In this phase the requirements are converted into a document that covers all the customer requirements, called the FRS (Functional Requirement Specification). It is then approved by the head or other senior people on the customer side. After approval the requirements are nailed down, and development starts from there.
Design: In this phase the design of the product is prepared, i.e. all the requirements are converted into an architectural design (the SRS, Software Requirement Specification, is prepared).
This phase includes :
- LLD – Low-Level Design documentation: this level deals with lower-level modules. The diagram used here is the Data Flow Diagram. Developers handle this level.
- HLD – High-Level Design documentation: this level deals with higher-level modules. The diagram used here is the ER (Entity Relationship) diagram. Both developers and testers handle this level.
Coding: In this phase all the customer requirements are converted into code.
Testing: In this phase the software under development is tested for quality, to ensure that the product being built is error-free and of good quality.
This phase includes two types of testing:
i. Static Testing: reviewing the work products of each and every phase completely. It is also called reviews.
ii. Dynamic Testing: testing after the completion of the entire project.
Maintenance: In this phase the maintenance of the product is carried out.
The tester’s work starts from the initial stage of the SDLC, i.e. with requirement gathering and analysis.
At this stage the tester starts reviewing the documents, trying to find ambiguous requirements or requirements that cannot be fulfilled.
- As I know it, sanity and smoke testing are different: smoke testing checks whether the build is installed properly and is ready for further major testing.
- Sanity testing is carried out after smoke testing, to check whether the major functionality is working properly before proceeding with further testing.
Generally this type of testing is done after all other types of testing are done. In this type of testing neither documents nor test cases are followed; it is done randomly to find defects.
Software Testing involves operating a system or application under controlled conditions and evaluating the results; the controlled conditions should include both normal and abnormal conditions.
Software Quality Assurance involves the entire software development PROCESS – monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.
The difference in the software, between the state of the project as planned and the actual state that has been verified as operating correctly, is called the software quality gap.
In Equivalence Partitioning, a test case is designed so as to uncover a group or class of errors. This limits the number of test cases that might otherwise need to be developed. Here the input domain is divided into classes or groups of data. These classes are known as equivalence classes, and the process of forming them is called equivalence partitioning. Equivalence classes represent a set of valid or invalid states for input conditions.
It has been observed that programs that work correctly for a set of values in an equivalence class fail on some special values. These values often lie on the boundary of the equivalence class. Boundary values for each equivalence class, including the equivalence class of the output, should be covered. Boundary value test cases are also called extreme cases. Hence, a boundary value test case is a set of input data that lies on the edge or boundary of a class of input data, or that generates output lying at the boundary of a class of output data.
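The two techniques can be sketched together on a hypothetical validation function (the `is_valid_age` name and the 18-60 valid range are assumptions made up for illustration):

```python
# Hypothetical function under test: accepts ages in the valid range [18, 60].
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per class is enough.
equivalence_cases = [
    (10, False),  # invalid class: below the valid range
    (35, True),   # valid class: inside the range
    (75, False),  # invalid class: above the valid range
]

# Boundary value analysis: test on and just outside each edge of the valid class.
boundary_cases = [
    (17, False), (18, True),  # lower boundary
    (60, True), (61, False),  # upper boundary
]

for age, expected in equivalence_cases + boundary_cases:
    assert is_valid_age(age) == expected, f"unexpected result for age {age}"
```

Three equivalence cases stand in for the whole input domain, while the four boundary cases target the edge values where such code most often fails.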
Miscommunication or no communication – failure to understand the application’s requirements.
Software complexity – The complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development.
Programming errors – programmers “can” make mistakes.
Changing requirements – A redesign, rescheduling of engineers, effects on other projects, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors.
Time pressures – scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
Poorly documented code – it’s tough to maintain and modify code that is badly written or poorly documented; the result is bugs.
Software development tools – various tools often introduce their own bugs or are poorly documented, resulting in added bugs.
Finding a bug consists of a number of steps that are performed:
- Searching for and locating the bug
- Analyzing the exact circumstances under which the bug occurs
- Documenting the bug found
- Reporting the bug to the developers and, if necessary, helping them reproduce the error
- Testing the fixed code to verify that it really is fixed
When a program is sent for testing (or a website is given), a list of any known bugs should accompany it. If a bug is found, the list is checked to ensure that it is not a duplicate. Any bug not found on the list is assumed to be new.
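The known-bug check described above can be sketched as a simple lookup. The bug entries and the (component, summary) identity scheme below are illustrative assumptions; real trackers match duplicates far more loosely:

```python
# Known bugs that accompany the build, keyed by (component, summary).
known_bugs = {
    ("login", "password field accepts blank input"),
    ("search", "results page times out for long queries"),
}

def classify_bug(component, summary):
    """Return 'duplicate' if the bug is already on the known list, else 'new'."""
    if (component, summary) in known_bugs:
        return "duplicate"
    return "new"

print(classify_bug("login", "password field accepts blank input"))  # duplicate
print(classify_bug("cart", "order total ignores shipping"))         # new
```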
Requirements are the details describing an application’s externally perceived functionality and properties. Requirements should be clear & documented, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, ‘user-friendly’ (too subjective). Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.
A common problem and a major headache.
It’s helpful if the application’s initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. If the code is well commented and well documented, changes are easier for the developers. Use rapid prototyping whenever possible to help customers feel sure of their requirements and to minimize changes. Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or to set up only higher-level, generic-type test plans).
Testing is a process which identifies correctness, completeness and quality of software.
- A “test to break” attitude.
- An ability to take the point of view of the customer.
- A strong desire for quality, and an attention to detail.
- Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful.
- Also, they must be able to understand the entire software development process and how it fits into the business approach and goals of the organization.
The Test Engineer’s function is to use the system much as real users would: find all the bugs, find ways to replicate them, submit bug reports to the developers, and provide feedback to the developers, i.e. tell them whether they have achieved the desired level of quality.
In addition, a test engineer should:
- Create test cases, procedures, scripts and generate data.
- Execute test procedures and scripts, analyze standards of measurements, evaluate results of system / integration / regression testing.
- Speed up the work of the development staff;
- Reduce organization’s risk of legal liability;
- Give you the evidence that software is correct and operates properly;
- Improve problem tracking and reporting;
- Maximize the value of software;
- Maximize the value of the devices that use it;
- Assure the successful launch of the product by discovering bugs and design flaws before users get discouraged, before shareholders lose their cool and before employees get bogged down;
- Help the work of the development staff, so the development team can devote its time to building up the product;
- Promote continual improvement;
- Provide documentation required by ISO, CMM, FDA, FAA, other regulatory agencies and requested by customers;
- Save money by discovering defects ‘early’ in the design process, before failures occur in production, or in the field;
- Save the reputation of the company by discovering bugs and design flaws before they damage that reputation.
Unit Testing is a method of testing that verifies that the individual units of source code are working properly.
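A minimal sketch using Python’s `unittest` framework; the `discount` function and its rules are invented for illustration:

```python
import unittest

# Hypothetical unit under test: a small, isolated function.
def discount(price, percent):
    """Apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertAlmostEqual(discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(discount(80.0, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)
```

Each test exercises the unit in isolation, covering a normal case, an edge case, and an error case; running `python -m unittest` on the file executes all three.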
- Integration testing is a kind of black box testing done after unit testing.
- The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements after the units are integrated.
- Test cases are developed with the express purpose of exercising the interfaces between the components. Integration testing is considered complete when actual results and expected results are either in line or the differences are explainable/acceptable based on client input.
White box / Clear box testing is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.
Black box testing a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the “inner workings” of the software.
System testing is black box testing, performed by the Test Team, and at the start of the system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed. System testing simulates real life scenarios that occur in a “simulated real life” test environment and test all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input.
Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at unit and integration test levels.
The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not “undone” any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.
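The baseline comparison can be sketched like this (the test IDs and result strings are hypothetical placeholders):

```python
# Baseline of expected results, maintained from the previous verified release.
baseline = {
    "TC-01": "login succeeds",
    "TC-02": "report totals match",
}

def find_regressions(current_results):
    """Return the test IDs whose current result differs from the baseline."""
    return sorted(
        test_id
        for test_id, expected in baseline.items()
        if current_results.get(test_id) != expected
    )

# A clean run matches the baseline; a changed TC-02 is flagged as a regression.
print(find_regressions({"TC-01": "login succeeds", "TC-02": "report totals match"}))  # []
print(find_regressions({"TC-01": "login succeeds", "TC-02": "report totals wrong"}))  # ['TC-02']
```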
Alpha testing is testing of an application/project when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team but still within the company, e.g. in-house software test engineers or software QA engineers.
Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.
Gamma testing is testing of software that has all the required features but has not gone through all the in-house quality checks. Cynics tend to refer to such software releases as “gamma testing”.
Stress testing is testing that investigates the behavior of software (and hardware) under extraordinary operating conditions. For example, when a web server is stress tested, testing aims to find out how many users can be online at the same time without crashing the server. Stress testing tests the stability of a given system or entity. It tests something beyond its normal operational capacity in order to observe any negative results. For example, a web server is stress tested using scripts, bots, and various denial-of-service tools.
Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes and response times, as defined by requirements.
- Load testing is testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.
- Load testing simulates the expected usage of a software program by simulating multiple users that access the program’s services concurrently. It is most useful and most relevant for multi-user systems and client/server models, including web servers. For example, the load placed on the system is increased above normal usage patterns in order to test the system’s response at peak loads.
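A toy sketch of the idea using threads; the `fake_service` stub and the user count of 20 are assumptions standing in for a real system and real load scripts:

```python
import threading
import time

def fake_service():
    """Stand-in for a real service call."""
    time.sleep(0.01)  # pretend to do work

response_times = []
lock = threading.Lock()

def user_session():
    """One simulated user: call the service once and record the response time."""
    start = time.perf_counter()
    fake_service()
    elapsed = time.perf_counter() - start
    with lock:
        response_times.append(elapsed)

# Simulate 20 concurrent users hitting the service at the same time.
threads = [threading.Thread(target=user_session) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(response_times)} requests, max response {max(response_times):.3f}s")
```

Real load tools such as LoadRunner scale this pattern to thousands of users and also collect metrics from the servers under test.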
Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic functionality to demonstrate proper implementation.
Smoke testing is a quick-and-dirty test that the major functions of a piece of software work, without bothering with finer details. The term originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire.
Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.
This ratio is not a fixed one, but depends on what phase of the software development life cycle the project is in.
When a product is first conceived, organized, and developed, this ratio tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers.
In sharp contrast, when the product is near the end of the software development life cycle, this ratio tends to be 1:1, or even 1:2, in favor of testers.
Software test cases are recorded in a document that describes inputs, actions, or events and their expected results, in order to determine whether all features of an application are working correctly.
Test case templates contain all particulars of every test case:
- Test case No
- Test Case ID
- Test Description
- Test Precondition
- Test Procedures/Steps
- Test Case code
- Expected Result
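One way to see the template in use is to fill it in for a single hypothetical case (all field values below are invented):

```python
# A filled-in instance of the template; keys mirror the fields listed above.
test_case = {
    "Test case No": 1,
    "Test Case ID": "TC_LOGIN_001",
    "Test Description": "Verify login with a valid user name and password",
    "Test Precondition": "User account exists and the login page is open",
    "Test Procedures/Steps": [
        "Enter a valid user name",
        "Enter the matching password",
        "Click the Login button",
    ],
    "Test Case code": "LOGIN-01",
    "Expected Result": "User is taken to the home page",
}

# The template doubles as a completeness check for every written case.
required_fields = [
    "Test case No", "Test Case ID", "Test Description", "Test Precondition",
    "Test Procedures/Steps", "Test Case code", "Expected Result",
]
missing = [field for field in required_fields if field not in test_case]
assert not missing, f"incomplete test case, missing: {missing}"
```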
All documents should be written to a standard and template. Standards and templates maintain document uniformity.
Software test reports are recorded in a document that describes the output of tested actions or events, and the version/label, in order to determine whether all features of an application are working correctly.
Test report templates contain all particulars like:
- FRS version / unique reference
- Functionality / Feature
- Test case ID
- Test Inputs
- Test Steps
- Expected Outputs
- Test Result
- Remarks / Observed outputs / comments
- Developer’s response to the observed output
Also:
- Rounds of testing, mentioning the label/version number with date
- Tester name and effort taken
Some common testing tools:
Quick Test Professional (QTP) is an automated functional Graphical User Interface (GUI) testing tool that allows the automation of user actions on a web or client based computer application.
WinRunner is Mercury Interactive’s enterprise functional testing tool. It is used to quickly create and run sophisticated automated tests on your application. WinRunner helps you automate the testing process, from test development to execution. You create adaptable and reusable test scripts that challenge the functionality of your application. Prior to a software release, you can run these tests in a single overnight run, enabling you to detect defects and ensure superior software quality.
LoadRunner is a performance and load testing product for examining system behaviour and performance, while generating actual load. LoadRunner can emulate hundreds or thousands of concurrent users to put the application through the rigors of real-life user loads, while collecting information from key infrastructure components (Web servers, database servers etc).
TestDirector: its four modules (Requirements, Test Plan, Test Lab, and Defects) are seamlessly integrated, allowing for a smooth information flow between the various testing stages. The completely Web-enabled TestDirector supports high levels of communication and collaboration among distributed testing teams, driving a more effective, efficient global application-testing process.
Silk Test is a tool specifically designed for regression and functionality testing. Silk Test is the industry’s leading functional testing product for e-business applications, whether Windows-based, Web, Java, or traditional client/server-based. Silk Test also offers test planning, management, direct database access and validation, the flexible and robust 4Test scripting language, and a built-in recovery system for unattended testing.
Rational Test RealTime (RT-RT): a cross-platform solution for component testing and runtime analysis, designed specifically for those who write code for embedded and other types of pervasive computing products. It supports safety- and business-critical embedded applications.
- Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, and walkthroughs and inspection meetings.
- Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.
Suppose 15 bugs were found in version X and fixed. How will you come to know, in the next version (say version Z), whether those were fixed or not?
The interviewer is expecting the release notes here, because the release notes for version Z mention which bugs were fixed.