One of the standard issues every test manager faces is having proof of what is tested and what is not. If I have 300-400 test cases, I am in full control. But if I have 5,000-odd test cases, how do I know whether the 3,617th test case was executed or not? I trust my testers. Now imagine I need to run 200 test cases in 3 different browsers; that multiplies my effort. I cannot have 3 testers cover the three browsers, yet it must be done. And when I need testing done on a critical build, my tester falls ill that very day. He does not turn up to work! My client is waiting for the status. Oh, what a mess!
Four to six weeks from the day of the first test execution cycle, my testers get bored of the test cases. Their eyes are not as sharp as before. They feel tired. But they want a salary revision! One of my testers claims he has executed 80 test cases since morning; I am more than 100% sure he could not have done that many. How can I be sure whether someone did it or not?
The best answer is automation. Instead of manually executing the test cases, do the testing with a tool. This can solve all the problems mentioned above. Tools never get tired, tools never get bored, they do not ask for salary revisions, they are fast, they do not apply for leave, and they are consistent!
When it comes to test automation, a tester becomes a developer of automated test scripts. This means the tester generates code, using the tool, to test the application. There is a variety of tools available in the market: QTP by HP, SilkTest by Borland, Rational Functional Tester by IBM, TestComplete by SmartBear, Selenium and Ranorex, to name a few. Some tools work only on browser-based web apps, some work only on rich/thick client apps, and some work on both. But all these tools use the UI of the application to run tests. A human tester uses the UI to carry out functional tests, and these tools do the same. Instead of a human doing a click, the tool clicks the button; instead of a human typing, the tool mimics the keystrokes.
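As a rough illustration, here is a minimal sketch of that idea using Selenium WebDriver (one of the tools listed above) with Python. The URL and element IDs are made up for the example; they are not from any real application.

```python
# A minimal sketch of UI-driven automation with Selenium WebDriver.
# The URL and element IDs below are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                      # launch a browser, just as a tester would
driver.get("https://example.com/login")          # open the page under test

# Instead of a human typing, the tool sends the keystrokes...
driver.find_element(By.ID, "username").send_keys("testuser")
driver.find_element(By.ID, "password").send_keys("secret")

# ...and instead of a human clicking, the tool clicks the button.
driver.find_element(By.ID, "login-button").click()

driver.quit()
```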
The following are the most common features that almost all these tools share:
- Recording (capture test steps)
- Replaying (play back test steps)
- Object identification (knowing the attributes of forms and fields on screen)
- Data-driven tests (use the same steps with different data sets; see the sketch after this list)
- Checkpoints (verification points that compare actual results to expected results)
- Scripting (use a programming language to add intelligence to test scripts)
- File and database handling (if test data or results are stored on disk or in a database)
- Exception handling (a recovery path for when the test script itself fails)
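To make those features concrete, here is a rough sketch of how data-driven tests, checkpoints, file handling and exception handling come together in one script. Again it uses Selenium WebDriver with Python purely for illustration; the URL, element IDs and expected messages are hypothetical.

```python
# A sketch combining data-driven tests, checkpoints, exception handling
# and file handling in a single UI automation script. The URL, element
# IDs and expected messages are hypothetical.
import csv
from selenium import webdriver
from selenium.webdriver.common.by import By

# Data-driven test: the same steps run against several data sets.
test_data = [
    ("valid_user",  "right_password", "Welcome"),
    ("valid_user",  "wrong_password", "Invalid credentials"),
    ("locked_user", "any_password",   "Account locked"),
]

results = []
driver = webdriver.Chrome()

for username, password, expected_message in test_data:
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "login-button").click()

        # Checkpoint: compare the actual result on screen to the expected result.
        actual_message = driver.find_element(By.ID, "message").text
        status = "PASS" if expected_message in actual_message else "FAIL"
    except Exception as error:
        # Exception handling: if the script itself fails (element missing,
        # page not loading), record the error and move on to the next data set.
        status = f"ERROR: {error}"
    results.append((username, status))

driver.quit()

# File handling: store the run's results on disk for the test manager.
with open("test_results.csv", "w", newline="") as report:
    csv.writer(report).writerows(results)
```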
Automation makes sense when one or more of the following hold true:

- The number of test cases for my product is large and I have many regression rounds
- My application is a product, not just a 4-month project
- My product needs to be tested on multiple environments for compatibility
- My product is used by thousands of customers and we cannot afford a single regression issue
- I test my product very frequently, almost every day
- My team costs me more and more, and the project bleeds profitability