
Showing posts from September, 2010

‘Test’ to Increase Confidence in the System

Bugs are neither introduced into a system on purpose, nor does any test case set out merely to find a bug. A good test case is written to verify functionality, and that should be encouraged; a tester should always intend to test the functionality. Bugs are small mistakes, sometimes the result of poorly analyzed stories/scenarios. Moreover, each level of testing, whether unit, dev box, or regression, tries to uncover these bugs at a different level. The skill of a tester therefore lies in writing efficient test cases and in being intelligent enough to prioritize which test cases should be executed first. It’s important for a tester to give feedback early. There are two sides to test execution: 1) blindly executing tests that are unlikely to find bugs or build any confidence in the system; 2) executing a prioritized set of test cases that will uncover functional bugs as well as create confidence in the system. So a tester should be intelligent enough to decide between the two.
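As a rough sketch of that prioritization (not from the original post; the test names, priorities, and stubbed run functions are all hypothetical), a suite could be ordered so that the checks most likely to find functional bugs run, and give feedback, first:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    name: str
    priority: int              # 1 = run first: core functionality, likely regressions
    run: Callable[[], bool]    # stand-in for the real test body

def placeholder_check() -> bool:
    return True                # placeholder for a real verification

suite = [
    TestCase("report footer alignment", priority=3, run=placeholder_check),
    TestCase("user can log in", priority=1, run=placeholder_check),
    TestCase("order total is calculated", priority=1, run=placeholder_check),
    TestCase("profile page tooltip text", priority=2, run=placeholder_check),
]

# Execute in priority order so functional bugs surface early,
# giving the team feedback while there is still time to act on it.
for case in sorted(suite, key=lambda c: c.priority):
    result = "PASS" if case.run() else "FAIL"
    print(f"[P{case.priority}] {case.name}: {result}")
```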

Being a Manual Tester Isn’t an Easy Job!

It’s a general perception that manual testing is not a challenging or tough job and that anyone can become a manual tester. I don’t agree! Automation testing can never replace manual testing; manual testing is an unparalleled activity in the software development life cycle. With my exposure to this field, let me put down some characteristics which I feel a manual tester must have:

Focus: When an application is given to a tester, it happens quite often that he slips from one feature to another before completely testing the first. A manual tester should stay focused and shouldn’t get drifted easily.

Analyzing skills: A tester need not be an analyst but should possess some analysis skills. He should be good at analyzing the application’s behaviour. Apart from the application, a manual tester needs to analyze failures/errors and the impact of each failure on the system. He also needs to analyze the features before testing and come up with the varied set of data required to test them.

Prioritizing: A…

Defect Tracking System – A Waste of Money & Time???

[Image: a typical defect flow]
As a QA I have been logging bugs for 5 years. But what do we do with them? A typical flow is pictured above. If that is all we do with them, why take so much pain to maintain a defect tracking system? To log a standard bug (standard bug = broken functionality), we write the steps to reproduce, pre-requisites (if any), expected result, actual result, priority, severity, a snapshot (if any), and test environment details, and finally save it. That takes roughly 3-5 minutes per bug. To tell the truth, I have never seen any report come out of a defect tracking system that is useful for analyzing the pattern of bugs or finding any trend in them. All I have seen, apart from tracking usage, is a summary of bugs with counts based on priority and severity. Is that the only purpose of a defect tracking system? If that is the case, then why not use a Google spreadsheet rather than investing the company’s money and QA’s time in a bug logging system? I am sure if we mine the data which i…
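To make the point concrete: the priority/severity summary described above needs nothing more than one flat record per bug, which a spreadsheet export covers just as well. A minimal sketch, with hypothetical field names and sample data:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Bug:
    summary: str
    steps_to_reproduce: str
    expected: str
    actual: str
    priority: str    # e.g. "P1".."P4"
    severity: str    # e.g. "Critical" / "Major" / "Minor"
    environment: str

bugs = [
    Bug("Login fails", "1. Open login page ...", "User logs in",
        "500 error", "P1", "Critical", "QA"),
    Bug("Typo in footer", "1. Open home page ...", "Correct text",
        "Misspelled word", "P4", "Minor", "QA"),
]

# The usual output of a defect tracking system: counts by priority and severity.
print(Counter(b.priority for b in bugs))
print(Counter(b.severity for b in bugs))
```

Anything beyond these counts, such as spotting which module or environment produces the most defects over time, is exactly the kind of mining the post argues the logged data deserves.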

Selecting Test Automation Tool

I was reading an interesting discussion on a testing website (discussion thread) about commercial testing tools versus their open source counterparts. Just to name a few, and to revise my own knowledge, some of the prominent software test automation tools are: QTP, LoadRunner, Rational Robot, Silk Performer, TestPartner, etc. On the other hand, the open source tools include: Selenium, Watir, Sahi, Cucumber, Frankenstein, SoapUI, WatiN, etc. No offense to the tools I haven’t mentioned here. :) So when we have such a huge list of test automation tools, the question is: how do we decide on a test automation tool? It’s a very good question and should always be asked before finalizing the test automation plan. Depending on the project, a good test tool is one which:

Supports the project’s technical requirements

Supports multiple environments

Uses a programming language that is easy to learn and use

Allows test data management

Offers easy and structured reporting…
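One way to make such a decision repeatable, purely as an illustration (the criteria weights, scores, and the two tools picked below are made up, not a real evaluation), is a simple weighted scorecard:

```python
# Hypothetical weighted scoring of candidate tools against selection criteria.
criteria_weights = {
    "tech_requirements": 5,   # must-have: support for the project's stack
    "multi_environment": 3,
    "language_ease": 3,
    "test_data_mgmt": 2,
    "reporting": 2,
}

# Scores 0-5 per criterion; these numbers are illustrative only.
tool_scores = {
    "Selenium": {"tech_requirements": 5, "multi_environment": 4,
                 "language_ease": 4, "test_data_mgmt": 3, "reporting": 3},
    "QTP":      {"tech_requirements": 4, "multi_environment": 3,
                 "language_ease": 3, "test_data_mgmt": 4, "reporting": 4},
}

def weighted_total(scores: dict) -> int:
    """Sum each criterion score multiplied by its weight."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

# Rank candidates from best to worst overall fit.
for tool, scores in sorted(tool_scores.items(),
                           key=lambda kv: -weighted_total(kv[1])):
    print(f"{tool}: {weighted_total(scores)}")
```

The value is less in the arithmetic than in forcing the team to write down, and weight, what the project actually needs before committing to a tool.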

Identifying Memory Leakage

When I think of performance testing, response time is not the only thing that comes to mind; performance testing should not be done solely to capture response times. Based on the nature of the application and its usage, we should minimally run two types of test. The first finds out how many concurrent users the system will support for a given response time, though I have seen a real decline in the perceived need for this kind of test. Application performance is not always about the response time of a web service. As part of performance testing, one should also run a soak test. Soak tests are long-duration tests on the system with a static number of concurrent users that test the overall robustness of the application. The intention of this test is to identify any performance degradation over time through memory leaks, increased GC, or any other problem in the system. To get the memory footprint of the application, we need to set up some specific counters on the server where the ap…
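As a rough sketch of the monitoring side (not the author’s counter setup; psutil, the process ID, and the sampling interval are all assumptions), a script like this could sample a server process’s resident memory during a soak run, where a steadily climbing trend under a constant user load hints at a leak:

```python
import time

import psutil  # third-party: pip install psutil

def sample_rss(pid: int, interval_s: int = 60, samples: int = 10) -> list:
    """Record the resident set size (bytes) of a process over time."""
    proc = psutil.Process(pid)
    readings = []
    for _ in range(samples):
        rss = proc.memory_info().rss
        readings.append(rss)
        print(f"RSS: {rss / 1024 / 1024:.1f} MiB")
        time.sleep(interval_s)
    return readings

# During a soak test, monotonically growing RSS while the number of
# concurrent users stays flat is a strong hint of a memory leak.
if __name__ == "__main__":
    sample_rss(pid=1234, interval_s=60, samples=10)  # hypothetical PID
```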