Posted by Ben Simo
Let's not allow the machinery of testing to distract us from the craft of testing.
Over ten years ago, James Bach published the first version of Test Automation Snake Oil.
In this article, James identified eight "reckless assumptions" of the classic arguments for test automation. If we aren't careful, it can be easy to start believing statements based on these assumptions.
Have you made any of these assumptions? Read the article for details. Then take a look at Sam Burgiss' great review.
6 Comments:

May 11, 2007
Shrini Kulkarni wrote:

Ben -
Here is a link:
http://shrinik.blogspot.com/2007/05/reliving-taso.html
Shrini
May 12, 2007
Pradeep Soundararajan wrote:

I wonder why the world still thinks of automation as a faster approach than manual testing.
Maybe many have been bitten by the wrong snakes, and the poison is having a long-lasting effect.
The Snake Oil article from James is a sure cure for the poisoned ones.
May 13, 2007
Ben Simo wrote:

"I wonder why the world still thinks of automation as a faster approach than manual testing."

Some tasks can be executed faster by automation. The fallacy comes when people think automation is faster because it requires no human intervention. I see two flaws in this. The first is the assumption that automation can be created that does not require ongoing maintenance. The second is the loss of cognitive thought during execution when people are removed from testing.
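To put rough numbers on that first flaw, here is a minimal break-even sketch in Python. Every variable name and figure is a hypothetical illustration, not a measurement from any real project:

```python
# Hypothetical break-even sketch: automation is only "faster" once its
# up-front scripting cost and ongoing upkeep have been paid back.
# All numbers below are invented for illustration.

manual_hours_per_run = 2.0      # hands-on execution of one test pass
script_dev_hours = 40.0         # initial cost to automate that pass
upkeep_hours_per_run = 0.5      # average maintenance as the product changes
attention_hours_per_run = 0.25  # human time to review results and triage failures

def cumulative_effort(runs: int) -> tuple[float, float]:
    """Return (manual, automated) human-effort hours after `runs` passes."""
    manual = runs * manual_hours_per_run
    automated = script_dev_hours + runs * (upkeep_hours_per_run + attention_hours_per_run)
    return manual, automated

for runs in (5, 10, 20, 40, 80):
    manual, automated = cumulative_effort(runs)
    winner = "automation" if automated < manual else "manual"
    print(f"{runs:3d} runs: manual {manual:6.1f} h, automated {automated:6.1f} h -> {winner}")
```

With these made-up numbers, the automation doesn't pay for itself until the thirty-second run -- and even then the arithmetic says nothing about the second flaw: the sapient observation lost when a person no longer executes the test.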
Automation can be a great tool. I like tools. However, a great tool does not make testing great. Thinking that a tool can make someone a good tester (a claim that some tool vendors make) is like thinking that the right tools will make anyone a great doctor.
Ben
May 15, 2007
Pradeep Soundararajan wrote:

"Thinking that a tool can make someone a good tester (a claim that some tool vendors make) is like thinking that the right tools will make anyone a great doctor."

Cool! That means everyone is a great pilot :) [as per tool vendors]
Darn!
August 24, 2007
TestyRedhead wrote:

"Assumption: We can quantify the costs and benefits of manual vs. automated testing."

Ack! Now I'm in trouble. I can't figure out how to stand up for the manual testing that needs protecting, and for some balance between the two, if I can't somehow quantify and compare. Can't I just quantify total time in vs. bugs that got fixed out?
I'm looking for anyone who has ever tried this. If it always fails, maybe I'd better stop trying.
Testy Redhead
August 24, 2007
Ben Simo wrote:

I believe we can compare manual and automated testing -- but only at a superficial level. In comparing them, we discover that they are different things. As James states in the article, they aren't two different ways to do the same thing.
Human testers can do things that automation cannot do. Automation can do things that people cannot do.
Automation has its place just as sapient work by people has its place. Smartly using both people and machines in our overall testing strategy gives us something better than either one without the other.
A human tester who does nothing more than follow a script will still do things differently than a computer executing that script. They aren't doing the same thing. They will likely find different bugs. This makes real comparison difficult.
Dot Graham advocates calculating equivalent manual test effort (EMTE): the manual execution time that an automated run is said to replace. This sounds like a good idea on the surface. However ...
We need to consider the manual work involved in coding, maintaining, executing, reviewing results, and investigating errors (testing).
We also need to consider that the automated and human test executors are doing different things.
An automated test that runs in half an hour with an EMTE of 2 hours is not necessarily equal to 2 hours of manual testing. It could be more valuable. It could be less valuable. It could be doing something so different that there is no comparison.
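To make that caveat concrete, here is a small sketch with my own invented numbers (not Graham's) showing how an EMTE ratio shrinks once the human effort around the automated run is counted:

```python
# Hypothetical EMTE sketch. EMTE counts only execution time, so a 0.5 h
# automated run with an EMTE of 2 h looks like a 4x win. Folding in the
# surrounding human effort tells a different story. All numbers invented.

emte_hours = 2.0            # manual execution the automated run stands in for
automated_run_hours = 0.5   # machine execution time

# Per-run human effort that EMTE leaves out (amortized):
review_results_hours = 0.25
investigate_failures_hours = 0.5
maintenance_hours = 0.75
human_hours = review_results_hours + investigate_failures_hours + maintenance_hours

apparent = emte_hours / automated_run_hours
adjusted = emte_hours / (automated_run_hours + human_hours)
print(f"Apparent EMTE ratio:   {apparent:.1f}x")
print(f"Effort-adjusted ratio: {adjusted:.1f}x")
```

And even an honest effort-adjusted ratio still compares different activities: the automated run and the manual session are not doing the same thing, so the number alone can never settle whether one is "worth" the other.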
The assertion that we can quantify the costs and benefits of anything in testing is only one of many assumptions in our business that frustrate me.
We can't really quantify things that are qualitative. Doing so requires that we dumb down the data used for making decisions. And that can lead us to make decisions based on the wrong things -- often without even realizing it.
Sometimes we have so much data that we need quantitative methods to help us focus on specific information. We just can't rely on numbers alone.
Do I buy one model of car instead of another based on counts of features in advertising brochures? Do I buy a car simply because of its price? Its gas mileage? Its passenger capacity? No. I want to know the details of those features. I want to know how the car meets my specific needs. Such decisions are based on quality, not just quantity.