Test automation is usually implemented in bulk: more tests, written faster, chasing coverage numbers. What you ultimately get is a huge load of tests, a maintenance nightmare, and possibly dodgy execution. Even the leaner approach of only automating what makes sense doesn't remove the underlying issue.
It is only with a smarter implementation strategy that one can achieve the purported benefits of test automation. An engine-based design provides a stack of reusability, and reusable components have been held up as an ideal within software programming since the advent of the first function.
While the marketers try to spin the opposite story, test automation is effected by writing a program to test a program. Why then are the basic premises of good software design so commonly ignored in test automation?
I will continue to advocate an approach to test automation that produces a fitter test suite, rather than a program for bulking up a test suite.
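To make the engine idea concrete, here is a minimal sketch in Java of the kind of layering I mean. The names here (TestStep, Engine) are hypothetical illustrations of reusable components, not any particular framework:

```java
import java.util.List;

// A reusable building block: each step does one well-defined thing
// and reports whether it succeeded.
interface TestStep {
    String name();
    boolean run() throws Exception;
}

// The "engine": generic execution and reporting, written once and
// reused by every test, rather than re-implemented in every script.
class Engine {
    void execute(List<TestStep> steps) {
        for (TestStep step : steps) {
            try {
                System.out.println(step.name() + ": " + (step.run() ? "PASS" : "FAIL"));
            } catch (Exception e) {
                System.out.println(step.name() + ": ERROR - " + e.getMessage());
            }
        }
    }
}
```

Individual tests then become compositions of existing steps, so when the application changes, the fix happens once in the step rather than in every test that touches that screen.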
SWIFT has been a background idea for a while. Ultimately what is needed is a means of generating a message from known input as well as checking a received message against some expectations. After running into WIFE last year I figured the process had become eminently simpler. Time, however, can be a nasty thing.
With two projects boiling away with furious vigor, it has become time to get the basic SWIFT message validation tool written. This is not due to Father Time having changed his ways, but to both systems I'm working with showing an impending need for it.
I must say there's been an auspicious start... the open source library hadn't been touched in a while. Not much to fear though, since SWIFT doesn't change all that often. False confidence on my part. After popping a sample into Eclipse, all that came back was a simple class-not-found error.
Yay for open source, in that I could pop the source into a project and try to figure it out. It turns out there was a dependency on Java 1.4. Ultimately, a simple update to the project's Java version and the classes could no longer hide from the compiler.
So now I just need to figure out how the library works... It would probably help to get a good understanding of how SWIFT messages are put together before doing this, but the clock is ticking. Heck, now that I have discarded my assumption that the newlines between tags within the message fields are simply there for readability, the way forward is looking almost too good to be true.
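To make that concrete, here is roughly the kind of check I'm after. I'm assuming the parser API as it appears in later WIFE/Prowide Core releases (SwiftParser, SwiftMessage, getTagValue); the exact classes and signatures in the older jar may well differ:

```java
import com.prowidesoftware.swift.io.parser.SwiftParser;
import com.prowidesoftware.swift.model.SwiftMessage;

public class SwiftCheck {
    public static void main(String[] args) throws Exception {
        // A minimal MT-style payload: the newlines between the
        // :tag:value lines in block 4 are structural, not cosmetic.
        String raw = "{1:F01TESTBANKAXXX0000000000}"
                + "{2:I103TESTBANKXXXXN}"
                + "{4:\r\n"
                + ":20:REFERENCE123\r\n"
                + ":32A:090210ZAR1000,00\r\n"
                + "-}";

        SwiftMessage msg = new SwiftParser(raw).message();

        // Checking a received message against an expectation:
        // block 4 carries the business fields, addressed by tag.
        String reference = msg.getBlock4().getTagValue("20");
        if (!"REFERENCE123".equals(reference)) {
            throw new AssertionError("Unexpected field 20: " + reference);
        }
        System.out.println("Field 20 as expected: " + reference);
    }
}
```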
HP's (or ex-Mercury Interactive's) Quality Center is a test management tool. This means it provides a central storage area for test requirements, test cases, test execution records and defects. Great, huh? Well, it might be, but for the fact that testing is meant to identify as many bugs as possible within an application. The problem with all test management tools is that the focus falls on the management aspect at the expense of the target purpose: actually performing testing. This is a wonderfully general statement.
Today I was asked to look at a customization bug in QC. I sympathize with developers... how can a tester write a bug report without explaining the problem? I actually had input from four different people and still couldn't tell anyone what the problem was. So I looked at the project myself and tried a few things; my best guess was that the field change rule was not being activated on a new defect, while it was being activated when editing a defect.
Having at least found a problem... I then looked at the script. Having helped out before, I knew I was heading into a realm of badly coded, hacked-together, copy-paste nightmares. It may have a VBScript backend, but that is no excuse. Today's function was the worst I've seen. The entire block of code under the new defect customization was a copy of the field change customization. Besides the fact that the fields are blank, so changing some entries to prettier strings is a futile exercise, all the code is hidden within an "On Error Resume Next" block.
Herein lay the problem. What I uncovered was another instance of attempting to set a value in one field based on the entry in another field, which is necessarily blank on a new defect, so the resulting error is happily swallowed by the field change customization. I exposed the error by simply including the error description in a message box. I fixed it, ran again, and received a different error. This second error was the annoying one. A half hour later, with everything in the function commented out and an error reset prior to the MsgBox, I was still getting the error.
My conclusion is that the function customizations in QC are not scripted through a clean interface, as they trigger an "object does not support that function" error. So much for knowing whether or not the code has problems.
I'm unclear as to what my actual interpretation of the value of testing is, but I need to get some thoughts out.
Software testing does not have a measurable deliverable in the sense of a tangible product. It basically provides a potential of value. This potential can be attributed to the following:
An in-depth understanding of the system which might not be available anywhere else in the development team
Insuring the expectation that the system delivers on its potential
Identifying additional uses of the system
You don't buy insurance for the return on investment, but rather as a means of covering yourself should the insured item become in some manner impaired or absent. It should thus be reasonable to see the cost of testing as similar to the purchase of an insurance policy. The value insured is where the question re-arises. An insured item has a price, and it is reasonable to expect some sort of guarantee against that cost. In insurance, the guarantee is only as good as the company holding the policy. Would anyone put a guarantee on the state of the software after it has been through the testing cycle?
To come back to the insured value: to obtain one, we could look at the business requirements. A conventional technique is to prioritize the functions. The prioritization process in itself will identify those areas that are most critical to the business. It is likely that these processes are the ones that add the most value and thus need the most insurance (or testing). Using some weighting of function priority against tester cost-to-company figures, a dollar value can be generated for the value the testers are there to insure.
From this scheme, various supplementary metrics can be applied against the generated dollar amount. These metrics could provide business-level feedback on the cost a bug would have represented had it not been found (weighted to cost and fudged by severity). It might be interesting to establish a basic formula for doing this, and to see whether it would be even remotely useful outside of theoretical obscurity (or just hilarious).
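For what it's worth, a naive formalization might look like the following; the weights, business values and the severity fudge factor are placeholders of my own invention, not established quantities:

\[
V_{\text{insured}} = \sum_{i} w_i \, c_i
\qquad
C_{\text{bug}} = s \cdot w_j \, c_j
\]

where \(w_i\) is the priority weighting of function \(i\), \(c_i\) is its business value, and a bug found in function \(j\) with a severity fudge factor \(s\) between 0 and 1 gets credited as the averted cost \(C_{\text{bug}}\).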
Information on a system is a long-term benefit that could be realized only by the establishment of a software testing department with effective retention policies. Cross-system or domain knowledge can be considered to be an additional value brought in by the test team. This would be a fuzzy value that equates to reduced time to realization rather than a physical amount. It could also be interpreted as improving the quality of the test effort using past experience. I'm not sure that any scheme could be devised to measure this in terms of a dollar value...