Sunday, April 27, 2008

A Recorded Framework

Record and playback carries a negative stigma, but careful use can still produce something genuinely usable.

Maximizing reuse is the obvious goal in any framework: the more that is reused, the less effort goes into maintaining the automated scripts. There are a few considerations when evaluating a tool to achieve this:
  • Does the tool support calling scripts from within a script?
  • Does the tool support another means of calling a script?
  • Can a script be saved as a function and called as a function?
  • Does the tool support a global object map?
  • Does the tool support associated libraries?
These various features reduce maintenance by facilitating reuse. The structure of the scripts also needs some consideration:
  • How to pass data?
  • How to handle failure?
  • What granularity should be used in setting up the scripts?
How these are answered depends on the capabilities of the tool. The most useful options generally lie with more advanced frameworks, which in practice means that scripts are coded rather than recorded.
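To make that concrete in QTP terms: the recorded steps can be pushed into a function in an associated library, leaving the script itself as little more than data-driven calls. A minimal sketch only, assuming a shared object repository supplies the objects and with all names invented:

' In an associated function library: one reusable routine (names are invented).
Public Function LoginAs(userName, password)
    Browser("App").Page("Login").WebEdit("User").Set userName
    Browser("App").Page("Login").WebEdit("Password").Set password
    Browser("App").Page("Login").WebButton("Sign In").Click
End Function

' The calling script shrinks to a single, data-driven line.
LoginAs DataTable("User", dtGlobalSheet), DataTable("Password", dtGlobalSheet)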

Sunday, April 20, 2008

Test Automation Frameworks

There is confusion over what a framework is. So what should one call a framework?

To me a framework is simply an implementation strategy: an idea that is applied to the whole test automation project. I've seen narrower definitions, but they don't make much sense to me. My broad definition may open the door to the reviled record-and-playback, but record-and-playback can be used as a test automation strategy (albeit with a lot of care) and so should be included.

The generally accepted primary framework types are:

  • Data driven
  • Keyword driven

There isn't anything fancy about these except the amount of code that needs to be written in order to get them to work. To keep things simple, I'll post separately on each of these types.

Wednesday, April 16, 2008

Reporting in a Technical Area

Reporting where the project stands and what you've been doing is a standard part of any development project. It is, however, rarely liked, not least because it eats up time.

Personally I track what I do in a TiddlyWiki; anything more seems redundant. The basic features of a wiki make it a great tool for tracking work on a project. As a notebook it is superb, and once you add the cross-references (links) and index (tags) it surpasses any solution I have tried in the past.

However good this is as a personal solution, it doesn't cover the needs of management. The rigidly formatted documentation they expect can be a real pain to work through, as the format constraints often come at the expense of usability.

With that in mind, and suddenly finding myself needing status reports from others as well, some reasonable solution had to be found. Among the things I'm experimenting with is a near-agile test automation development process. From the agile world I've pulled in the cycle and the need to plan what will be done in each cycle.

My original idea was a two-week cycle, but that did not tie in with the classic development processes used outside our group. I've therefore switched to a one-week cycle, which is really too short, but it does keep us in step with the rest of production.

Each cycle has its associated planning. The planning is a simple top-down approach: tasks with sub-tasks. The tasks and sub-tasks are prioritized (high, medium, low) and scoped (short, medium, long), and this information forms the basis of the eventual report.

For now, I've put the planning in an MS Excel file and supported the basic actions with macros. The macros pull the data from the planning cycles into a formatted template. The priority and scoping data is being used to generate an approximation of the actual effort - total and remaining.
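One way to turn the scoping labels into an effort approximation is to map each label to a nominal number of hours and sum them over the tasks. A rough sketch of such a mapping; the weights here are invented placeholders, not the ones actually in use:

' Rough sketch only: translate a scoping label into nominal hours.
' The weights are invented placeholders, not calibrated estimates.
Function ScopeToHours(scope)
    Select Case LCase(Trim(scope))
        Case "short":  ScopeToHours = 2
        Case "medium": ScopeToHours = 8
        Case "long":   ScopeToHours = 24
        Case Else:     ScopeToHours = 0
    End Select
End Function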

This seems to be an effective solution: planning is a good habit, and the least-effort status report positively reinforces it. I just need to put some more thought into the effort estimates and into a few further efficiency options.

Monday, April 14, 2008

Exploring the QTP OR limitation

The notion of being able to use a reference key to access an object is important from a script creation and maintenance perspective. Sure, the basic operations seem to work fine, but it's far less effort to simply change a data file and object map than to touch the code (and doing so also avoids the risks that come with changing code).

In my continued search for a use for the OR, I looked at two possible options. The first was to define the objects within a page with special precursors - like "ed" for an edit field, "ls" for a list field, etc. By using these object names as the data table parameters, I figured that I would instantly have a mapping of sorts.

The first issue I had not accounted for in this structure was the parent: I have the child, but no idea of its parent in this formulation. A simple workaround was to pass the parent object to the function handling the data table processing. Another solution that would work would have been to use the entire parent object structure as the identifier, but I didn't try that since it would be too ugly. Given the parent and the child name it's possible to create the child object using the precursor key. The object definition could be made in the following way:
Select Case Right(objName, 2)
    Case "ed"    ' precursor marking an edit field
        Execute "Set appObj = " & parentObj & ".WebEdit(""" & objName & """)"
End Select
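Wrapped into the data-handling function mentioned above, a sketch might look like this. The helper name, the sheet, and the column conventions are assumptions rather than something out of a finished script; the parent is assumed to arrive as its descriptive string:

' Sketch only: parentObj is the parent's definition string, and objName doubles
' as the data table column name (both conventions are assumptions).
Sub SetFromDataTable(parentObj, objName)
    Dim appObj
    Select Case Right(objName, 2)
        Case "ed"    ' edit field: enter the value
            Execute "Set appObj = " & parentObj & ".WebEdit(""" & objName & """)"
            appObj.Set DataTable(objName, dtGlobalSheet)
        Case "ls"    ' list field: pick the value
            Execute "Set appObj = " & parentObj & ".WebList(""" & objName & """)"
            appObj.Select DataTable(objName, dtGlobalSheet)
    End Select
End Sub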

As a solution this works, but having to track the parent object is not exactly the aim of the exercise. Using the entire object definition as a column header would also work, but it doesn't make for simple data management, not to mention the headache of maintaining the column headers when the application changes.

The next obvious alternative is to define the object with a reference within a dictionary. The reference would be the key and would be used as the parameter in the data table. This is a fairly elegant solution which has the following appearance:
Dim appObjects: Set appObjects = CreateObject("Scripting.Dictionary")
appObjects.Add "myEditField", Browser("A Browser").Page("A Page").WebEdit("myEditField")


A generic means of identifying the type of object, so that the correct action is performed on it, is simple enough to build. It is needed so that when entering a value one can call the correct method, be it Set or Select. One solution is the following:
' appObject is assumed to hold an item pulled from the dictionary above
If InStr(1, appObject.ToString(), "edit", vbTextCompare) > 0 Then
    appObject.Set DataTable(keyString, myDataTable)
End If
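Pulled together, the dictionary approach ends up as a loop of roughly this shape. This is illustrative only; it assumes the data table columns are named after the dictionary keys:

' Illustrative sketch: walk the dictionary keys and, where the data table has a
' value, Set or Select depending on what ToString reports for the object.
Dim key, value
For Each key In appObjects.Keys
    value = DataTable(key, dtGlobalSheet)
    If value <> "" Then
        If InStr(1, appObjects(key).ToString(), "edit", vbTextCompare) > 0 Then
            appObjects(key).Set value
        Else
            appObjects(key).Select value
        End If
    End If
Next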
Ultimately this works. The issue is that I question why I've defined the object in the OR. Does the OR really save effort over using DP?

Redefining the object within a dictionary just to simplify the scripting process doesn't seem logical. If I throw out the OR and use DP, then I have one less place to maintain.

I still don't know why you would use the OR. One more possibility remains to be explored and that is the OR object model. A brief look has left it looking as light on the useful side as the OR itself but I'll give it due consideration in time.

Thursday, April 10, 2008

Using QTP with the OR

Quick Test Professional is only an average tool as far as functional regression test tools go. Ultimately the most reasonable use seems to be to bypass the features that cause the tool to cost what it does and to use the functionality that is less advertised.

One of the so-called great features is the store for the objects that make up the application. The first version I had my hands on was 8.2: it could store objects, but maintaining them was not really part of the feature set. Version 9.2, the next one I got to use, corrected this oversight. Given my earlier experience, though, I didn't immediately explore the changes. Having had a little time, I've now done so, and ultimately I'm not sure it's worth it.

The object repository is the central store of objects within a script or across multiple scripts, with a pretty GUI and, now at least, editing features. Its structure is its downfall. All objects in QTP can only be addressed via the parental structure, viz. Browser().Page().Frame().WebElement(). This is all fine and well, and admittedly makes it fairly simple to reach any element on a page, but it has a major drawback in QTP's design: you cannot access an element without including its parent structure.

Sure, you could argue that this isn't a drawback. The reason I call it a flaw is the simplicity of test case design and scripting that comes from being able to refer to an object by a user key or reference name alone. So let's contrast conventional design with simple design...

In conventional design, much like a recorded script, for each page you address each object individually with the action you want to perform, viz.
Browser().Page().WebEdit(1).Set 1
Browser().Page().WebEdit(2).Set 2
Browser().Page().WebList(1).Select 3


This is not detrimental in itself, but when you have 20-odd fields on a page and 5 pages to cover within a single script it rapidly becomes a sad idea. Since a data table has to be created anyway, and that data table will (or at least should) have column names that correspond to the fields in the application, it's pretty clear that being able to address the OR entry by the column reference would make the entire script simpler and a whole lot easier to maintain, viz.
AListOfColumns = Array("1", "2", "3")
For i = 0 To UBound(AListOfColumns)
    ' each column name doubles as the field's reference key
    PerformActionOn AListOfColumns(i), DataTable(AListOfColumns(i), dtGlobalSheet)
Next


Being able to reference an element in the OR via a reference key in such a way is seemingly impossible. So far the closest I've been able to come is to pass the parent object along with the reference object string to the PerformActionOn function.
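One shape such a PerformActionOn could take is to resolve the child under the supplied parent with a name-only description and then branch on its reported class. This is only a sketch, the parent is the extra argument the ideal version above wouldn't have needed, and it leans on a description rather than the repository entry, which rather underlines the problem:

' Sketch only: resolve the child under the given parent by name, then act on it
' according to its class. The property names used here are assumptions.
Sub PerformActionOn(parentObj, childName, value)
    Dim oDesc, matches
    Set oDesc = Description.Create()
    oDesc("name").Value = childName
    Set matches = parentObj.ChildObjects(oDesc)
    If matches.Count > 0 Then
        Select Case LCase(matches(0).GetROProperty("micclass"))
            Case "webedit": matches(0).Set value
            Case "weblist": matches(0).Select value
        End Select
    End If
End Sub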

The stumbling block seems to be identifying the parent when you only have the child. Yes, it's possible to create an external map from a reference key to the object's entire parent-child relationship, but that defeats the object of this exercise.

Wednesday, April 9, 2008

Manifestations of Word

MS Word 2003 is a departure from sanity.

Most irritating must be the uncontrolled style growth. How do you manage several styles that are all slight variations of the one you intended to use, and would use, if only the style wouldn't keep changing on you? Given that I could use the style settings in Word 97 perfectly well, this "feature" in 2003 has to be a step backwards.

Today's annoyance arose out of simply trying to have table headers repeat across pages... highlight the row, select the option in the Table menu and... nothing. Even after accepting that common sense was an inadequate approach and turning to Google, I was no better off.

It turns out that setting a page-width table to have text wrap around it prevents the table header from being repeated across pages.

LaTeX remains my document preparation tool of choice.