Tag Archives: automation effort

Why have versioned documentation


Very often we keep our project’s code under a version control system. This has proven to be of real help: it lets the developer observe the changes, remember the reasons behind them and code just for the differences.

The documentation should obey the same rules.

“My taller friend” pointed out that the documentation splits, by functionality, into at least two categories. One describes the characteristics of an entity or process. The second describes the list of checks to be made in order to validate an entity.

By combining those two, the ideal documentation looks as follows: a dynamic part, with unchecked checks, that is included automatically after each clone of the previous version, and a static part that is altered from one version to another.
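The “clone and reset” idea can be sketched in a few lines of code. Assuming, hypothetically, that the dynamic part is kept as a markdown-style checklist, cloning the previous version and unchecking every check could look like this:

```python
import re

def clone_for_next_version(previous_doc: str) -> str:
    """Copy the previous version's documentation and reset its dynamic
    part: every checked item becomes unchecked again. The static part
    is copied verbatim and edited by hand afterwards."""
    return re.sub(r"- \[x\]", "- [ ]", previous_doc)

v1 = "Static: folder layout description\n- [x] each folder has an image\n"
v2 = clone_for_next_version(v1)
```

The static description survives the clone untouched, while the checks start the new version unchecked.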


Real life simplified example:

We have a folder that can contain an arbitrary depth of folders and an arbitrary number of files.

We work hard enough to create a good enough script that validates that each folder has the required image and name, based on the documentation provided by the stakeholder.

We copy our folder onto a new location and add a few folders and files.
Now we have two ways of writing some documentation in order to help the automation:

  1. We work hard enough, again, to parametrize the initial script based on the re-written documentation.
  2. We copy the first script and alter just the changes.
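A minimal sketch of such a validation script, assuming (hypothetically) that the documentation requires every folder to contain an image with a given name, and that this name is the parameter that changes between versions:

```python
import os

def find_folders_missing_image(root: str, required_image: str) -> list:
    """Walk every folder under `root` and collect those that do not
    contain the required image file. `required_image` is the knob we
    parametrize when the documentation changes between versions."""
    missing = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if required_image not in filenames:
            missing.append(dirpath)
    return missing
```

Version 2 of the script is then the version 1 call with a different argument, which is exactly the “alter just the changes” option above.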


The versioned documentation allows the team to adapt faster without over-thinking the technical solution.

I would like to read your opinions,

Page Objects – My friend is not insane

Hello reader,

I am writing this while thinking of a work colleague who is trying to propose the page objects notion. To get a sense of his effort versus its general acceptance, imagine Don Quijote versus the windmills.
As in any other post, we shall start by understanding the problem at hand.


When we write automated tests, in almost any keyword-driven framework, we must implement an action for a selector/locator. This set of key:value pairs lives under a name. The name is specific to an area of a page of the application: for a website it is a webpage, for a desktop program it is the state of a window. From now on, this will be called a “screen”.

This screen contains multiple key:value pairs, one for each specific action. Please note the word “specific” and think about it. So far we have screens with multiple specific key:value pairs.
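As an illustration, a “screen” in such a keyword-driven setup might be nothing more than a named map of specific actions to locators. All the names below are made up:

```python
# Hypothetical "screen": a named set of key:value pairs, where each key
# is a specific action and each value is the locator it operates on.
login_screen = {
    "type_username": "#username",
    "type_password": "#password",
    "click_submit":  "button[type='submit']",
}
```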

On the web, it is very likely that a screen will feature several shared areas at once. Let’s look at this image:


From this image it is obvious that most of the screens will share the “Header”, “Categories” and “Footer” areas, more or less present. Some slight changes occur between the logged-in and anonymous states, but the structure is always the same. The “Dynamic content” is the actual driver of the screen: this area is the reason the users receive the info.

Another problem is duplicated code. Every time we write something twice, there is a 50% chance that on an update we forget about the other copy. There is also a 100% chance of having to do the maintenance work twice.

The last problem refers to test versioning. We like to complain that xxx and yyy changed the locators; however, we do little to avoid it. If the project is heading for a difficult release, it is very likely that an older backup of the codebase will be kept. This is why our tests should be aware of the version they are required to run against.


How do we keep the functionalities grouped in such a way that we do not have duplicated code and we maintain a versioning system?

My answer:

1) We look around the project and draw a map, similar to the one I made earlier. Don’t fall into the trap of going too deep with the granularity. I bet it would take an amount of time unjustifiable to the product manager. This gives us an idea of what is fixed.

2) We create a sketch of the screens and write in that sketch the name of the areas discovered earlier together with their particularities for this step. Those will be our page objects.

3) We add to the sketch the specific actions and validations.

4) In the runner class we receive as many variables as there are page objects and areas. Those variables should start from 1; they represent the version of the scripts.

5) We code each of the areas, taking into consideration the flags that will cover the states and the version.

6) We code each of the page objects taking into consideration the input data and the version.

7) We go out and drink
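Steps 4) to 6) can be sketched as follows. Every class and locator name here is hypothetical; the point is only how the version variable travels from the runner into the areas:

```python
class Header:
    """A shared area, reused by every screen that displays it."""
    def __init__(self, version: int = 1):
        # Hypothetical change: version 2 of the site renamed the login link.
        self.locators = {"login": "#login" if version == 1 else "#sign-in"}

class ProductListScreen:
    """A page object: shared areas plus its own dynamic content."""
    def __init__(self, version: int = 1):
        self.header = Header(version)
        self.locators = {"first_product": ".product-list .item:first-child"}

# The runner passes one version variable per page object/area (step 4).
screen = ProductListScreen(version=2)
```

Because the shared areas are coded once, a locator change touches one class, and the version flag lets old and new tests coexist in the same codebase.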

This method of automation can be applied under any type of framework and has a very good return on investment. It allows the team to maintain the tests as fast as possible.


It allows the tester, or the PM in case of BDD, to ask questions if the requirements skip areas that used to exist before.

For instance, in the mockups a list can hold 3 products without a scroll bar. After we look at the tests, we can point out that the scroll bar is no longer present when there are more than 3 products. This raises a question that eventually gets clarified into a red scroll bar.

Have fun,


Testing framework – EP 2 – What types of automation are there


Now that we have identified that automation is required, let’s have a look at the approaches already available.

  1. Code driven testing
  2. Graphical interface testing (GUI)

Code driven testing involves testing the classes, modules and/or libraries directly.
Graphical interface testing involves emulating keyboard and mouse actions; the output is visible on the screen.

How to choose between one and the other?

This question arises because GUI projects are composed of code, but not all applications have a GUI.

If the project exposes its behaviour through visual feedback for both the configuration data (Admin) and the manipulation feedback (Frontend), and its scope is to facilitate the actions of the visitors, the GUI-only approach is enough.

If the configuration data is updated through a file (.tsv/.csv, etc.), some code driven testing is necessary for the input scenarios.
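For example, a code driven test for such a file-based input can call the import routine directly instead of driving the GUI. The routine below is a made-up stand-in for the project’s real import code:

```python
import csv
import io

def parse_price_row(row):
    """Hypothetical import routine: turn one CSV row into (sku, price)."""
    sku, price = row
    return sku.strip(), float(price)

def test_parse_price_row():
    # Craft an input scenario as a .csv fragment and assert on the result.
    data = io.StringIO("SKU-1,19.99\n")
    row = next(csv.reader(data))
    assert parse_price_row(row) == ("SKU-1", 19.99)

test_parse_price_row()
```

Such a test exercises the input scenario without any screen being rendered, which is exactly what the GUI-only approach cannot cover.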

If the whole project is an API, for instance, or its scope is to connect two systems, code driven testing is sufficient.

Based on this article we know for sure which way we should head with our automation process.

Testing framework – EP 1 – Why have automation


The purpose of this series is to come up with a definition of what a “testing framework” is, based on today’s needs and standards. The project in scope is a big custom Magento implementation.

The release methodology is Agile and the project is split into two teams. One team handles the issues and the client’s needs; the other, the new functionalities. The release cycle lasts about 2 weeks, rarely more, rarely less. If special events occur, there are hotfixes in between; generally they are avoided by both the team and the client.

The QA (Quality Assurance) role is to inspect the requirements and provide feedback, write acceptance criteria, write test cases, conduct UAT (User Acceptance Testing) and performance testing, report issues and maintain a healthy build. All this, if possible, should have been done by yesterday evening.

So far so good, pretty much a standard situation in most of the teams that feel the need for automation.

The challenge is to deliver the same amount of work as before in a shorter period of time, in order to accommodate the new features, if possible without cutting corners.

  • One solution would be to get more QA resources, but this does not fix or improve the process, this just enables a wider bandwidth at the expense of money.
  • Another solution would be to slightly increase the release time or lower the number of tickets. This has a monetary impact on the client and, unless quality is a problem, it is unlikely to be accepted.
  • A third option is to implement an automation framework which absorbs some of the tasks allowing the tester and the team to focus on delivering.



The automation framework is the extra kick the team needs, as soon as possible, in order to deliver better quality in the same amount of time, over a long period of time, with a marginal cost increase, while providing adequate documentation.