Category Archives: Development and QA

Spring Boot, Docker integration with IntelliJ and Windows

In this post I’ll give a more detailed guide for the Spring Boot and Docker initiative.

The final goal I had in mind was to have a Docker container that runs the packaged Java application in a clean environment.
Support for remote debugging is included in the configuration, and both the application and debug listening ports are mapped/forwarded between Windows and the Docker machine. So in the end you can access the application as if it were running on your "localhost" 😀

Pre-requisites:
– Docker Toolbox for Windows (with VirtualBox) Download link
– A Spring Boot or any Java project with Maven support will suit the purpose Getting Started with Spring Boot
– IntelliJ Docker plugin installed

After installing the Docker Toolbox (next > next > next…) we can jump right into creating a simple HelloWorld Spring Boot Web Starter project.
To do this in the fastest way, create a New Project > Maven and inside the pom.xml add the following parent and dependency:

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.3.3.RELEASE</version>
    </parent>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>

Next, add a “hello” package and a java class that will act as our Rest Controller (HelloController.java) in which we map the default “/” context path to a method that returns a default “Hello” String.

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    @RequestMapping("/")
    public String index() {
        return "This is Spring Boot with Docker!!";
    }

}

Create another class with a main method that will be the entry point to our application. The @SpringBootApplication annotation makes everything smooth :)

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}

This is my trimmed version of the “Getting Started with Spring Boot” from the official site.
You can go ahead and run the "mvn package" command to generate the application's executable .jar file.

Now for the Docker part, let's start with installing the IntelliJ "Docker integration" plugin. This makes communicating with Docker easier and saves some time, because you won't have to keep typing "docker build" … "docker run" commands in a separate console.

In the Maven pom.xml file, you’ll need to add the following docker-maven-plugin along with a few very important configuration lines.

            <plugin>
                <groupId>com.spotify</groupId>
                <artifactId>docker-maven-plugin</artifactId>
                <version>0.4.5</version>
                <configuration>
                    <dockerHost>https://192.168.99.100:2376</dockerHost>
                    <imageName>spring</imageName>
                    <baseImage>java</baseImage>
                    <env>
                        <server.port>8888</server.port>
                        <java.security.egd>file:/dev/./urandom</java.security.egd>
                    </env>
                    <exposes>
                        <expose>8888</expose>
                        <expose>5005</expose>
                    </exposes>
                    <entryPoint>["java", "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005", "-jar", "/${project.build.finalName}.jar"]
                    </entryPoint>
                    <resources>
                        <resource>
                            <targetPath>/</targetPath>
                            <directory>${project.build.directory}</directory>
                            <include>${project.build.finalName}.jar</include>
                        </resource>
                    </resources>
                </configuration>
            </plugin>

In the configuration we are able to specify the baseImage and imageName along with the expose and entrypoint parameters which are very common in a Dockerfile. In our case, we won’t be needing one at all, because the maven plugin will do all the work for us!

The environment variables are used to set server.port to 8888 instead of the default 8080 (yes, I'm still using Skype…).
The ports also need to be exposed from the Docker container, so both the server port (8888) and the remote debugging port (5005) are added.
The entryPoint is basically the whole java command used to run the server, in our case the resulting ".jar" file produced by mvn package, with some extra arguments to enable remote debugging.

In order to make use of this plugin we need to run the "docker:build" goal from the Maven Plugins section.
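If you prefer the command line to the Maven tool window, the same goal can be run directly from the project root (a quick equivalent, assuming the plugin configuration above):

mvn clean package docker:build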

[Image: Docker build via mvn]
After it’s finished we will have a new “spring” image derived from the default Java one. Now here comes the Docker and IntelliJ integration part.

[Image: Docker plugin]

We will go ahead and create a new Docker Deployment configuration as follows.
[Image: Docker Run configuration]
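As a reference point, the same container could also be started by hand from the Docker Machine shell; a rough equivalent of the run configuration above (assuming the "spring" image name from the plugin configuration) would be:

docker run -d --name spring -p 8888:8888 -p 5005:5005 spring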

We can then press Play and our container will be started along with the application server. In order to be able to access the server from Windows, we need to forward ports 8888 and 5005 from the Docker machine to the Windows host (this isn't necessary if you're a Linux guy, because the Docker host is running natively).

To do this we can use the following commands from an “elevated windows command prompt” (just run it as Administrator):

netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=8888 connectaddress=192.168.99.100 connectport=8888
netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=5005 connectaddress=192.168.99.100 connectport=5005

Everything is set and we can now access our application deployed inside the Docker container from Windows: http://localhost:8888
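As a quick smoke test, hitting the forwarded port (from a browser, or with curl if you have it installed) should return the message from our HelloController:

curl http://localhost:8888/

which should print "This is Spring Boot with Docker!!".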

You can find the code here: SpringDocker Project

Hope you enjoyed this tutorial,
-M.

Soap WebService testing using Stub generation

In this tutorial I will show an easy way to generate the stubs needed to call SOAP web service methods from Java (or, in this tutorial, from Groovy).

The SOAP webservice that I have in mind for this demo is the free Global Weather WebService and we will be needing its WSDL:

http://www.webservicex.net/globalweather.asmx?WSDL

We'll now go ahead and create a new Maven project in IntelliJ IDEA 15, then use Add Framework Support and check the WebServices client option.
You can also find a link to the GitHub project at the end of this post.

Before generating the stubs, we need a few dependencies to be added in the pom.xml

<dependencies>
        <dependency>
            <groupId>org.apache.axis</groupId>
            <artifactId>axis</artifactId>
            <version>1.4</version>
        </dependency>

        <dependency>
            <groupId>axis</groupId>
            <artifactId>axis-wsdl4j</artifactId>
            <version>1.5.1</version>
        </dependency>

        <dependency>
            <groupId>axis</groupId>
            <artifactId>axis-jaxrpc</artifactId>
            <version>1.4</version>
        </dependency>

        <dependency>
            <groupId>commons-discovery</groupId>
            <artifactId>commons-discovery</artifactId>
            <version>0.5</version>
        </dependency>

        <dependency>
            <groupId>org.codehaus.groovy</groupId>
            <artifactId>groovy-all</artifactId>
            <version>2.4.5</version>
        </dependency>
    </dependencies>

What we need to do now is right-click on a folder in our project and select the last option: WebServices -> Generate Java Code From Wsdl.

[Image: IDEA]

After the Stub generation is finished, we will find the following files inside the stubs package

[Image: IDEA project structure]

We are aiming to be able to call the methods of the global webservice:

public String getWeather(String cityName, String countryName)
public String getCitiesByCountry(String countryName)

but in order to do that, we need to get ourselves an instance of GlobalWeatherLocator and, furthermore, access the GlobalWeatherSoap_PortType object which has the 2 methods that we're interested in.
A quick note here: don't be fooled by the "String" return type of these 2 methods; the String is actually … XML, so we're going to need an XmlSlurper to parse it and perform asserts.

Using Groovy, this code will look something like this:

import stubs.GlobalWeather
import stubs.GlobalWeatherLocator
import stubs.GlobalWeatherSoap_PortType


GlobalWeather globalWeather = new GlobalWeatherLocator()
GlobalWeatherSoap_PortType globalWeatherSoap = globalWeather.getGlobalWeatherSoap()

def xmlResponse = globalWeatherSoap.getWeather("Bucuresti", "Romania")

def parsedXML = new XmlSlurper().parseText(xmlResponse)
assert parsedXML.Status == "Success"
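The same generated stubs can of course be used from plain Java too. Here is a minimal sketch for the second method, getCitiesByCountry (the class name is mine; it assumes the same stubs package and leans on a bare throws clause instead of proper exception handling):

import stubs.GlobalWeather;
import stubs.GlobalWeatherLocator;
import stubs.GlobalWeatherSoap_PortType;

public class GlobalWeatherClient {

    public static void main(String[] args) throws Exception {
        // the Locator is the generated entry point to the service
        GlobalWeather globalWeather = new GlobalWeatherLocator();
        GlobalWeatherSoap_PortType globalWeatherSoap = globalWeather.getGlobalWeatherSoap();

        // just like getWeather(), the "String" result is actually an XML document
        String citiesXml = globalWeatherSoap.getCitiesByCountry("Romania");
        System.out.println(citiesXml);
    }
}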

The code for the project can be found on GitHub – WebServiceClientStubs

Hope you’ve enjoyed generating some stubs.

Selenium TestNG – Customizing tests at runtime with IAnnotationTransformer

Recently I had a request to make it possible to choose which @Test to run from an external file that is human readable and easily configurable, so testng.xml was out of the question.

My first thought was: this should be easy, we can just pass a true or false value from an external file, to the “enabled” test method attribute. This approach quickly failed since the annotation’s attributes seem to accept only constants, so for “enabled” it’s either true or false, not a “superposition” of both :)

So I started doing a little digging and ultimately found the IAnnotationTransformer interface:

public interface IAnnotationTransformer {
    public void transform(ITestAnnotation annotation, Class testClass,
                          Constructor testConstructor, Method testMethod);
}

The transform() method will be executed before any of the annotated methods/classes in our code and when we implement it we have the opportunity to use setters to alter the annotation attributes.

Let’s take the following example with 3 tests:

package testthis.selenium;

import org.testng.annotations.Test;

public class SelectTestToRun {

    @Test(testName = "test1", enabled = false)
    public void testing() {
        System.out.println("test1 run");
    }

    @Test(testName = "test2", enabled = false)
    public void testing2() {
        System.out.println("test2 run");
    }

    @Test(testName = "test3", enabled = false)
    public void testing3() {
        System.out.println("test3 run");
    }
}

As you can see we initialize the tests with the enabled attribute set to false and also give them a testName. In order to have a test running, we will have to set it to true from the transform() method.

package testthis.selenium;

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import java.util.Properties;

import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

public class AnnotationTest implements IAnnotationTransformer {

    Properties props = new Properties();

    public void transform(ITestAnnotation annotation, Class testClass,
                          Constructor testConstructor, Method testMethod) {
        try {
            props.load(new FileInputStream("configure.properties"));
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        if (props.values().contains(annotation.getTestName())) {
            annotation.setEnabled(true);
        }
    }
}

Now here was the tricky part: we needed a way to differentiate between our test methods, because calling annotation.setEnabled(true) unconditionally would cause all the @Test methods to run.
So we check whether any of the values read from the properties file match the method's testName. If so, that @Test has its enabled attribute set to "true".

To make this easier, we name the keys inside the properties file after the testNames, and if a key's value also matches the testName, then that test is run.

key=value
test1=test
test2=test2
test3=test3

For this to work, we have to run it as a TestNG suite from testng.xml and also register a listener class, in our case the fully qualified name of the AnnotationTest class (see the snippet below).
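A minimal testng.xml for this setup could look roughly like the sketch below (adjust the class names to your own packages):

<suite name="Suite">
    <listeners>
        <listener class-name="testthis.selenium.AnnotationTest"/>
    </listeners>
    <test name="SelectTestToRun">
        <classes>
            <class name="testthis.selenium.SelectTestToRun"/>
        </classes>
    </test>
</suite>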

Now let’s have a look at the results:

[TestNG] Running:
C:\Users\mdima\workspace\SeleniumTest\testng.xml
test2 run
test3 run
===============================================
Suite
Total tests run: 2, Failures: 0, Skips: 0
===============================================

Since we passed something else rather than the testName of the first test, test1() method was not run.

So we can now easily customize which tests to run from the properties file, without touching the @Test methods, as long as we're running through testng.xml; this also makes it useful with the maven-surefire-plugin in a Maven project.
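For a Maven project, pointing the maven-surefire-plugin at the same testng.xml would look something like this (a sketch, not taken from the project):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.19.1</version>
    <configuration>
        <suiteXmlFiles>
            <suiteXmlFile>testng.xml</suiteXmlFile>
        </suiteXmlFiles>
    </configuration>
</plugin>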

Enjoy TestNG customization,
-M.

Test execution priority in Selenium TestNG

Hello everyone,

A few days ago I was taking a look over a class which contained many @Test methods (from org.testng.annotations, of course :) ) and whenever it was running, I was getting a different order of the tests in the results summary.

So this got me thinking and a quick solution came to mind: test dependency (dependsOnMethods / dependsOnGroups), but this would imply that if the initial test fails, the execution would stop right then and there. I then found out about test method priority, which seems to be exactly what I was looking for and is very easy to use.

Given the following example let’s see how we can obtain a desired order of execution: A > B > C

    @Test(priority = 1)
    public void testC() {
        System.out.println("Test C");
    }

    @Test(priority = 2)
    public void testA() {
        System.out.println("Test A");
    }

    @Test
    public void testB() {
        System.out.println("Test B");
    }

In the above case, we get this order of execution: B > C > A, which isn't exactly what we wanted. Let's see how we can change that.

A few things need to be mentioned about test method priorities:

  • lowest priority (can also be negative) will be executed first, thus -5 will run before priority 1
  • when no priority is specified, the default priority is 0, in our example, testB has the default priority

Since we're aiming to obtain the A -> B -> C flow, we should use the lowest priority for testA and set B and C to 1 and 2 respectively:

    @Test(priority = 2)
    public void testC() {
        System.out.println("Test C");
    }

    @Test
    public void testA() {
        System.out.println("Test A");
    }

    @Test(priority = 1)
    public void testB() {
        System.out.println("Test B");
    }
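
With these priorities (testA defaulting to 0, testB set to 1 and testC set to 2) the console output should now follow the desired order:

Test A
Test B
Test C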

SoapUI – Parameterizing Requests in the free version

Hello,

This post will show how to send values from an external file to specific WSDL tags as arguments. In this way, if you have to perform combinatorial testing with, let's say, 50 different requests, instead of creating individual SOAP requests you'll only need one request and a (rather big) test data file.

For this example we’ll be reading the testdata from an Excel file so you should first make sure to place the Apache POI jar file in the “SoapUI\bin\ext” folder. This is the content of the Excel file:

City Name       Zip
New York        10001
not found       33333
Minneapolis     55555
BeverlyHills    90210

 

Now open SoapUI and create a new project using a WSDL file such as the famous weather one:  http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL

We're now going to Generate a TestSuite by right-clicking the WeatherSoap interface and selecting the GetCityForecastByZip operation. In the TestCase, open TestSteps and add a new step of type "Properties", name it something like "property-loop", open it and insert a new property named "Zip". Now open the SOAP request (GetCityForecastByZip), right-click on the question mark "?" and select GetData > property-loop > Zip. It should look something like this:

[Image: SoapUI property mapping]

We have the request mapped to the Zip property value, but now we also need to pass the data from the Excel file to the property. We can do this by using a Groovy script, so let's get groovy 😀

Add a new test step, select Groovy Script and insert the following code for reading an Excel file:


import org.apache.poi.hssf.usermodel.HSSFCell;
import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import java.io.*

File excel = new File("D:/data.xls"); // path to the Excel file

FileInputStream fis = new FileInputStream(excel);
HSSFWorkbook wb = new HSSFWorkbook(fis);
HSSFSheet ws = wb.getSheetAt(0); // get the first Sheet of the Excel file

int rowNum = ws.getLastRowNum() + 1;
int colNum = ws.getRow(0).getLastCellNum();
propTestStep = context.testCase.getTestStepByName("property-loop") // get the Property step (ours is named "property-loop")

    for (int i = 1; i < rowNum; i++) {
        HSSFRow row = ws.getRow(i);
        for (int j = 1; j < colNum; j++) {
            HSSFCell cell = row.getCell(j);
            String value = cellToString(cell);

            log.info ("The value is " + value);
            propTestStep.setPropertyValue("Zip", cellToString(row.getCell(1))) //set the value of "Zip" property equal to Excel's column B ( getCell(1) )

        }
        testRunner.runTestStepByName("GetCityForecastByZIP"); //we're going to run the Soap Request after each iteration of the Excel's rows.
    }

    //a helper method to convert the Cell's value to String
    public static String cellToString(HSSFCell cell) {
        int type;
        Object result;
        type= cell.getCellType();

        switch (type) {
            case 0 : //numeric value in Excel
                result = String.valueOf((int)cell.getNumericCellValue());
                break;
            case 1 : //String value in Excel
                result = cell.getStringCellValue();
                break;
            default:
                result = cell.getCellType();
                throw new RuntimeException("There is no reader for this type of Excel data");
        }
        return result.toString();
    }

So what the above script does is, for each row in the Excel file, pass the value from column B to the property named "Zip" and then run our SOAP request.

We can also disable the Properties step and the SOAP request step so that our TestCase only executes the Groovy script, which calls the request using testRunner.runTestStepByName("GetCityForecastByZIP").

Hope you’ll find this helpful,
-M.

Why having versioned documentation

Heya,

very often we have our project's code under a versioning system. This has proven to be of real help. It helps the developer observe the changes, remember the reason behind them and code just for the differences.

The documentation should obey the same rules.

"My taller friend" pointed out that documentation is split, by functionality, into at least two categories. One would describe the characteristics of an entity or process. The second would describe a list of checks to be made in order to validate an entity.

By combining those two, we can look at the ideal documentation as follows: a dynamic part, with unchecked checks, that is included automatically after each clone of the previous version, and a static part that is altered from one version to another.

 

Real life simplified example:

We have a folder that can contain an infinite depth of folders and an infinite number of files.

We work hard enough to create a good enough script that validates that each folder has the required image and name, based on the documentation provided by the stakeholder.

We copy our folder onto a new location and add a few folders and files.
Now we have two ways of writing some documentation in order to help the automation:

We work hard enough, again, to parametrize the initial script based on the re-written documentation.
OR
We copy the first script and just alter the changes.

 

The versioned documentation allows the team to adapt faster without over-thinking the technical solution.

I would like to read your opinions,
Gabi

To Scrum or not to SCRUM – this is the question

Hello,

one of the most used words today is “scrum”. All caps or not, it does not really matter. The baseline is that it is not an acronym.

Now, after working in this framework for a while, with various better or worse implementations, I came to the conclusion that up to a point it is better to have the QA in charge of the "Scrum master" role. Here's why:

  • In the early stages of the process, QA usually writes the acceptance criteria which get signed by the client
  • During the development QA must ease the communication between the developers and the product owner

Why the first?

Because they are the ones who, later on in the process, must verify that the deliverable covers all the expectations. Otherwise the clients could get an image of a website while the team strongly states that it looks exactly the same. Looking the same doesn't mean that it functions, or, if it does, that it functions well.

Why the second? Why can’t a developer handle the task?

The QA person is the one handling the status of the tickets once they are considered complete and in testing. The developer must dig for this information, while the QA can't avoid it. So the dev can do it, but it is extra work.

Why not have the Product Owner in charge of it?

1) If the client is "many" and "decision-challenged": the product owner brings way more to the project by just listening to the client and filtering the information for the team. This removes most of the noise. Alongside them, the QA delivers the required client-facing information.

2) If the client is "one" and "decision-challenged": the product owner should filter the incoming information and extract the client-facing information on their own. In this case I do not consider that the QA should handle the "Scrum master" role. However, in a team of 2 complementary senior QA resources, the more soft-skills-oriented one can cover this role as well. He/she already has visibility regarding the team's workload, and there shouldn't be 6 hours of decisions every day.

3) If the client is "one" and the decisions don't change: this is quite ideal. Once the acceptance criteria are agreed upon, there is no need for a product owner altogether. The team should be productivity driven, with the final goal always visible.

Personal opinion:

Since Scrum is founded on the premise of a holistic approach (treat the team as a whole, not as individuals) and the client is both the input and the output, the QA should be the filling and the Product Owner should be the shield around it.

Please disagree and let’s have a chat.

Gabi

Selenium DDT with TestNG and DataProvider

Hi there,

Of course there are many ways to do Data Driven Testing, but if you're familiar with Java or OOP in general, you're going to like this approach with TestNG and DataProvider.

First let's give a short intro to @DataProvider; this annotation takes a name and marks a method that returns either an Object[][] or an Iterator<Object[]>.

For the scope of this article, we will try to log in multiple times on a particular site, such as gmail.com, with the username and password combinations provided in an "input.txt" file:


user1, pass1;
user2, pass2;
user3, pass3;

Now let's see the code for our custom readFile() method and review it based on the particularities of the above input.txt file:


    public static Object[][] readFile(File file) throws IOException {
        String text = FileUtils.readFileToString(file);
        String[] row = text.split(";");

        int rowNum = row.length;
        int colNum = row[0].split(",").length;

        Object[][] data = new String[rowNum][colNum];

        for (int i = 0; i < rowNum; i++) {
            String[] cols = row[i].split(",");
            for (int j = 0; j < colNum; j++) {
                data[i][j] = cols[j].trim();
                System.out.println("The value is " + cols[j].trim());
            }
        }
        return data;
    }

So as you can see, we're passing 1 argument to our readFile() method: the file that needs to be read. We're then using the Apache Commons IO FileUtils class to read our .txt file, so make sure you have that imported (seems that our "custom" readFile() method is not so custom after all :) )

We’re storing all the read content inside a string and then we’re going to do a split (a fabulous one) by the semicolon (“;”) character since that seems to be separating each individual row.

Now, to find out the number of "columns", we can do another split by the comma character (",") since it separates each username from its password. Of course you can use any other character for separation and then split by it (e.g. a pipe | or a tilde ~ or anything else).

The next step creates an array of arrays of Objects called "data", and it will have the size data[rows][columns] that we obtained earlier.

What's left is to write 2 for loops to iterate through the rows and then the columns, assigning each read value, trimmed of whitespace, to the data[][] array.

In the end our method will return the “data” Object[][].

So we’re now going to use this method for the @DataProvider annotation:


    @DataProvider(name = "text")
    public static Object[][] readFile() throws IOException {
        File file = new File("input.txt");
        ReadText txt = new ReadText();
        Object[][] returnObjArray = txt.readFile(file);
        return returnObjArray;
    }

What's left to do is to pass this @DataProvider to a @Test method that makes use of its 2 parameters, and since we said we're going to try to log in to Gmail, we're gonna do just that:


    @Test(dataProvider = "text")
    public void loginGmail(String user, String pass) {
        driver = new FirefoxDriver();

        driver.manage().timeouts().implicitlyWait(7, TimeUnit.SECONDS);
        driver.get("http://www.gmail.com");
        WebElement username = driver.findElement(By.id("Email"));
        username.sendKeys(user);

        WebElement password = driver.findElement(By.id("Passwd"));
        password.sendKeys(pass);

        WebElement loginButton = driver.findElement(By.id("signIn"));
        loginButton.click();

        Assert.assertTrue(driver.findElement(By.cssSelector("a[title*='Inbox']")).isDisplayed());
    }

Notice how the dataProvider name in the @Test annotation corresponds with the name of the @DataProvider, and how the test method "loginGmail" accepts 2 String arguments, which correspond to a row of parameters from our input.txt file.

In the end our test will be executed 3 times, because there are 3 rows of username+password combinations passed from the file.

Enjoy DDT,

-M.

Page Objects – My friend is not insane

Hello reader,

I am writing this as I am thinking of a work colleague who is trying to promote the page objects notion. To get a grip on his effort versus the general acceptance of it, imagine Don Quijote versus the windmills.
As in any other post we shall start by understanding the problem at hand.

Baseline:

When we write automated tests, in almost any keyword driven framework, we must implement an action for a selector/locator. This set of key:value will be under a name. This name is specific for an area of a page of the application. If it is a website it will be a webpage, if it is a software program it will be the state of a window. From now on, this will be called a “screen”.

This screen contains multiple key:values, one for each specific action. Please note the “specific” word and think about it. So far we have screens with multiple specific key:values.

On the web, it is very likely that a screen will feature several page areas at once. Let's look at this image:

[Image: website layout]

From this image it is obvious that most of the screens will have the "Header", "Categories" and the "Footer" more or less present. Some slight changes will occur between the logged-in and anonymous states, but otherwise they are always the same. The "Dynamic content" is the actual driver of the screen. This area is the reason why the users are receiving the info.

Another problem is related to duplicated code. Every time we write something twice there is a 50% chance that on update we forget about the other piece. Also, there is a 100% chance of having to do the maintenance work twice.

The last problem refers to test versioning. We like to complain that xxx and yyy changed the locators; however we do little to avoid it. If the project is aiming for a difficult release, it is very likely that an older backup of the codebase will be kept. This is why our tests should be aware of the version they are required to run against.

Problem:

How do we keep the functionalities grouped in such a way that we do not have duplicated code and we maintain a versioning system?

My answer:

1) We look around the project and draw a map, similar to the one I made earlier. Don't fall into the trap of going too deep with the granularity. I bet that it would take an amount of time unjustifiable to the product manager. This will give us an idea of what is fixed.

2) We create a sketch of the screens and write in that sketch the name of the areas discovered earlier together with their particularities for this step. Those will be our page objects.

3) We add to the sketch the specific actions and validations.

4) In the runner class we receive as many variables as page objects and areas there are. Those variables should start from 1. They represent the version of the scripts.

5) We code each piece of the areas and take into consideration the flags that will cover the states and the version.

6) We code each of the page objects taking into consideration the input data and the version.

7) We go out and drink

This method of automation can be applied under any type of framework and has a very good return on investment. It allows the team to maintain the tests as fast as possible.
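To make this a bit more concrete, here is a minimal sketch in Java/Selenium of what one such area object could look like; the class name, locators and version handling are hypothetical, purely to illustrate the grouping and versioning described above:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// One "area" of a screen: the header, shared by most screens.
public class HeaderArea {

    private final WebDriver driver;
    private final int version; // version of the scripts/selectors to use

    public HeaderArea(WebDriver driver, int version) {
        this.driver = driver;
        this.version = version;
    }

    // A specific action for this area; the locator depends on the version flag.
    public void openLoginForm() {
        By loginLink = (version >= 2)
                ? By.cssSelector("header .account a.login")   // hypothetical v2 locator
                : By.linkText("Log In");                       // hypothetical v1 locator
        driver.findElement(loginLink).click();
    }

    // A specific validation for this area.
    public boolean isUserLoggedIn() {
        return !driver.findElements(By.cssSelector("header .account .logged-in")).isEmpty();
    }
}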

Bonus:

It allows the tester, or the PM in case of BDD, to ask questions if the requirements skip areas that used to exist before.

For instance, in the mockups a list can hold 3 products without a scroll bar. After we look at the tests we can point out that the scroll bar is no longer present when there are more than 3 products. This will raise a question that will get clarified into a red scroll bar.

Have fun,
Gabi

 

Behat – External selectors file – Definition of useless or genius

Hello,

Before we start, thank you Mario for listening to my idea and coming up with a better one :).

Last night I finished implementing a feature in Behat that I have mixed feelings about. It allows the team to store the selectors in an external file. Now, this sounded great at first and I did not vouch against it. Mostly, I was curious how it could be done.

I am saying it is a bad idea because sending parameters to the FeatureContext constructor can already be done in several different ways:

  • an array of parameters via the behat.yml that can be extended through import to include a different file
  • a multi-dimensional array via the Scenario Outline and the Examples table
  • a normal file include in FeatureContext

However, none of those actually inserts the values into the scenarios at runtime, replacing the keywords. This is when I get to say that it is genius.

But again, this should not be used in the first place. The whole purpose of BDD (in this context) is to be a tool that provides documentation for the stakeholders, replacing a test management tool. Otherwise we should not have used Gherkin to begin with. But what if the target is the QA person? If so, it makes sense. However, we are testing a framework built on top of Magento that we implement for the clients. Now it goes back to being a bad idea: the clients will not understand jack from our tests. On the other hand, since we share some of the code base but implement custom functionalities on top of it, we want to maintain our selectors and values in a decoupled spot and not work on the code all the time. But the .feature files are quite decoupled as they are. Uhmm… reasons, reasons.

I will let you, the reader, meditate upon using it or not, and if you do use the code, please drop a comment explaining why. Thank you in advance, unknown friend.

We will start with a top down overview of the implementation. The xxxx.feature file looks like this:


  Scenario Outline: Invalid user login
    Given I am on homepage
    And I follow "Log In"
    And I fill in "email" with "<__email__>"
    And I fill in "pass" with "<__password__>"
    And I press "<__button__>"
    Then I should see "<__messageBody__>"

  Examples:
    |  |
    |  |

The current implementation needs to have the empty table at the end in order for Behat to generate an array at runtime. Probably this can be fixed in the code. The <> are regular placeholders; they are substituted with the values from the Examples table at runtime. The "__" (double underscore) is used in order to ensure some kind of differentiation between our keys and any values already existing in the table.

In FeatureContext.php I have created this method that will be loaded on the @BeforeFeature hook:


    /**
     * @BeforeFeature
     */

    public static function prepare(\Behat\Behat\Event\FeatureEvent $event)
    {
      $feature = $event->getFeature();
      $exampleLoader = new ExamplesLoader();
      $exampleLoader->replaceExamples($feature);
    }

The class for this is in a file called ExamplesLoader.php, located in the Bootstrap folder. It does not have to be loaded explicitly, because Behat automatically loads all the classes from that folder. Since the logic is in here, I will post it piece by piece, based on functionality.

This will iterate through our scenarios and, for each scenario, get the examples. It will return an array with the number of elements equal to the number of rows in the Examples table. In the current implementation it works with two.


foreach ($feature->getScenarios() as $scenario) {
            $examples = $scenario->getExamples();
// all the other pieces of code will go in here. Leave it blank.
}

This piece will glue together the current working directory (getcwd()) and the name of the file where the selectors/locators exist. They are joined by DIRECTORY_SEPARATOR so it works on every operating system. Please note that the working directory is where behat.yml is located, not where the current file exists. The resulting string is stored in the $filePath variable.
The $holder will contain a bi-dimensional array read from the .tsv file. If you want to read a file with a different separator, please check the PHP documentation for fgetcsv(); the third argument is the separator. Also, if the Examples table is longer, an iteration inside $holder[$row] is required because we want to have data for all the rows, not just two.


$filePath = join(DIRECTORY_SEPARATOR, array(getcwd(), 'locatorsFile.tsv'));
$holder = array();
$row=0;
if (($handle = fopen($filePath, "r")) !== FALSE) {
    while (($data = fgetcsv($handle, 1000, "\t")) !== FALSE) {
        //if you are thinking that it would be better to iterate over many elements,
        //don't later you will use only key:value
        $holder[$row]=array($data[0],$data[1]);
        $row++;
    }
    fclose($handle);
}

The $rows variable, created by Behat, stores all the values of the Examples table. Each element of this array is an array of what sits between two | (pipe) characters on that specific row of the Examples table. Basically, this is where we want to add our keys and values, because after we insert them the framework will handle all the logic that follows. The setRows($rows) method locks this table in place for test creation.


// Add our global examples
foreach($holder as $value){
    $rows[0][] = $value[0];
    $rows[1][] = $value[1];
}
//and we send the data to the examples table
$examples->setRows($rows);

Now our table will include all the data from locatorsFile.tsv. Here's how that file looks on the inside:


__button__  Send
__messageBody__ Invalid login or password.
__email__   asdkjasdj@askdjaskjd.com
__password__    asdadasd

Have a nice day,
Bye bye!