A blog on Software Testing, Quality Engineering, Tools, Conferences

Month: July 2016

CAST 2016 in Vancouver

Yeah, I’m so excited about attending the software testing conference CAST 2016 in Vancouver. Here’s why:

  • AST’s program committee has put together promising keynotes and talks with lots of interesting titles like “Embedded Testers Aren’t Undercover Cops” or “Shifting the Testing Role Pendulum”. Let’s see what that’s all about.
  • The conference is not only about cool topics: the organizers place a special focus on discussing the content and fostering communication among attendees.
  • I registered for a half-day tutorial by James Bach on “What Catalyzes Testing? Testability!” As I have already watched and enjoyed many videos of him talking about software testing (e.g. this, this and this), I’m excited to see him live in action.
  • Vancouver is a beautiful city! The last time I was there, I discovered many wonderful places by bike. This time I plan to explore the surroundings on a motorbike.

For those of you who aren’t lucky enough to attend, I recommend the live stream and video recordings. You will most likely find a conference recap post from me here in a few weeks.

Gherkin in Confluence: Test Documentation for BDD

Even if you are – like me – not a big fan of creating huge, detailed test plans upfront, there will come a time when somebody from outside your project asks what you are actually testing. This has happened to me in various “testing reviews” before. If you are using Behavior Driven Development (BDD) with the Gherkin language – and I recommend you do! – then you can point that person to your test code repository with all its *.feature files. These files describe all the user stories you test in a simple, human-readable format.
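For illustration, such a feature file might look like this (a made-up example, not from my project):

```gherkin
Feature: Shopping cart checkout
  As a customer
  I want to pay for the items in my cart
  So that I receive my order

  Scenario: Successful checkout with a valid credit card
    Given my cart contains 2 items
    When I pay with a valid credit card
    Then I receive an order confirmation
```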

I admit that repository links to a set of feature files might not always be the best way to present your test cases. Depending on your audience, you might want to present the different scenarios at a higher level. Luckily, there are Gherkin parsers available for many programming languages. You could use any of these to read your Gherkin files and produce a nice test documentation from them.
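As a rough illustration of what such a report generator boils down to, here is a minimal sketch in Python (Python rather than the Groovy used later in this post, and all names are made up). It only scans for the `Feature:` and `Scenario:` lines – a deliberate stand-in for a real Gherkin parser, not a replacement for one:

```python
def summarize_feature(text):
    """Extract the feature title and scenario titles from Gherkin source.

    A deliberately minimal stand-in for a real Gherkin parser: it only
    looks at "Feature:" and "Scenario:" lines and ignores steps, tags,
    backgrounds and scenario outlines.
    """
    feature = "Feature description not found"
    scenarios = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("Feature:"):
            feature = stripped
        elif stripped.startswith("Scenario:"):
            scenarios.append(stripped)
    return feature, scenarios
```

A real Gherkin parser would give you the full syntax tree; for a one-page overview, feature and scenario titles are usually enough.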

I worked in a project where people were used to seeing test plans and test documentation in Confluence, a corporate wiki and collaboration tool. That is why I decided to import the feature files from the test code repository, extract the necessary information and render it on a wiki page. The goal was a simple and fast solution, which is why I shied away from developing my own Confluence plugin. Instead, I wanted to use one of the Script Macros available for Confluence. With these macros you can include scripts in Groovy, Gant, Jython, BeanShell and server-based JavaScript (Rhino) in your wiki page. These scripts are executed on the Confluence server before the page is delivered. Unfortunately, this also means that you cannot add your own script libraries – for example, a Gherkin parser for JavaScript – to the server-side execution engine. Groovy fit my needs best, as it provides easy means to make HTTP requests and parse JSON. Finally, I came up with this script:

def stashBaseURL = ""
def stashREST    = "/rest/api/1.0"
def stashProject = "/projects/RTCGW/repos/rtcgw-automated-testing/browse/features/"

// list all files in the features directory of the repository
def responseText = new URL(stashBaseURL + stashREST + stashProject).getText()
def json = new groovy.json.JsonSlurper()
def responseObject = json.parseText(responseText)

def i = 0
print "<ul>"
responseObject.children.values.each { file ->
	if (file.path.extension == "feature") {
		def featureDescription = "Feature description not found"
		def scenarioList = []
		// browse the feature file itself and collect its Feature: and Scenario: lines
		responseText = new URL(stashBaseURL + stashREST + stashProject + file.path.name).getText()
		def fileContent = json.parseText(responseText)
		fileContent.lines.each { line ->
			if (line.text.trim().startsWith("Feature:")) {
				featureDescription = line.text
			} else if (line.text.trim().startsWith("Scenario:")) {
				scenarioList << line.text
			}
		}
		// print the feature as a list item that toggles a hidden
		// sub-list of its scenarios, plus a link back to the file
		def scenarioListId = "cucumberScenarios-" + i
		print "<li><a onclick='\$(\"#" + scenarioListId + "\").slideToggle();'>" + featureDescription +
			"</a> (see <a href='" + stashBaseURL + stashProject + file.path.name +
			"' target='_blank'>" + file.path.name + "</a> for details)</li>"
		print "<ul id='" + scenarioListId + "' style='display:none'>"
		scenarioList.each { scenario -> print "<li>" + scenario + "</li>" }
		print "</ul>"
		i++
	}
}
print "</ul>"

The result in Confluence looks like this: a collapsible list of features, each of which expands to show its scenarios.


Of course, there are multiple things to improve:

  • evaluate or show the tags on features and scenarios – so you can, for example, see which test cases are disabled
  • make the resulting wiki page look better
  • support scenario outlines and backgrounds
  • reduce page delivery times, which grow because of the server-side script execution
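The first item on that list – evaluating tags – could build on the same line-scanning approach. A rough Python sketch (hypothetical names, assuming tag lines directly precede the scenario they annotate, as is usual in Gherkin):

```python
def scenarios_with_tags(text):
    """Pair each "Scenario:" line with the tags (e.g. @disabled) on the
    lines directly above it. A minimal sketch, not a full Gherkin parser:
    steps, backgrounds and scenario outlines are ignored."""
    scenarios = []
    pending_tags = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("@"):
            pending_tags.extend(stripped.split())
        elif stripped.startswith("Scenario:"):
            scenarios.append((stripped, pending_tags))
            pending_tags = []
        elif stripped:  # any other non-empty line ends a tag run
            pending_tags = []
    return scenarios
```

With that information, disabled scenarios could be greyed out or filtered on the wiki page.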

For my project, this simple script got rid of the manual synchronization effort between test code and test documentation. Page delivery is a little sluggish with about 15 feature files of roughly five scenarios each, but people are used to slow Atlassian products. 😉
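If you want to build something similar outside the JVM, the rendering half of the script – turning parsed features into a nested HTML list – can be sketched in Python like this (all names are made up; the collapsible-list JavaScript and repository links from the Groovy version are omitted):

```python
from html import escape

def render_feature_list(features):
    """Render [(feature_title, [scenario_title, ...]), ...] as nested <ul> HTML.

    A minimal sketch of the wiki-page output; the Groovy original
    additionally wires up jQuery slideToggle() handlers and links
    each feature back to the repository.
    """
    parts = ["<ul>"]
    for feature, scenarios in features:
        parts.append("<li>" + escape(feature) + "</li>")
        parts.append("<ul>")
        for scenario in scenarios:
            parts.append("<li>" + escape(scenario) + "</li>")
        parts.append("</ul>")
    parts.append("</ul>")
    return "".join(parts)
```

Unlike the Groovy script, this version escapes the Gherkin text before embedding it in HTML, which is safer if feature titles ever contain angle brackets.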

If you want to follow a similar approach, please let me know. Maybe this sample script can serve as a basis for you.

© 2024 BlueTopTesting
