Get On My Back for a Piggyback Ride

What is our theory that we have, that is to say, which is ours, is ours? You may well ask what our theory is. This theory goes as follows, and begins now. Localization testing is thin at one end, much, much thicker in the middle, and then thin again at the far end.

Blue pill, red pill

Software testing is hard manual labour: there are many opportunities for streamlining and automation, but some aspects of the work require hours of human input, and not of the most thrilling nature either. Basically, there are two avenues one can take, depending on the intended quality level.

Power to the people

No matter how we dress up the task, manual testing is usually a whopping drag – unless you turn it into a game. Gamifying the process not only dispels the looming threat of boredom, but also helps distribute the workload. Crowdsourcing screenshooting and dialog testing is not only much cheaper, but a crowd can also chew through considerably more work in one sitting. A crowdsourced setup can include achievements for the snappiest, most productive people, or for those who make the fewest mistakes. Unfortunately, the ratio of fun to quality is constant: if you want the result to look a spectacle and a half, crowdsourcing may not be your best option.
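To make the achievement idea concrete, here is a toy sketch of how such a leaderboard might score contributors. The contributor records, field names and scoring formula are all invented for illustration – the post does not describe an actual scoring scheme.

```python
# Toy leaderboard for a crowdsourced testing round (illustrative only).
# Each record is hypothetical: screens processed, errors made, minutes spent.

def score(record):
    """Reward throughput and accuracy: screens per hour, penalized by mistakes."""
    screens_per_hour = record["screens"] / (record["minutes"] / 60)
    accuracy = 1 - record["errors"] / record["screens"]
    return round(screens_per_hour * accuracy, 1)

def leaderboard(records):
    return sorted(records, key=score, reverse=True)

contributors = [
    {"name": "Ana",  "screens": 120, "errors": 6,  "minutes": 240},
    {"name": "Béla", "screens": 150, "errors": 30, "minutes": 240},
    {"name": "Cleo", "screens": 100, "errors": 1,  "minutes": 180},
]
for rank, rec in enumerate(leaderboard(contributors), 1):
    print(rank, rec["name"], score(rec))
```

Note how the careful contributor can outrank the fastest one – exactly the trade-off between speed and quality the paragraph above describes.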

Chamber music

The other option is to assign the work to a select few professionals whose performance is guaranteed. While this approach deflates the testing experience, it represents the high-water mark of quality today. In the not so distant future, algorithmic analysis and heuristic identification of localization issues may provide results just as useful as static code analysis does.

In the current post, let’s see how espell plays chamber music through a case study.

Shape of a Brontosaurus

(Disclaimer: This is not an official sighting of a brontosaurus.)

According to our theory, software testing shapes up like many things: a hat, an elephant, or even a dinosaur. Basically, the curve of the process can be divided into three stages:

  • Preparation and setting up a testing environment
  • Testing that constitutes the bulk of the work
  • Post-testing corrections and implementation

As you will read in the following study, these segments can be optimized heavily by

  • Finding pause points in the project and running independent threads in parallel
  • Creating a framework that facilitates automation


Our client is a security software developer headquartered in Canada, looking to introduce their flagship product in Spanish-speaking South America and francophone Canada. Linguistic and localization quality was essential, so we collaborated with the customer on designing a flexible testing framework that accommodated their development cycles. Since third-party solutions were found wanting in features, flexibility and collaboration support, we developed an online portal in-house, tailored specifically to the customer's needs. The investment paid for itself during the first project by saving considerable management and testing time, and the solution has contributed to the success of three other major software localization projects since then.


In 2009, the customer developed a software suite that was intended to be their flagship product, but they had limited experience in application testing and localization. As the company’s investment in the product was significant, maladroit translations, context issues and localization errors had to be ironed out completely.


It was very clear from the beginning that in order to achieve the best results, we had to devise a structure that laid the groundwork for integrating the different aspects of this complex localization project. It was crucial to identify issues and resolve dependencies early on, and to manage translation, localization, validation, in-context review, software development and testing in a single process.


Screenshooting the localized versions of the software was an obvious requirement, but having the source-language shots available also proved very useful for the translators to contextualize non-linear user interface strings. To facilitate the process, we worked together with the client to implement a feature in the software that simulates all usage scenarios without their having to be invoked manually.
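The post describes this trigger feature only at a high level, so the following is a hypothetical sketch of the driving loop around it: walk every dialog in every locale and capture a shot. The dialog IDs, locales and both stubbed functions are invented for the example.

```python
# Hypothetical driver for the "simulate all usage scenarios" feature described
# above. The application is assumed to expose a way to open any dialog by ID;
# both the trigger and the capture are stubbed out here for illustration.

DIALOG_IDS = ["login", "scan_progress", "quarantine", "settings/updates"]
LOCALES = ["en-US", "es-419", "fr-CA"]

def capture(dialog_id, locale):
    # A real setup would drive the app and grab the framebuffer;
    # here we just return the file name the shot would be saved under.
    return f"{locale}/{dialog_id.replace('/', '_')}.png"

def shoot_all(dialog_ids, locales):
    """Walk every dialog in every locale -- no manual clicking required."""
    return [capture(d, loc) for loc in locales for d in dialog_ids]

shots = shoot_all(DIALOG_IDS, LOCALES)
print(len(shots))  # 4 dialogs x 3 locales = 12 shots
```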

Moreover, we built an online portal that ran on our own servers. The interface relied on the Sinatra micro web framework, which we extended with JavaScript to feature automatic saving, various status flags and permissions. This allowed us to assign screenshots to test cases automatically with a simple upload, and make them available for review instantaneously.
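The portal's actual assignment mechanism is not documented in the post; one simple way to match uploads to test cases is a file-naming convention, sketched below. The `TC-042_fr-CA_login.png` scheme is an assumption made up for this example.

```python
import re

# Hypothetical upload-time assignment: encode the test case, the locale and a
# free-form label in the file name, then parse it on upload.

PATTERN = re.compile(r"^(TC-\d+)_([a-zA-Z]{2}-[a-zA-Z0-9]{2,3})_(.+)\.png$")

def assign(filename):
    """Return (test_case, locale, label), or None if the name doesn't match."""
    m = PATTERN.match(filename)
    return m.groups() if m else None

print(assign("TC-042_fr-CA_login.png"))  # ('TC-042', 'fr-CA', 'login')
print(assign("holiday_photo.jpg"))       # None
```

Anything that fails to parse can be flagged for manual triage instead of silently landing in the wrong test case.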

Before launching the translation, we created shots of the original English screens. Embedding links to the images in the files enabled the translators to check them in the environment on the fly. The translation was performed online, which also gave the client the option to validate our translations without any delay, with all the context information available.

While the client was working on compiling a localized build of the software using the signed-off translations, we put together a termbase of the UI strings to ensure consistency between the UI and the documentation. To further parallelize the project and save time, functional testing and documentation translation were carried out alongside, as they did not share dependencies.
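In the spirit of that termbase, here is a minimal consistency check: documentation sentences should reuse the exact target terms fixed for the UI. The terms and sentences are invented; a production check would also handle inflection and casing rules per language.

```python
# Minimal UI-vs-documentation consistency check. Terms are invented examples.

termbase = {
    "quarantine": "cuarentena",
    "scan":       "análisis",
}

def missing_terms(source_sentence, target_sentence, termbase):
    """Return source terms whose fixed translation is absent from the target."""
    src, tgt = source_sentence.lower(), target_sentence.lower()
    return [t for t, tr in termbase.items() if t in src and tr not in tgt]

print(missing_terms(
    "Click Scan to start; infected files go to quarantine.",
    "Haga clic en Análisis para comenzar; los archivos infectados van a cuarentena.",
    termbase,
))  # [] -- both fixed terms are used
```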

Testing the localized application was done in four different environments on virtual machines. Because the process was exactly the same in all cases, we took the opportunity to record a single instance and replay the actions on the other three. Screenshots were uploaded to the portal as the testing went along, which allowed linguists, testers and those who created the screenshots to work simultaneously.
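The record-and-replay idea can be sketched as a recorded action list applied to each environment in turn. The actions and the driver interface below are invented; in practice a GUI automation tool would sit behind `perform`.

```python
# Record once, replay everywhere: the steps were identical across all four
# environments, so one recording drives every VM. Illustrative stub only.

RECORDING = [
    ("open",   "settings/updates"),
    ("click",  "update_now"),
    ("verify", "status_bar"),
]

class LoggingDriver:
    """Stand-in for one virtual machine; just logs what it would do."""
    def __init__(self, name):
        self.name, self.log = name, []

    def perform(self, action, target):
        self.log.append(f"{action}:{target}")

def replay(recording, drivers):
    for drv in drivers:
        for action, target in recording:
            drv.perform(action, target)

vms = [LoggingDriver(f"vm{i}") for i in range(1, 5)]
replay(RECORDING, vms)
print(sum(len(v.log) for v in vms))  # 3 actions x 4 VMs = 12
```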


Following through the localization and testing of the application allowed us to implement a workflow that eliminated dependencies while parallelizing the project as much as possible. It was important for the client to bring the release date as close as possible without compromising quality, and since no work stage blocked any other from progressing, we managed to shave eight working days off the turnaround time.
The testing portal, the automation of screenshooting and the online translation ecosystem accelerated the process, and hey presto! User interface elements were translated in context, validation and feedback cycles were streamlined, and consistency was achieved between software and documentation.

If you’re interested in more, see our website at

See you in two weeks!
