A unique feature of providing a service is the disparity between the implementation and scope of the offering and its perceived or expected suitability. Translation and localization is no different – being a blend of management, linguistics, software engineering, research and of course translation, its inner workings are not always trivial to understand. Customers may well ask: What does a service encompass and why? How does cost relate to value? What pieces of information may help create the best-fitting output? Why not rely on automation in every step of the way? What is the breakdown of the production timeline? Which management path to take, or file format to use? And why are these questions important at all? On the other side of the coin, customer priorities and goals are not always clear to the provider.
What is our theory that we have, that is to say, which is ours, is ours? You may well ask what our theory is. This theory goes as follows, and begins now. Localization testing is thin at one end, much, much thicker in the middle, and then thin again at the far end.
Blue pill, red pill
Software testing is hard manual labour – there are many opportunities for streamlining and automation, but some aspects of the work require hours of human input, and not even of the most thrilling nature. Basically, there are two avenues one can take, depending on the intended quality level.
Power to the people
No matter how we frill up the case, manual testing is usually a whopping drag – unless you turn it into a game. Gamifying the process not only dispels the looming threat of boredom, but also helps distribute the workload. Crowdsourcing screenshotting and dialog testing is not only much cheaper, but a cluster of people can also chew through considerably more work in one sitting. A crowdsourced setup can include achievements for the snappiest, most productive testers, or for those who make the fewest mistakes. Unfortunately, fun and quality tend to trade off: if you want the result to be a spectacle and a half, crowdsourcing may not be your best option.
The other option is to assign the work to a select few professionals whose performance is guaranteed. While this approach deflates the fun of the testing experience, it represents the high-water mark of quality today. In the not-so-distant future, algorithmic analysis and heuristic identification of localization issues by software may provide results just as useful as static code analysis does for programming errors.
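To make the idea of heuristic issue detection concrete, here is a minimal sketch of what such a check might look like. The function name, the placeholder pattern and the thresholds are illustrative assumptions, not espell's actual tooling – real localization QA tools cover far more cases.

```python
import re

# Common placeholder formats: printf-style (%s, %d, %i, %f) and brace-style ({name}).
PLACEHOLDER = re.compile(r"%[sdif]|\{[^{}]*\}")

def check_string(source: str, translation: str) -> list[str]:
    """Flag likely localization issues in a single source/translation pair.

    All heuristics and thresholds below are illustrative assumptions.
    """
    issues = []
    # Placeholders must survive translation, although word order may shuffle them.
    if sorted(PLACEHOLDER.findall(source)) != sorted(PLACEHOLDER.findall(translation)):
        issues.append("placeholder mismatch")
    # A translation several times longer than the source risks truncation in the UI.
    if len(translation) > 3 * max(len(source), 1):
        issues.append("suspicious expansion (possible truncation in UI)")
    # An identical non-trivial string may simply be untranslated.
    if source == translation and len(source) > 12:
        issues.append("possibly untranslated")
    return issues
```

A dropped placeholder would be caught like this: `check_string("Hello, %s!", "Szia!")` reports a placeholder mismatch, while a clean pair such as `check_string("Save", "Mentés")` comes back empty.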
After zooming through terminology lifecycle management in the past few weeks, let’s continue with off-the-wall Beatles references and see how espell plays chamber music through a case study.