A man who needs no introduction, Renato Beninatto writes in his January post about the zeitgeist of the industry, as he does every year. As Renato puts it, 2013 might be a year for evolution rather than revolution, a time when things settle down. While we wait in anticipation of the next big thing, let’s put on our mythbusting cap and look into the tropes of today’s localization world. In the upcoming series of posts, we check on the evolutionary state of the most prevalent, already mature concepts. And when the revolution comes, we hope they won’t be the first against the wall either.
After setting off on various tangents about terminology lifecycle management, catering to a truly niche but savvy audience, let’s keep it lean and tight today with a case study about how a unified localization pipeline:
- Reduced validation overhead
- Ensured consistency between cross-dependent translations running in parallel
- Facilitated collaboration
- Resulted in quicker turnaround times and a 15% growth in translation output
- Streamlined maintenance processes and thus reduced individual project costs
- Cleared up roles and responsibilities
What is our theory that we have, that is to say, which is ours, is ours? You may well ask what our theory is. This theory goes as follows, and begins now. Localization testing is thin at one end, much, much thicker in the middle, and then thin again at the far end.
Blue pill, red pill
Software testing is hard manual labour – there are many opportunities for streamlining and automation, but some aspects of the work require hours of human input, and not of the most thrilling nature either. Basically, there are two avenues one can take, depending on the intended quality level.
Power to the people
No matter how we dress up the case, manual testing is usually a whopping drag – unless you turn it into a game. Gamifying the process not only dispels the looming threat of boredom, but also helps distribute the workload. Crowdsourcing screenshooting and dialog testing is not only much cheaper, but clusters of people can also chew through considerably more work in one sitting. A crowdsourced method can include achievements for the snappiest, most productive people, or for those who make the fewest mistakes. Unfortunately, the ratio of fun to quality is constant: if you want the result to look like a spectacle and a half, crowdsourcing may not be your best option.
The other option is to assign the work to a select few professionals whose performance is guaranteed. While this approach deflates the testing experience, it represents the high-water mark of quality today. In the not so distant future, algorithmic analysis and heuristic identification of localization issues by software may provide results just as useful as those of static code analysis.
In the current post, let’s see how espell plays chamber music through a case study.