What is our theory that we have, that is to say, which is ours, is ours? You may well ask what our theory is. This theory goes as follows, and begins now. Localization testing is thin at one end, much, much thicker in the middle, and then thin again at the far end.
Blue pill, red pill
Software testing is hard manual labour – there are many opportunities for streamlining and automation, but some aspects of the work require hours of human input, and not even of the most thrilling nature. Basically, there are two avenues one can take, depending on the intended quality level.
Power to the people
No matter how we frill up the case, manual testing is usually a whopping drag – unless you turn it into a game. Gamifying the process not only dispels the looming threat of boredom, but also helps distribute the workload. Crowdsourcing screenshooting and dialog testing is not only much cheaper, but a cluster of people can also chew through considerably more work in one sitting. A crowdsourced setup can include achievements for the snappiest, most productive contributors, or for those who make the fewest mistakes. Unfortunately, the ratio of fun to quality is constant: if you want the result to look a spectacle and a half, crowdsourcing may not be your best option.
The other option is to assign the work to a select few professionals, whose performance is dependable. While this approach deflates the playfulness of the testing experience, it represents the high-water mark of quality today. In the not so distant future, algorithmic analysis and heuristic identification of localization issues by software may provide results just as useful as those of static code analysis.
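To make the idea of heuristic localization checks concrete, here is a minimal sketch of what such software might look for. The rules, thresholds, and function names below are illustrative assumptions for this post, not an actual tool: flagging strings that appear untranslated, that expanded so much they risk overflowing the UI, or whose format placeholders no longer match the source.

```python
# A minimal sketch of heuristic localization checks.
# The rules and the 1.6x expansion threshold are illustrative
# assumptions, not a real product's logic.
import re

# Matches common placeholder styles such as %s, %d, {0}, {name}
PLACEHOLDER = re.compile(r"%[sd]|\{\w*\}")

def check_entry(source: str, target: str, max_expansion: float = 1.6):
    """Return a list of heuristic warnings for one source/target pair."""
    warnings = []
    # Identical strings often mean the entry was never translated
    if target.strip() == source.strip():
        warnings.append("possibly untranslated")
    # Translations that grow too much tend to get clipped in dialogs
    if len(target) > max_expansion * max(len(source), 1):
        warnings.append("may overflow the UI")
    # Placeholders must survive translation intact
    if sorted(PLACEHOLDER.findall(source)) != sorted(PLACEHOLDER.findall(target)):
        warnings.append("placeholder mismatch")
    return warnings

print(check_entry("Save", "Save"))                  # → ['possibly untranslated']
print(check_entry("Hello, %s!", "Hallo, {name}!"))  # → ['placeholder mismatch']
```

Heuristics like these cannot replace a human eye on an actual screenshot, but they can triage thousands of strings and point the professionals at the suspicious ones.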
In this post, let’s see how espell plays chamber music through a case study.