Ode to Terminology Lifecycle Management – A Case Study

After zooming through terminology lifecycle management in the past few weeks, let’s continue with off-the-wall Beatles references and a case study.

Norwegian Wood

I’ll Cry Instead

Since 2009, we have been localizing content into 11 languages for a Norwegian company providing high-end video-conferencing software and hardware. Terminology management has been a focus since the beginning, and by mid-2010 the size of the corpus and the variety of content types raised the need for consolidated terminology databases as well as maintenance and third-party review processes. By the end of the year, the unification of the pipeline had improved turnaround times significantly and resulted in more accessible and transparent maintenance procedures, while also reducing validation overhead and overall project costs.

Fixing a Hole

In the first year of our collaboration, the proliferation of methods and content types spawned diverse maintenance and terminology development processes. Before the consolidation, we maintained context-aware translation memories and created specific terminology sets for each product. As the variety of products grew, cross-product consistency and leveraging became more and more important for the client. We therefore set up an integrated localization pipeline to streamline production from terminology compilation through translation and software testing to third-party validation. An important consideration was to achieve the transition in the background, during everyday production, without disrupting the processes in place at the time or inconveniencing the client. Since it was common for multiple projects to run in parallel with shared dependencies, the new workflow also had to take the synchronization of resource pools into account.

Revolution 9

Translation Memories

Restructuring terminology required a clean and rich bilingual corpus with meta-data we could capitalize on. Thus the first step in the process was to collect meta-data, identify cross-product common elements and contextualize homonyms.

In this phase a large repository was compiled with a unified structure and an emphasis on meta-data. All entries shared the same set of fields, from project type, product line, version and descriptors through approval status to contextual information. Moreover, the pre-translation engineering process was overhauled so that the generated IDs carry more information and are always unique across all content, regardless of whether the source format is gettext, .ts or XLIFF.
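As an illustration, such format-agnostic ID generation could work roughly along these lines; the field names, the hashing recipe and the function below are assumptions made for this sketch, not the actual engineering we deployed.

    import hashlib

    def make_entry_id(product_line: str, version: str, source_format: str,
                      resource_path: str, source_key: str) -> str:
        """Derive a deterministic ID from a source string's meta-data.

        Hypothetical scheme: because every identifying field is part of the
        hash, the same string key coming from gettext, .ts or XLIFF files of
        different products will not produce the same ID.
        """
        raw = "|".join([product_line, version, source_format, resource_path, source_key])
        return hashlib.sha1(raw.encode("utf-8")).hexdigest()[:12]

    # Example: the same string key in two products yields two distinct IDs.
    print(make_entry_id("VideoOS", "4.2", "gettext", "ui/calls.po", "Start meeting"))
    print(make_entry_id("RoomKit", "1.0", "ts", "ui/calls.ts", "Start meeting"))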

Terminology Cycles

As with the TMs, terminology databases were initially designed with each product line in mind. As the product portfolio and product generations became more and more diverse, the need arose for a flexible but self-consistent workflow, which would not have been possible to achieve with fragmented terminology concepts.

By revamping the structure and management of terminology, we aimed at:

  • Consolidating all terminology sets and generating meta-data across products;
  • Identifying terminology overlaps and differentiating similar concepts by usage and product;
  • Unifying management and maintenance processes;
  • Integrating terminology review and approval phases with version control, and streamlining third-party review;
  • Distinguishing generic domain terminology from product-specific terms (see the sketch after this list).
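To make these goals more tangible, here is a rough sketch of the kind of consolidated entry model they imply; every field name and type below is an illustrative assumption, not the schema we actually used.

    from dataclasses import dataclass, field
    from enum import Enum

    class ApprovalStatus(Enum):
        DRAFT = "draft"
        IN_REVIEW = "in_review"
        APPROVED = "approved"
        DEPRECATED = "deprecated"

    @dataclass
    class TermEntry:
        """One consolidated terminology entry shared across products (illustrative)."""
        source_term: str
        target_terms: dict            # language code -> approved translation
        domain: str                   # generic domain vs. a product-specific one
        product_lines: list = field(default_factory=list)  # empty list marks a generic term
        context: str = ""             # usage note disambiguating homonyms
        status: ApprovalStatus = ApprovalStatus.DRAFT
        revision: int = 1             # incremented with every reviewed change

A single repository of such entries makes it straightforward to separate generic domain terminology (an empty product_lines field in this sketch) from product-specific terms, which is exactly the distinction the last bullet calls for.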

As with the TM work, keeping this undertaking behind the curtain allowed us to operate business as usual on live localization projects. Unlike other aspects of the localization pipeline, which feature definable pause points and dependencies, such an undertaking is best kept in a single pair of hands in order to reduce the risk of information loss, miscommunication and suboptimal implementation. Therefore, instead of delegating each role to a different party, one localization lead was assigned to manage the language teams, terminology conceptualization, the assessment of requirements and the setup of the architecture.

In 2010, Kilgray’s qTerm was in its infancy and sorely lacking in features. Still, we needed a frontend as well as an authoring and management tool. Since the other contenders were either inferior in flexibility, usability and interoperability, or did not sit well with our ecosystem built around memoQ, we resorted to internal development. Although it could have been considered a setback and a bump in the process, the tool has proven its usefulness many times over, which you can read more about in this post.

Review and Validation Cycles

In an ideal world, multi-language validation is independent, in-context, interactive and interconnected. Before the overhaul, validation cycles were also afflicted by fragmentation. We strove to reimagine the complete production chain, and in terms of validation the bottleneck was resolved by agreeing with the client that we would take over the responsibility of managing the reviews, and by accommodating validators with a frontend that provided not only contextual information but also on-the-fly updates and version control.
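Purely as an illustration of the idea, a validator-facing record in such a frontend might look like the sketch below; its statuses, fields and helper function are assumptions made for this post, not the tool's actual data model.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ReviewItem:
        """A single string presented to a third-party validator (illustrative)."""
        entry_id: str
        source: str
        translation: str
        context: str              # UI location, screenshot reference, surrounding text
        revision: int = 1
        status: str = "pending"   # pending -> approved | changes_requested

    def apply_review(item: ReviewItem, approved: bool,
                     new_translation: Optional[str] = None) -> ReviewItem:
        """Record a validator's decision and bump the revision on any accepted edit."""
        if approved:
            item.status = "approved"
        else:
            item.status = "changes_requested"
            if new_translation:
                item.translation = new_translation
                item.revision += 1   # every change stays traceable in version control
        return item

Keeping the revision counter on the item itself is what allows translations to be updated on the fly while every reviewed change remains auditable.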

She Said She Said

When it comes to processes, one can usually take one of two avenues: either create a very strict, robust but inflexible space for collaboration, or construct a lightweight, decentralized shell built on the premise of trust. The endeavour of rebuilding the process from the ground up was partly fuelled by our being partial to the latter. Designed flexibility can be a tangible asset: once the framework and the new project pipeline were in place, there was no longer any need to jump through hoops to solve complex problems. The back-and-forth over version differences, redundancies and synchronization faded away, and validation time shrank, which ultimately resulted in a shorter time to market for the client. Moreover, more reliable cross-product leveraging of existing translations and a common repository for terminology directly reduced the overall localization cost.

Cold Turkey

We will go cold turkey next week and leave this topic behind in our upcoming posts. Stay tuned for something just a bit different!

 
