Earlier we elaborated on various aspects of terminology governance and conceptualization as a method of constructing a self-maintaining ecosystem. And now that we have all angles covered, there's no reason to labour the point: let's say thank you and good night.
Fortunately for those of us with an inclination to solve problems, once the terminology processes are kicked off and localization is in full swing, issues of various kinds are bound to crop up.
Our runner-up nominees are:
- Scope creep
- Process bloat
- Tools getting obsolete
With a twist in the story, our winner is the last of the fab four and the most prominent case: updates and change control.
Updates & Change Control
Flexible permission management and a sound architecture can keep regular, run-of-the-mill updates in check. Larger-scale modifications to metadata and terminology content, approval chains, instantaneous commits of delta content and, to some extent, even structural changes should be handled at implementation time so that maintenance is either minimal or automatic. Certain scenarios, however, outgrow the scope of expected maintenance work. Some issues are sporadic enough that the effort of implementing a workaround simply outweighs that of the occasional manual fix. Others are sneaky, and cannot be prepared for when the architecture is first set up. Such challenges usually fall into the following categories:
- When structural changes necessitate assessing the existing metadata and reassigning it to new properties
- When a large chunk of data with a different structural setup needs to be imported
- In cases of integrated arrangements, when certain changes need to be propagated throughout the entire ecosystem
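To make the first two scenarios concrete, here is a minimal sketch of what such a structural migration often boils down to: mapping old metadata fields onto the new properties and parking anything unmapped for manual assessment. All field names and the `FIELD_MAP` table are hypothetical, not taken from any particular tool.

```python
# Hypothetical sketch: migrating term entries from an old metadata
# schema to a new one, flagging fields that need manual review.

FIELD_MAP = {
    "subject": "domain",        # old property -> new property
    "note": "usage_note",
    "status": "approval_status",
}

def migrate_entry(entry: dict) -> tuple[dict, list[str]]:
    """Return the entry under the new schema plus any unmapped fields."""
    migrated, unmapped = {}, []
    for field, value in entry.items():
        if field in FIELD_MAP:
            migrated[FIELD_MAP[field]] = value
        elif field == "term":          # core fields carry over as-is
            migrated[field] = value
        else:
            unmapped.append(field)     # park for manual assessment
    return migrated, unmapped

old = {"term": "backplane", "subject": "hardware", "reviewer": "jd"}
new, todo = migrate_entry(old)
# new  -> {"term": "backplane", "domain": "hardware"}
# todo -> ["reviewer"]
```

The point of the `unmapped` list is exactly the assessment step mentioned above: the migration should surface, not silently drop, metadata that has no home in the new structure.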
Scope creep
We have all been there: even if processes are well defined and agreed upon, external requirements can stretch the scope of work more often than one would like. Mild scope creep can be treated as a change-control challenge; an acute case, however, may threaten to spin processes out of control. A perfect storm of little issues may be an indicator that something has already gone awry.
Process bloat
We should have mentioned this aspect earlier, because many of us have fallen prey to the classic mistake of falling in love with our own ideas and losing focus, resulting in bloated and ineffective processes. It's nice to create a jack-of-all-trades tool and process chain, but there may be only a niche benefit to it. Process bloat can occur early in the development stage, but it is more likely to surface through suboptimal fixes to the problems that scope creep generates, leaving you with undocumented, roundabout and bastardized processes.
Tools getting obsolete
Spring, for some, does not bring the beginning of a new year but the death of the old one (for more such heart-warming insights, Lawrence Durrell may be of help). On a less pessimistic note, the proliferation of tools has brought about very apt solutions to many localization and terminology-related problems, continuously eroding the value of already deployed systems. If investments have already been made, it is not always worth swapping tools, even if a more feature-packed solution looks appealing. There is a silver lining to the dilemma, too: first, invest only in modular solutions, so that processes and tools can be improved piece by piece without pulling the plug; second, ask your solution partner for the features you direly need.
Regardless of the nature of the predicament, the solution usually boils down to how well the process backend holds up against new requirements, and how well processes are documented and followed. We have long been harping on proper interoperability, which we investigated in this post too, as a precursor to a robust framework. In our experience, it is best to have an offline, accessible database that can be easily manipulated and is not constrained by the features of the chosen front-end. This can serve not only as a reliable backup, but also as a safe playground where features and additions can be tested in parallel. When it comes to integrated solutions relying on proprietary tool chains, incorporating terminology, authoring, administration or other layers, a central repository is not only a fall-back option but a necessity.
Last year, we put our best foot forward to revamp our documentation process, and moved from a wiki-type structure to a more hierarchical one. In terminology lifecycle management, a reliable repository can not only prevent the issues outlined above, but also improve transparency. Moreover, our advice is to define criteria not only for evaluating how well the work on processes and structures progresses, but also for identifying threats. Problems that would otherwise be considered a minor hiccup can then raise a flag in time, preventing you from having to flush existing processes and rebuild from the rubble at a later stage.
Terminology management is not mainly about terms and databases. Done well, it also entails enterprise information quality management, and it promotes brand development as well as technical accuracy.
Blast from the past
The last piece on terminology will come next week with a case study, and then we’ll shelve terminology for a time, just before it overstays its welcome!