By Debra Johnson, President
Orlando Central Florida STC Chapter
(I wanted to give a special nod to the original author of an article on the Internet that I used as a reference; however, I lost the original source page, so forgive me. I have copied some, paraphrased, and embellished.)
Technical Communication KPIs
A Key Performance Indicator (KPI) should ultimately save a company money or make a company money. Choosing the right KPIs relies upon a good understanding of what is important to your organization.
Useful content/documentation can save money by reducing support and training costs. It can help with better decision-making, or it can earn the company money by increasing sales (in my case, it would most likely be the first two). When it comes to documentation, though, saving and making money for a company is a bonus rather than the main point. For me, it's usually part of the delivery of the software product, meant to educate and inform users.
I have been asked to come up with KPIs as the Practice Lead for Technical Communication here at Wyndham Vacation Ownership.
So, on its own, content/documentation might not meet the standard criteria for a “good” return on investment that a KPI might normally measure. A more appropriate KPI might be around ensuring that the documentation process is as efficient as it can be, and is not a bottleneck within the development and delivery process. This then becomes part of the larger operational efficiency KPI.
On the other hand, quality isn't necessarily about the classic measurements of consistency, accuracy, and so on, but is instead about whether the content actually meets customers' needs. Fully implementing a feedback system on our content here at WVO would partially enable us to measure that. Soliciting feedback on an ongoing basis during a project isn't "measurable" in the survey sense. It usually means that if someone has concerns, they come to me, as the lead, so I can address them before they impact publishing and the project.
Surveying our internal customers (documentation requestors, PMs, SMEs, and project leads, for example) with a limited number of questions about our process and our performance regarding timeliness, accuracy, and meeting user needs is our normal measure of performance. Doing this once per year per individual means that internal customers aren't getting multiple surveys. It also yields a better response rate.
Here at WVO, we are in the final stages of setting up a TechComm request process that uses the same tracking solution as our enhancement and infrastructure requests. The need for documentation is entered as a request. Requests are then routed through a queue process that notifies the requestor of any status changes (active, in development, in review, published, etc.) and provides a link to view the request. As we add writers, we can assign the queues to specific team members. Eventually, we can create a customer satisfaction survey that auto-generates whenever a request is closed. We'll be able to generate reports to see if something is left "in review" for a lengthy period of time, so we can identify the bottlenecks. We'll be able to track turnaround time and set goals accordingly. We're just at the beginning stages, and I'm sure we'll have to work out some issues in the workflow, routing/notification, etc.
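The kind of turnaround-time and bottleneck report described above could be sketched roughly like this. This is a minimal illustration in Python; the request IDs, statuses, dates, and the 14-day threshold are all invented for the example, not taken from our actual tracking solution:

```python
from datetime import date

# Hypothetical status-change log for documentation requests.
# Each entry: (request_id, status, date the status was entered).
history = [
    ("REQ-1", "active",         date(2016, 3, 1)),
    ("REQ-1", "in development", date(2016, 3, 3)),
    ("REQ-1", "in review",      date(2016, 3, 10)),
    ("REQ-1", "published",      date(2016, 4, 20)),
    ("REQ-2", "active",         date(2016, 3, 5)),
    ("REQ-2", "in review",      date(2016, 3, 8)),   # still open
]

def days_in_status(history, today=date(2016, 4, 25)):
    """Return {request_id: {status: days spent in that status}}."""
    by_request = {}
    for req, status, when in history:
        by_request.setdefault(req, []).append((when, status))
    result = {}
    for req, changes in by_request.items():
        changes.sort()
        durations = {}
        # Each status lasts until the next change (or until today).
        for (start, status), nxt in zip(changes, changes[1:] + [(today, None)]):
            durations[status] = (nxt[0] - start).days
        result[req] = durations
    return result

def stuck_in_review(durations, threshold_days=14):
    """Flag bottlenecks: requests left 'in review' past the threshold."""
    return [req for req, d in durations.items()
            if d.get("in review", 0) > threshold_days]

durations = days_in_status(history)
print(stuck_in_review(durations))  # -> ['REQ-1', 'REQ-2']
```

Once requests flow through the queues, a report like this would make the "lengthy in-review" cases visible instead of anecdotal, which is the whole point of tracking them.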
Some KPIs in TechComm can be deceptive. To pick an obvious example, measuring grammatical and spelling errors per page is comparatively easy and will probably help to reduce that KPI number. But one very fast way to improve this KPI is by changing the page layout, so there’s less text per page. Fewer words and more pages lead to fewer mistakes per page – without correcting a single word. In addition, the measure won’t improve documentation that’s out of date or incomplete or incomprehensible.
What’s right as a KPI for one team is completely wrong for another. Measuring errors on the page is a valuable KPI only if the number of errors on a page relates closely to the purpose of your documentation.
So what would be better KPIs? It depends on your particular TechComm strategies…
If your strategy is to make customer support more cost-effective, you can measure (expensive) support calls against (cheaper, self-service) documentation traffic, while trying to align your documentation topics so they can effectively answer support questions.
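One way to frame that cost-effectiveness comparison is a simple deflection estimate: documentation views that plausibly replaced a support call, valued at the cost difference. Every number below (per-call cost, per-visit cost, deflection rate) is invented for illustration; you would substitute your own support and analytics data:

```python
# Invented illustrative figures -- replace with real support/analytics data.
cost_per_support_call = 15.00   # agent time, tooling, overhead
cost_per_doc_visit = 0.10       # hosting/content cost amortized per view

support_calls = 1200            # calls this quarter
doc_visits = 9000               # help-topic views this quarter
deflection_rate = 0.20          # est. share of visits that avoided a call

avoided_calls = doc_visits * deflection_rate
savings = avoided_calls * (cost_per_support_call - cost_per_doc_visit)
print(f"Estimated avoided calls: {avoided_calls:.0f}")
print(f"Estimated quarterly savings: ${savings:,.2f}")
```

The deflection rate is the soft spot in any model like this, which is why aligning documentation topics with actual support questions matters: the closer the match, the more defensible that estimate becomes.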
If your strategy is to improve customer retention, you can measure users’ search terms for documentation, plus the number of clicks and visit time per page, while trying to optimize content for findability and relevance to those search terms. You would have to more fully implement Google Analytics or another analytics program.
If your strategy is to improve content reuse and topic maintenance, you could measure redundant content to drive down the number of topics that have mixed topic-type content. However, as long as you still have abundant conceptual information in task topics, you will probably have redundant content. If you move window and field help reference information into task or concept topics, you will most definitely have redundant content.
Some KPIs I want to consider would be:
- TechComm Operational Effectiveness:
  - Cycle time and meeting deadlines (dependent on whether the documentation owner chooses to sacrifice quality for speed to market)
  - Feedback (routine surveys after projects)
- Accuracy and effectiveness, gathered through:
  - User feedback submitted to the Technical Publication mailbox from internal partners
  - Revisions requested/required outside of a release cycle
  - Changes in customer support contact rate (when content can be correlated with specific topics)
What do you think???
See you at Washlines “XX”