Elsevier: Saving $500,000 with TNQTech’s Language Central

Shanthi Krishnamoorthy

Language Central is TNQTech’s AI-assisted product that assesses research manuscripts for language quality. This case study details how the global STM publisher Elsevier has used Language Central in their workflow to improve quality, while potentially saving over $500,000 annually.

For STM publishers, exceptional language quality is non-negotiable. The language editing process therefore needs to be robust and continually improving, while also delivering efficiency gains. Language Central assigns a language score and an editing level to each manuscript, so publishers can route it to the most appropriate editing team or supplier.
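The routing idea can be sketched as a simple mapping from score to editing level. This is an illustrative sketch only: Language Central's actual scoring scale, thresholds, and level names are not public, so the values below are assumptions.

```python
# Hypothetical routing sketch. The 0-100 scale, the thresholds, and the
# level names ("light"/"medium"/"high") are illustrative assumptions,
# not Language Central's actual configuration.

def route_manuscript(language_score: float) -> str:
    """Map a manuscript's language score (assumed 0-100) to an editing level."""
    if language_score >= 80:
        return "light"   # minimal intervention needed
    elif language_score >= 50:
        return "medium"  # standard copy editing
    return "high"        # intensive editing, e.g. a native English-speaking editor

print(route_manuscript(92))  # light
print(route_manuscript(45))  # high
```

A publisher could then assign manuscripts at the "high" level to its most experienced editors while fast-tracking the rest.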

"Elsevier was looking for a solution to improve the quality of copy editing by matching the manuscript to a copy editor with the best skills. Language Central was piloted for 6 months and showed tremendous potential where its ML models accurately predicted the quality of language and therefore the level of intervention needed. This has not only improved the quality of our products but has also led to greater efficiency in our production process."

Built on a convolutional neural network, Language Central combines deep learning models with linguistically informed rule-based systems. It evaluates content at the sentence level based on sentence structure, parts of speech, text sequences, spelling, and word-similarity patterns, then aggregates the results to the level of the journal article or book chapter. Language Central has been three years in the making, built on our 25 years of copy editing experience and in partnership with Enagoʼs Trinka, which forms part of Language Centralʼs core.
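The aggregation step described above can be sketched as follows. The sentence scorer here is a stand-in heuristic, not Language Central's actual model; the 0-100 scale and the mean-based roll-up are assumptions made for illustration.

```python
# Hypothetical sketch of sentence-level scoring rolled up to an article-level
# score. score_sentence is a placeholder heuristic, NOT Language Central's model.

def score_sentence(sentence: str) -> float:
    """Assign a rough 0-100 quality score to one sentence (placeholder logic)."""
    words = sentence.split()
    if not words:
        return 0.0
    # Toy signals: lexical variety and a mild penalty for very long sentences.
    unique_ratio = len(set(words)) / len(words)
    length_penalty = min(len(words) / 40, 1.0)
    return round(100 * unique_ratio * (1 - 0.3 * length_penalty), 1)

def score_article(sentences: list[str]) -> float:
    """Aggregate sentence-level scores into one article-level score (mean)."""
    scores = [score_sentence(s) for s in sentences]
    return round(sum(scores) / len(scores), 1) if scores else 0.0
```

In practice the per-sentence signals would come from the neural and rule-based components, but the roll-up from sentences to article is the same shape.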

Business Problem

Before Language Central, Elsevier would assign a single copy editing level to all articles in a journal, even though language quality can vary greatly from article to article. Assessing every article in detail, however, takes significant manual effort.


TNQTech’s product team worked with Elsevier to identify areas where we could largely automate language quality assessment, speed up turnaround time, and reduce costs. The ultimate goal was to improve the overall quality of these assessments, while preserving author satisfaction. Since November 2021, across 10,000 articles and 100,000 pages (ramping up to 20,000 articles in 2023), Language Central has helped:

  • Move from a journal-level to an article-level workflow
  • Assign native English-speaking copy editors to articles needing “high” level editing
  • Improve turnaround time by breaking the process down by article
  • Reduce costs by eliminating redundant copy editing efforts

Key Changes

Before Language Central                                    | Since Language Central
Manual assessment of manuscripts for language quality      | ML-based language profiler assesses language quality
Copy editing service levels defined for the entire journal | Copy editing service levels defined per article
Fixed turnaround time and cost                             | Faster turnaround time and reduced cost

Language Central is part of TNQTech’s suite of AI-enabled technology tools for scholarly publishing.