How Word Error Rate (WER) Works (Calculations and Improvements)

See how Word Error Rate is calculated and how you can improve the accuracy of your speech-to-text tools.

By Hammad Syed in TTS

May 13, 2024 11 min read



Word Error Rate (WER) is the benchmark that tells us how well speech recognition technologies translate spoken words into written text. 

This critical metric helps fine-tune the systems that power everything from smartphone voice commands to customer service chatbots. 

I’ll walk you through how WER is calculated, why it matters, and the challenges it presents in the field of automatic speech recognition.

What is Word Error Rate?

Word Error Rate (WER) serves as a performance scorecard for any automatic speech recognition (ASR) system, something I’ve found to be incredibly valuable in my experience with these technologies. 

This metric is essential because it measures how accurately a system can convert spoken language into written text using speech-to-text technology. 

In practical terms, WER evaluates how well an ASR system, like those developed by Microsoft or other major providers, performs by comparing the text it produces to the actual words spoken. 

It specifically looks at three types of errors: substitutions, deletions, and insertions. 

A substitution happens when the ASR system swaps one word for another; a deletion occurs when it leaves out a word; and an insertion means adding a word that wasn’t in the original spoken content. 

The primary goal in refining these systems is to achieve a lower WER. A lower WER means the ASR makes fewer mistakes and more faithfully captures what was actually said.

This accuracy helps maintain the integrity and intent of the original message.

How WER is Calculated

Calculating Word Error Rate (WER) is about more than just spotting mistakes—it’s about measuring them in a way that helps us improve how speech recognition systems work.

Here’s how it’s done: you start by identifying and counting the errors the speech recognition system makes. These errors are classified into three types: substitutions, deletions, and insertions. 

In substitutions, the system replaces a correct word with an incorrect one. Deletions occur when the system leaves out a word that should have been included, and insertions happen when the system adds an extra word that was not spoken.

After identifying these errors, you add them all together. This total is then divided by the total number of words in the reference transcript, that is, the words that were actually spoken. 

For example, if I say “I love sunny days” into a system, and it transcribes it as “I love money days,” we have a substitution error where “sunny” is replaced by “money.” If we count this one mistake against the four words I spoke, we end up with a WER of 25%. 
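The arithmetic above can be written out in a few lines of Python. This is a minimal sketch (the function name is my own), with the error counts supplied by hand:

```python
def wer(substitutions: int, deletions: int, insertions: int, reference_words: int) -> float:
    """Word Error Rate: total errors divided by the number of words in the reference."""
    return (substitutions + deletions + insertions) / reference_words

# "I love sunny days" -> "I love money days":
# one substitution ("sunny" -> "money"), four reference words
print(wer(substitutions=1, deletions=0, insertions=0, reference_words=4))  # 0.25, i.e. 25%
```

Note that because insertions count as errors but do not add to the denominator, WER can exceed 100% when a system hallucinates many extra words.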

This measurement is especially crucial in real-time applications like live transcriptions or systems that respond to voice commands, where it’s important to quickly identify and fix errors. 

Despite its simple appearance, the WER formula is a powerful tool that helps improve speech recognition technologies, making them better at understanding and processing our speech.

Factors Influencing WER

When I think about what affects the accuracy of a speech recognition system, several factors come to mind. Each plays a crucial role in how well these systems perform, and understanding them can help us see why sometimes they get things just a bit off.

Background Noise

Background noise is a major factor. For accurate ASR (Automatic Speech Recognition) transcripts, high-quality audio is essential, but this ideal is often hard to achieve in everyday situations. 

If you’ve ever tried using voice commands on your phone on a busy street, you’ll know the frustration. The system has a hard time picking out your voice from the noise around you—like cars honking, people chatting, and the wind howling. 

These sounds cause more mistakes and increase the Word Error Rate (WER). Although manufacturers are constantly improving their ASR models to filter out this noise and improve audio clarity, it remains a significant challenge.


Accents and Dialects

The variety of human speech is wonderful but presents a real challenge for ASR systems. In my experience working with language models, one of the toughest tasks has been teaching them to understand different accents. 

Each accent can make the same word sound almost completely different. ASR systems that haven’t been trained with diverse datasets may find it hard to handle accents they’re unfamiliar with, which often results in a higher WER. 

Training these systems with a more diverse range of voices can help them adapt better.

Speech Speed

Fast talkers can confuse even the most advanced ASR systems. When people speak quickly, their words tend to blend together, making it difficult for the system to tell where one word ends and another begins. 

This can cause the system to miss words (deletions) or add words that weren’t spoken (insertions). It also complicates the alignment step of the WER calculation, where the minimum number of edits needed to match the reference transcript, known as the Levenshtein distance, is determined. 

Generally, slower speech leads to more accurate transcriptions, but there’s a push to improve ASR technology so it can handle a range of speech speeds more effectively.
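The Levenshtein distance mentioned above can be computed with a short dynamic-programming routine. This is a minimal sketch (the function name and return format are my own) that aligns the two transcripts and classifies each edit:

```python
def wer_counts(reference: str, hypothesis: str):
    """Count substitutions, deletions, and insertions via Levenshtein alignment.

    Returns (S, D, I, WER). A deletion is a reference word the system dropped;
    an insertion is a word it added that was never spoken.
    """
    ref, hyp = reference.split(), hypothesis.split()
    if not ref:
        raise ValueError("reference must contain at least one word")
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + cost,  # match or substitution
                           dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1)         # insertion
    # Walk back through the table to classify each edit
    s = d = ins = 0
    i, j = len(ref), len(hyp)
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (0 if ref[i - 1] == hyp[j - 1] else 1):
            if ref[i - 1] != hyp[j - 1]:
                s += 1
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            d += 1
            i -= 1
        else:
            ins += 1
            j -= 1
    return s, d, ins, (s + d + ins) / len(ref)

print(wer_counts("i love sunny days", "i love money days"))  # (1, 0, 0, 0.25)
```

The same routine surfaces the deletion and insertion errors that fast, blended speech tends to produce, since those show up directly in the alignment.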

Technical Jargon and Proper Nouns

Specialized vocabulary and proper nouns also pose unique challenges. For instance, in medical settings, it’s crucial that ASR transcripts accurately capture the names of medications or medical conditions. 

Misunderstanding these terms can dangerously increase the WER. Continually updating and training ASR systems with specialized vocabularies is vital to reduce errors in these critical areas.

WER in Different Applications

Automatic Speech Recognition (ASR) technology is used in many areas, each requiring different levels of accuracy depending on how critical errors can be.


Healthcare

In healthcare, high accuracy is crucial because mistakes can have serious consequences. Medical professionals use ASR to write down everything from conversations with patients to notes during surgery. 

If these transcriptions aren’t accurate, it could affect patient care and how medical records are kept. 

For example, if a symptom or treatment plan is wrongly recorded because of poor audio quality or a mistake in the ASR system, the wrong medical care might be given. That’s why there’s a constant effort to make ASR in healthcare as perfect as possible. 

This effort includes making improvements in understanding natural language and translating medical terms accurately.

Customer Service

In customer service, ASR systems help handle common questions, which reduces the amount of work for human staff. The need for precision here isn’t as high as in healthcare, but it’s still important for keeping customers happy. 

If an ASR system often gets things wrong because of high WER, it can frustrate customers and lead to a bad service experience. However, because the mistakes here are less serious, there’s a bit more room for error.

E-commerce and Retail

In fields like e-commerce and retail, companies like Amazon use ASR to help customers shop and manage smart home devices. 

If the ASR system frequently makes mistakes due to high WER and doesn’t understand language well, it could result in wrong orders or unmet customer needs. 

This affects how efficient the company appears and how satisfied customers feel about their shopping experience.

Improving WER in Speech Recognition Systems

Developers continually strive to lower the Word Error Rate (WER) in automatic speech recognition (ASR) systems, focusing on making the underlying models more sophisticated. 

Advancements in machine learning, particularly through the development of robust neural networks, have significantly improved the transcription accuracy of these systems. 

For example, by incorporating large and diverse datasets into their training processes, developers enable ASR systems to better understand a vast array of speech patterns and nuances. 

This variety in the training data also supports normalization of the speech data, making the system more robust in real-world applications.
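Normalization matters on the scoring side too: reference and hypothesis transcripts are usually cleaned before WER is computed, so that formatting differences don’t count as word errors. As a rough illustration (this is my own minimal sketch, not any specific vendor’s pipeline):

```python
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace before scoring.

    Without this step, 'Hello, world!' vs 'hello world' would count as two
    substitutions even though the spoken words match.
    """
    text = text.lower()
    text = re.sub(r"[^\w\s']", "", text)  # drop punctuation, keep apostrophes
    return " ".join(text.split())         # collapse runs of whitespace

print(normalize("Hello,   World!"))  # hello world
```

Choices made here (keeping apostrophes, for instance, so contractions like “it’s” survive) directly change the reported WER, which is one reason published numbers from different systems are not always comparable.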

Moreover, the integration of APIs that connect these ASR systems to various applications allows for the gathering of extensive user feedback in real time. 

This feedback is crucial for transcriptionists and developers to refine algorithms continuously, focusing on specifics such as spelling and correct interpretation at the word level. 

Such meticulous training and constant updating help in significantly lowering the WER, pushing the boundaries of what speech-to-text technologies can achieve.

Challenges in Using WER as an Evaluation Metric

Contextual Misinterpretations

While WER is a widely used metric to assess the accuracy of ASR systems, it often fails to capture the context of the conversation. 

This limitation becomes apparent in situations where homophones are involved: words that sound the same but have different meanings and spellings, such as “red” and “read” (the past tense). 

An ASR system might transcribe the word correctly according to the audio input but can fail in understanding the context in which it was used. 

This lack of natural language understanding can lead to discrepancies that are not necessarily reflected in the WER, posing significant challenges in real-time applications where contextual accuracy is crucial.

Handling Specialized Vocabulary

Another significant challenge is the transcription of proper nouns and specialized terminologies, particularly in fields like medicine or law, where accuracy is paramount. 

ASR systems, especially those not trained on specific jargon or names, tend to struggle with these terms. 

This results in higher WERs because the reference transcript used to measure accuracy may contain vocabulary very different from what the ASR system was trained on.

Normalization and Non-Standard Speech

Normalization of data in speech recognition also poses a considerable challenge. ASR systems are typically trained on datasets that are meant to represent ‘standard’ speech, which does not always take into account regional accents, dialects, or colloquialisms. 

These variations can dramatically affect the system’s ability to accurately transcribe spoken words, resulting in a higher WER. 

Moreover, the reliance on a normalized form of speech can alienate users with accents and speech patterns that deviate from the so-called standard, limiting the usability of ASR technologies in global applications.

Experience Enhanced AI Voice Generation With PlayAI

If you’re interested in transforming text into incredibly realistic speech, PlayAI is a tool you’ll want to explore. 

This state-of-the-art text-to-speech (TTS) platform leverages advanced AI to generate lifelike audio from written content. 

Whether you’re developing e-learning materials, podcasts, or any other project that could benefit from high-quality voiceovers, PlayAI offers a seamless solution. The voices are not only realistic but also customizable to fit various needs and contexts. 

Curious to see how it can elevate your projects? Try PlayAI today and experience the future of AI voice generation for yourself.

What is the significance of ‘total number of words’ in calculating WER for an automatic speech recognition system?

The ‘total number of words’ in the reference transcript is crucial because it serves as the denominator in the WER calculation. This figure sets the scale for measuring the number of errors an automatic speech recognition system makes. 

By comparing errors (substitutions, insertions, deletions) to the total words spoken, developers can gauge the system’s accuracy, helping them understand and improve the system’s effectiveness.

How do ‘number of substitutions’ affect the ASR accuracy?

The ‘number of substitutions’—where the automatic speech recognition system incorrectly replaces a correct word with another—directly impacts ASR accuracy. 

A high number of substitutions can indicate issues with the system’s understanding of similar-sounding words or its vocabulary limitations. Reducing these errors is essential for enhancing the overall effectiveness and user satisfaction of the ASR system.

Can WER be used to evaluate audio files in machine translation and natural language processing applications?

While WER is primarily a common metric for speech recognition, its application extends to evaluating how well audio files are transcribed before being used in machine translation and natural language processing tasks. 

Accurate transcription is foundational for effective translation and processing, making WER a valuable tool for assessing preliminary steps in broader linguistic technology applications.

How does the quality of an audio file influence the error rate in an automatic speech recognition system?

The quality of an audio file significantly influences the error rate in speech recognition systems. Poor audio quality, characterized by noise or low clarity, can lead to a higher number of errors, reducing ASR accuracy. 

This is because the system struggles to distinguish speech from noise or to correctly interpret words from poorly recorded audio. Enhancing audio quality can thus directly improve the performance of ASR systems.

Hammad Syed

Hammad Syed holds a Bachelor of Engineering - BE, Electrical, Electronics and Communications and is one of the leading voices in the AI voice revolution. He is the co-founder and CEO of PlayHT, now known as PlayAI.
