Comments (3)
You're right - permutations of the input list do give different results due to the concatenation of all elements of the list into a single string over which the edit distance is computed.
I initially designed this library for evaluating meeting transcriptions. That use case has a baked-in assumption that the sentences are ordered and share context. I ran into the problem that the reference transcription of the meeting (a list of sentences) and the hypothesis of a speech-to-text engine (also a list of sentences) did not line up nicely, e.g. the lists had different lengths. For that use case, however, the implementation as-is is desired, since you would never permute the lists.
I agree that when you're evaluating, say, LibriSpeech, you would want invariance when permuting the (gt, hypothesis) tuples.
Do you have any suggestions for improving the current API?
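To make the order-dependence concrete, here is a minimal, self-contained sketch (using a hand-rolled word-level Levenshtein distance instead of jiwer itself; the sentences and function name are illustrative). Concatenating before scoring lets the alignment cross sentence boundaries, so reordering the (truth, hypothesis) pairs changes the result:

```python
def word_edit_distance(ref: str, hyp: str) -> int:
    """Word-level Levenshtein distance between two sentences."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))
    for i, rw in enumerate(r, start=1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(h, start=1):
            cur = min(
                prev + (rw != hw),  # substitution (or match)
                d[j] + 1,           # deletion
                d[j - 1] + 1,       # insertion
            )
            prev, d[j] = d[j], cur
    return d[-1]

truth = ["a b", "c"]
hypothesis = ["a", "b c"]

# Concatenate-then-score: "a b c" vs "a b c" -> 0 errors.
print(word_edit_distance(" ".join(truth), " ".join(hypothesis)))

# Same pairs in reversed order: "c a b" vs "b c a" -> 2 errors.
print(word_edit_distance(" ".join(reversed(truth)),
                         " ".join(reversed(hypothesis))))
```

Scoring each pair independently would give the same total regardless of order, which is the invariance discussed here.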
from jiwer.
Thanks for your reply. I understand the original purpose of the library, and that this library is not suitable for evaluating WER over a set of utterances like librispeech.
I am therefore surprised that several implementations (here, here, here, ...) use the wer function of this library for their evaluation.
It might be important to make it clear that the wer function of this library is not suitable for that purpose.
When evaluating on a set of utterances like librispeech, the edit distance should be computed independently for each utterance. For a test set of short utterances, the difference is significant, and comparisons against approaches that use the correct per-utterance implementation become unfair.
The following function computes the edit distance independently for each utterance:
from typing import List, Union

import jiwer.transforms as tr
# compute_measures is part of jiwer's public API in 2.x; _default_transform
# is internal, so this import assumes jiwer 2.x's module layout.
from jiwer import compute_measures
from jiwer.measures import _default_transform


def wer(
    truth: List[str],
    hypothesis: List[str],
    truth_transform: Union[tr.Compose, tr.AbstractTransform] = _default_transform,
    hypothesis_transform: Union[tr.Compose, tr.AbstractTransform] = _default_transform,
    **kwargs
) -> float:
    """
    Calculate word error rate (WER) between a set of ground-truth sentences and
    a set of hypothesis sentences, computing the edit distance independently
    for each (ground-truth, hypothesis) pair.

    :return: WER as a floating point number
    """
    # raise an error if the number of ground-truth sentences and the number of
    # hypothesis sentences differ.
    if len(truth) != len(hypothesis):
        raise ValueError(
            "the number of ground-truth sentences and the number of "
            "hypothesis sentences differ"
        )

    hits, substitutions, deletions, insertions = 0, 0, 0, 0
    for truth_sample, hypothesis_sample in zip(truth, hypothesis):
        m = compute_measures(
            truth_sample,
            hypothesis_sample,
            truth_transform,
            hypothesis_transform,
            **kwargs,
        )
        hits += m["hits"]
        substitutions += m["substitutions"]
        deletions += m["deletions"]
        insertions += m["insertions"]

    errors = substitutions + deletions + insertions
    total = substitutions + deletions + hits
    return errors / total
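Note that the function above pools the error counts across utterances (a micro-average) rather than averaging per-utterance WERs. The following sketch, with made-up per-utterance counts shaped like the dicts jiwer's compute_measures returns, shows that the two can differ:

```python
# Hypothetical per-utterance counts (the numbers are illustrative):
measures = [
    {"hits": 4, "substitutions": 1, "deletions": 0, "insertions": 0},  # ref length 5
    {"hits": 1, "substitutions": 0, "deletions": 1, "insertions": 1},  # ref length 2
]

# Micro-average: pool counts, then divide (what the function above does).
errors = sum(m["substitutions"] + m["deletions"] + m["insertions"] for m in measures)
total = sum(m["substitutions"] + m["deletions"] + m["hits"] for m in measures)
micro_wer = errors / total  # 3 / 7

# Macro-average: mean of per-utterance WERs.
per_utt = [
    (m["substitutions"] + m["deletions"] + m["insertions"])
    / (m["substitutions"] + m["deletions"] + m["hits"])
    for m in measures
]
macro_wer = sum(per_utt) / len(per_utt)  # (0.2 + 1.0) / 2 = 0.6
```

Micro-averaging weights each utterance by its reference length, which is the convention commonly used when reporting WER on test sets such as librispeech.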
from jiwer.
This issue should be fixed from version 2.3.0 onwards.
from jiwer.