Levenshtein distance (edit distance) for words: the minimum number of edits (insertions, deletions or substitutions) required to change the hypothesis sentence into the reference.
Range: minimum 0 (when ref = hyp); no upper bound, as ASR can insert an arbitrary number of words
$ WER = \frac{S+D+I}{N} = \frac{S+D+I}{S+D+C} $
S: number of substitutions, D: number of deletions, I: number of insertions, C: number of correct words,
N: number of words in the reference ($N=S+D+C$)
WAcc (Word Accuracy) or Word Recognition Rate (WRR): $1 - WER$
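The definitions above can be sketched in Python: a standard Levenshtein dynamic program over words, followed by a backtrace to recover the S, D, I counts (a minimal sketch; whitespace tokenization and the function name are my assumptions).

```python
def wer(reference, hypothesis):
    """Compute WER and the (S, D, I) counts via Levenshtein alignment.

    Assumes whitespace tokenization and a non-empty reference.
    """
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn hyp[:j] into ref[:i]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        d[i][0] = i
    for j in range(1, len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])  # match or substitution
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # vs deletion, insertion
    # Backtrace to count substitutions, deletions, insertions
    S = D = I = 0
    i, j = len(ref), len(hyp)
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            S += ref[i - 1] != hyp[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            D += 1
            i -= 1
        else:
            I += 1
            j -= 1
    return (S + D + I) / len(ref), (S, D, I)
```

For example, `wer("the cat sat on the mat", "the cat sit on mat")` gives one substitution ("sat" → "sit") and one deletion ("the"), so WER = 2/6. WAcc is then just `1 - wer(...)[0]`.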
Limitation: provides no detail on the nature of the errors
Different errors are treated equally, even though they might influence the outcome differently (some being more disruptive, or harder to correct, than others).
Looking at the formula, there is no distinction between a substitution error and a deletion followed by an insertion.
Possible solution proposed by Hunt (1990):
Use of a weighted measure
$ WER = \frac{S+0.5D+0.5I}{N} $
Problem:
The metric is used to compare systems, so it is unclear whether Hunt's formula could be used to assess the performance of a single system
It is also unclear how effective this measure is in helping a user with error correction
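Given the S, D, I counts from a standard alignment and the reference length N, Hunt's weighting is a one-liner (a sketch; the function name and signature are mine):

```python
def weighted_wer(S, D, I, N):
    """Hunt (1990): deletions and insertions weighted at 0.5,
    so a substitution costs the same as a deletion plus an insertion."""
    return (S + 0.5 * D + 0.5 * I) / N
```

With S=1, D=1, I=0 and N=6, the standard WER is 2/6 ≈ 0.33 while the weighted variant gives 1.5/6 = 0.25.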
About: based on the harmonic mean of unigram precision and recall, with recall weighted higher than precision
Includes: exact word, stem and synonym matching
Designed to fix some of the problems found in the BLEU metric, while also producing good correlation with human
judgement at the sentence or segment level (unlike BLEU which seeks correlation at the corpus level).
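The recall-weighted harmonic mean can be sketched as follows (a simplified sketch: exact unigram matches only, no stem/synonym matching and no fragmentation penalty; α = 0.9 is METEOR's original recall weighting, and the function name is mine):

```python
from collections import Counter

def fmean(reference, hypothesis, alpha=0.9):
    """Weighted harmonic mean of unigram precision and recall.

    alpha = 0.9 weights recall 9x over precision:
    F = P*R / (alpha*P + (1 - alpha)*R)
    """
    ref, hyp = Counter(reference.split()), Counter(hypothesis.split())
    matches = sum((ref & hyp).values())  # clipped unigram matches
    if matches == 0:
        return 0.0
    p = matches / sum(hyp.values())  # unigram precision
    r = matches / sum(ref.values())  # unigram recall
    return p * r / (alpha * p + (1 - alpha) * r)
```

For example, with reference "the cat" and hypothesis "the cat sat": P = 2/3, R = 1, and F = (2/3) / (0.9·2/3 + 0.1·1) = 20/21 ≈ 0.952 — the low precision is penalized far less than a low recall would be.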
Number of edits (word deletions, insertions, substitutions and shifts) required to make a machine translation
exactly match the closest reference translation in fluency and semantics
$ TER = \frac{E}{R} $, where $E$ is the minimum number of edits and $R$ the average length of the reference text
It is generally preferred over BLEU for estimating sentence post-editing effort.
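A simplified TER sketch, under stated assumptions: it ignores TER's shift operation (so it is an upper bound on the true TER), uses whitespace tokenization, and the function names are mine.

```python
def edit_distance(ref, hyp):
    """Word-level Levenshtein distance, rolling single-row variant."""
    d = list(range(len(hyp) + 1))
    for i, rw in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(hyp, 1):
            prev, d[j] = d[j], min(prev + (rw != hw),  # match / substitution
                                   d[j] + 1,           # deletion
                                   d[j - 1] + 1)       # insertion
    return d[len(hyp)]

def ter(references, hypothesis):
    """Simplified TER: minimum edits to the closest reference,
    divided by the average reference length (shifts not modeled)."""
    hyp = hypothesis.split()
    refs = [r.split() for r in references]
    e = min(edit_distance(r, hyp) for r in refs)
    avg_len = sum(len(r) for r in refs) / len(refs)
    return e / avg_len
```

For example, `ter(["a b c d"], "a b c")` needs one deletion against a 4-word reference, giving 0.25.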