Captioning Quality Audit

Quality is front and centre of everything we do at Ai-Media. Our caption quality is built on a robust, measurable and verifiable system consistent with international best practice.

Our caption quality is regularly measured through independent quality assessments using the NER* methodology. Audit reports are made available to our clients and underpin our commitment to continuous improvement.

Quarter ended    Number of programs sampled    NER score (average)    Uptime greater than 99.9%
Sep-21           13                            99.4
Jun-21           13                            99.2
Mar-21           13                            99.4
Dec-20           13                            99.5
Sep-20           13                            99.8
Jun-20           13                            99.5
Mar-20           13                            99.6
Dec-19           13                            99.5
Sep-19           13                            99.3
Jun-19           13                            99.4
Mar-19           13                            99.4
Dec-18           13                            99.5
Sep-18           13                            99.5
Jun-18           13                            99.5
Mar-18           13                            99.3
Dec-17           13                            99.5
Sep-17           13                            99.5
Jun-17           13                            99.4
Mar-17           13                            99.6
Dec-16           13                            99.6
Sep-16           13                            99.4
Jun-16           13                            99.5
Mar-16           13                            99.6
Dec-15           13                            99.6
Sep-15           13                            99.3
Jun-15           13                            99.4
Mar-15           13                            99.3
Dec-14           13                            99.3
Sep-14           13                            99.2
Jun-14           13                            99.5
Mar-14           13                            99.1

*The NER model is a method of calculating the accuracy (quality) of live captions. NER stands for Number, Edition error and Recognition error.

A NER value of 100 indicates that the content was captioned entirely correctly.

The NER score can be calculated using the following formula:

\[ \text{NER value} = \frac{N - E - R}{N} \times 100 \]

N (Number) = the total number of words in the live captions.

E (Edition errors) = errors introduced when the spoken content is edited for captioning, such as omissions or paraphrasing that changes the meaning.

R (Recognition errors) = errors introduced when speech is misrecognised, such as a wrong word produced by a respeaker's speech-recognition software or a stenographer.
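To make the arithmetic concrete, here is a minimal Python sketch of the calculation. It is an illustration only, not Ai-Media's audit tooling: the function name ner_value is our own, and the full NER methodology also weights individual errors by severity, which this sketch sidesteps by accepting pre-weighted error totals.

```python
def ner_value(n_words: int, edition_errors: float, recognition_errors: float) -> float:
    """Compute the NER value: ((N - E - R) / N) * 100.

    n_words            -- total number of words in the live captions (N)
    edition_errors     -- total edition-error score (E), pre-weighted
    recognition_errors -- total recognition-error score (R), pre-weighted
    """
    if n_words <= 0:
        raise ValueError("N must be a positive word count")
    return (n_words - edition_errors - recognition_errors) / n_words * 100

# Example: 1,000 captioned words with 3 edition errors and 2 recognition
# errors gives a NER value of 99.5, comparable to the audited scores above.
print(ner_value(1000, 3, 2))  # 99.5
```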
