Haven’t we all come across that one video where the captions are unintelligibly out of sync or oddly placed? It’s annoying, right? Especially if you’re banking on the captions wholly to help you understand what’s going on in the video.
While video producers are aware of the benefits of captioning videos, captions are often added as an afterthought. This pushes producers toward options like crowdsourced captions (with inaccuracy rates of approximately 5%) or a video editor's automatic closed-captioning feature (with inaccuracy rates of 5%–50%).
So, to put it simply, high-quality captions are those that are not added as an afterthought.
Now, the FCC regulations for closed captioning cover all programs broadcast on television (including movies and music videos), but among videos shared on the internet, they cover only those that have previously been broadcast on television by a public network (anything from movies and music videos to workout videos).
This, in turn, prompts many video producers to take the captioning regulations lightly, even though captioning would do a great deal to grow their audience.
Characteristics and legal compliance
Honestly, out-of-sync and ill-placed captions will do you more harm than good.
To boost inclusivity, keep your viewers loyal and happy, and fulfill legal compliance conditions, invest time in rigorous quality control of your captions before uploading or broadcasting your videos.
According to the FCC regulations for closed captioning, all spoken words or song lyrics, background noises, and nuances must be sufficiently and comprehensively expressed through captions for inclusivity. Caption creators are also responsible for ensuring that the captions are accurate in terms of grammar, punctuation, and spelling, and even dialects and accents.
Essentially, high-quality closed captions should aid comprehension and enrich the viewing experience of viewers who depend on such assistance.
The prime goal of including closed captions in videos is to provide viewers who are hard of hearing a comprehensive and clear viewing experience. But if the captions lag behind fast speech or appear on screen before a line is spoken, their timing must be adjusted so they add to, rather than detract from, the viewing experience.
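For file-based captions, synchronization comes down to cue timestamps. A minimal sketch in the WebVTT format (the timestamps and dialogue here are purely illustrative) shows each cue window matched to when its line is actually spoken, rather than starting early or running late:

```
WEBVTT

1
00:00:03.200 --> 00:00:06.000
Welcome back. Today we're looking at caption timing.

2
00:00:06.000 --> 00:00:08.500
Each cue should appear as its line is spoken, not before or after.
```

In practice, quality control means playing the video back and nudging these cue boundaries until the text tracks the audio.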
The legal guidelines for closed captioning outline specific rules about the placement of captions in videos, detailing font type as well as size along with other best practices of caption placement.
Now, to improve the experience of your viewers, you should ensure that the captions do not block important portions of the video, such as faces, credits, or essential graphics. Line spacing (that is, ensuring that captions do not overlap) and font color are also important factors that aid video comprehension.
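Caption formats such as WebVTT expose cue settings that control where a caption is rendered, which lets you move text away from faces or lower-third graphics. A hedged sketch (the `line` and `align` values below are illustrative, not prescriptive):

```
WEBVTT

1
00:00:12.000 --> 00:00:15.000 line:10% align:center
[A door slams in the distance.]

2
00:00:15.500 --> 00:00:18.000 line:85% align:center
The lower third is clear now, so this cue sits at the bottom again.
```

Here `line:10%` pushes the first cue to the top of the frame, a common workaround when on-screen graphics occupy the usual bottom position.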
The guidelines imply, without spelling it out, that captions should depict all aspects of the video from beginning to end, including all speech and sounds.
This relatively ambiguous instruction leads transcribers to interpret completeness quite literally, often leading to hilarious outcomes.
Which sounds are significant?
Completeness certainly contributes to the accuracy of captions, but since literal completeness sometimes backfires, you may be tempted to ask whether every sound in a video is significant.
In this context, this Texas Tech University study raises some important questions about the rhetoric of captions:
What do captioners need to know about a text or plot in order to honor it? Which sounds are essential to the plot? Which sounds do not need to be captioned? How should genre, audience, context, and purpose shape the captioning act?
In an ideal scenario, following a video from the beginning gives you a fair idea of its plot, and after a few minutes, visual cues can replace some of the most obvious captions.
Example 1: Imagine you’re watching a suspenseful scene in which the protagonist is visibly panting. A caption such as “[X breathes heavily]” would be redundant. What would, in fact, add to the viewing experience is captioning background noise, for example, “[A door slams in the distance.]”
Example 2: In an e-learning video, captioning the course guide clearing their throat would be redundant, since it adds nothing to the takeaway of the video.
Simply put, the dilemma of to caption or not to caption can be easily resolved by gauging the value-add of a captioned portion in a specific context.
Roughly 15% of adults in the U.S. (about 37.5 million people) “report some trouble hearing” according to the NIDCD, which makes high-quality closed captions an indispensable part of videos.
Moreover, many hearing users who are “situationally disabled” (for instance, in a noise-sensitive environment like a library or on public transport) also benefit from accurately captioned videos.
To conclude, whether to caption your videos is a question best left in the past. Instead, think about how you can go beyond compliance to provide the best possible viewing experience for your videos.
At iScribed, we ensure 99% accuracy in closed captioning and video transcripts – let’s talk?