Watching the words go by – transcribing, spotting, captions or subtitles?

Delivering quality subtitles requires good knowledge of words, and not just the words that need to be translated. Some technical terms are often used interchangeably by both clients and translators, which can lead to confusion. Here are some definitions of the basic terms to help get you started.

Transcription or spotting?

Transcription is a textual rendering, word for word, of what is said in an audio recording. Usually, transcription is delivered in a simple text file (e.g., a Word document or PDF) and does not require the use of a specific tool. Transcription can be useful for meetings, radio shows and podcasts for a number of reasons beyond simply creating a written record. For instance, having a transcript of a podcast on your website can serve search engine optimization (SEO) purposes. Since it does not include precise timecoding, simple transcription is often not adequate for use with video.

Video transcription requires a specific type of timecoding called spotting. Spotting accurately determines when and for how long a subtitle is shown. The dialogue (and, in the case of captioning, any important background sounds) is timecoded and segmented so that it can be read on screen, while still adhering to per-line character limits, line breaks and other standards. Unlike transcription, spotting is carried out using special software, such as Amara or Aegisub, that enables a user to generate .srt files (or similar file types) with time codes.
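To give a rough idea of what spotting produces (the cue numbers, timecodes and dialogue below are invented for illustration), an .srt file pairs each numbered cue with a start and end timecode in the format hours:minutes:seconds,milliseconds, followed by the text to display:

```
1
00:00:01,500 --> 00:00:04,000
Welcome to today's episode.

2
00:00:04,200 --> 00:00:07,800
Let's start with a few
basic definitions.
```

Note how the second cue is broken across two lines so that each line stays within the character limit, and how the gap between the cues keeps the text readable.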

Captions or subtitles?

The use of the word “subtitle” as a catch-all term for dialogue transcribed on-screen is so widespread that subtitling is sometimes confused with captioning. But if you have ever watched a movie in English with English subtitles, you may have noticed more than just dialogue on screen. That’s because you were actually watching captions.

Captions include a transcript of the spoken dialogue and a description of important non-speech sounds that can be heard in the video. For instance, when a telephone rings in the background or an actor laughs, those sounds are transcribed as well. Captions are generally intended for the hearing impaired and can either be closed captions (CC), which can be turned on and off, or open captions, which are hard-coded into the video. Creating captions requires spotting, which is why Supertext delivers captions to clients as .srt files.
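In practice (the cue below is invented for illustration), a caption cue renders such a background sound, conventionally set in square brackets, alongside the dialogue:

```
14
00:01:12,000 --> 00:01:15,300
[telephone rings]
I'd better get that.
```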

Subtitles are a transcript of spoken dialogue only. Choosing to subtitle a video assumes that the viewers can hear the content but do not understand the language. This is why you generally see subtitles being used to adapt dialogue from a foreign language. The dialogue is translated by a native speaker who will adapt it to their mother tongue. The difficulty lies in capturing the same meaning as the original and providing a comfortable reading experience while still respecting time codes. The best service in this case, then, is transcreation rather than simple translation.

There are different methods of creating subtitles. The first is to have the spotting carried out in the original language by a native speaker and then have the resulting .srt file translated. A bonus of this approach is that the original-language .srt file can double as same-language subtitles whenever background sounds do not need to be included. The second is to have the spotting done directly by the translator, who translates and spots at the same time.
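The first method can be pictured as follows (cue text invented for illustration): the translator replaces only the text of each cue, while the cue numbers and timecodes from the original spotting are kept unchanged:

```
3
00:00:09,000 --> 00:00:12,500
Thanks for joining us.

3
00:00:09,000 --> 00:00:12,500
Danke, dass Sie dabei sind.
```

Because the timing is fixed in advance, the translated text must fit the same on-screen duration as the original, which is part of what makes subtitle translation closer to transcreation.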

Whether you need a transcription, subtitles or captions for your video – we roll out the red carpet for you.

Cover image via Twenty20
