
FCC Addresses Closed Captioning Accuracy

What does the FCC Report and Order (CG Docket No. 05-231) mean when it states that closed captioning needs to be accurate? Hasn’t this always been a requirement? Well, not exactly. Previous regulations simply stated that closed captioning was required; because they never addressed quality, accuracy varied widely. Even without accuracy regulations in force, the Commission received 2,323 viewer complaints about general closed captioning issues from 2009 to 2013, a figure that likely understates the actual problem, since until now viewers had little motivation to voice their concerns.

In a few weeks, all post-produced video programming must be captioned by an offline caption editor. An offline or post-production captioner is trained in the various captioning rules: correct punctuation and spelling, synchronicity, caption placement, reading speed, and so on. In the past, a live captioning style (writing with a steno machine and paraphrasing the spoken word) could be used for post-produced programs even though they were not actually airing in a live format. Come January 15, 2015, this will no longer be acceptable.

According to the report, closed captioning must now be verbatim. This means captions must match the spoken words precisely; paraphrasing is not an option. If a song with lyrics plays in the program, those lyrics must be captioned. Nonverbal information relevant to the meaning of the program, such as sound effects, audience reactions, speaker identification, and background noise, must also be captioned.

Still have questions regarding closed captioning accuracy or the new FCC requirements? Contact one of the experts at Aberdeen Broadcast Services.

New FCC Closed Captioning Laws to Focus on Quality

The deadline of January 15, 2015, is quickly approaching, as the FCC’s closed captioning rules begin to weigh in on some much-needed quality issues. Aberdeen Broadcast Services is here to help break down the new FCC Report and Order released earlier this year (CG Docket No. 05-231) into concise, easy-to-understand guidelines. Our goal is to help producers enter the new year confident that their programs comply with the new FCC closed captioning requirements.

The FCC issued its first set of closed captioning requirements over sixteen years ago to ensure that all Americans, including the deaf and hard of hearing, have access to video programming. Mandating that programs carry closed captions was a great start at accessibility, but quality control was the next step, as the original rules were fairly basic: closed captioning simply needed to be present on the screen. Now the FCC has adopted captioning quality standards and technical compliance rules to ensure that captions best replicate the experience of television programming for all audiences.

Quality closed captioning is the result of teamwork and compliance between video programmers and captioning vendors. The Commission offers a list of operational best practices to follow to ensure the highest level of closed captioning. These suggestions include providing vendors with advance scripts, proper names, and song lyrics, as well as high-quality audio. Pre-recorded shows should be captioned by offline captioners and spot-checked before and during broadcast to catch any closed captioning issues.

Closed captioning vendors are also provided a list of best practices to follow to ensure captioning is verbatim and free of errors. The new quality standards focus on four key areas: accuracy, synchronicity, completeness, and placement. A basic overview of these areas is as follows:

  • Closed captioning must match the spoken words in the original language without paraphrasing. Song lyrics and nonverbal information, such as the identity of the speaker and any sound effects or audience reactions present in the program, must be captioned.
  • Captions need to be accurately synchronized to match the video and audio content and displayed at a readable speed.
  • Captions are required to be complete and present through the full length of the program.
  • Lastly, proper placement dictates that captions cannot block important visual content such as speakers’ faces and any graphics or text on the screen.

Don’t risk uncertainty and gamble on the possibility of rejected content. Contact Aberdeen Broadcast Services with any questions. For more information on FCC closed captioning laws, visit: Closed Captioning Quality Report and Order, Declaratory Ruling, FNPRM.

Voice-Over vs. Dubbing: Two Sides of the Same Tone

If you’ve ever watched an old noir film, the kind where the troubled narrator rambles on about his dire circumstances in worried existential grief, then you’re probably familiar with voice-over. Employed in various ways in cinema, where it has earned iconic status in pop culture, the technique actually has a more common, practical use in day-to-day news and radio.

When an interviewee speaks a foreign language, production companies typically use voice actors to record over the original audio. This way, the viewer hears the interviewee in the background speaking his or her language while the voice actor interprets. In most cases, the voice actor’s track is considerably louder and lags a few seconds behind the original audio. This technique is useful because it allows the viewer to both hear and understand the speaker’s words at the same time. It is typically referred to as UN-style voice-over.

Another audiovisual process is called dubbing. Not to be confused with the electronic music genre (yes, that one), dubbing is when all the elements of sound, including the original production track and any additional recordings, are mixed together to make a complete soundtrack. In the video production world, “dubbing” is used when the original speaker’s audio track is replaced entirely by the voice actor’s. Contrary to UN-style voice-over, which preserves the original track underneath the voice actor, the dub must be carefully timed and synchronized to match the speaker’s lips, meaning, and even intonation. To be more specific, this is often referred to as lip-sync dubbing. As you might imagine, the process is arduous and lengthy; oftentimes the voice actor must work with editors in a studio, re-recording segments where the audio and visuals struggle to match.

Looking for someone to do your voice-over/dubbing work? Aberdeen wants to take care of it for you! For more information, please visit: AberLingo Language Services.

The Writing’s on the Glass: Instant Captions for Daily Interactions

What if captioning wasn’t limited to multimedia and entertainment mediums and could transcend, by way of tech devices, into day-to-day interactions with others? It’s what app developers at Georgia Tech are trying to accomplish—real-time captions of real-life conversations. The question is, does it really work?

Some are calling it “instant captions,” a concept rarely synonymous with accuracy (except when real-time captioners are involved). All you need is a smartphone and Google Glass (the Explorer model goes for a measly $1,500). One need only speak into the smartphone’s microphone, and the app turns the spoken dialogue into a transcript. That transcript is then converted into captions displayed on the user’s glassware. There’s a slight delay, but the results appear accurate and (arguably) promising.
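
For the technically curious, the core of that pipeline, speech captured by the phone’s microphone, converted to text, then pushed to a display, can be sketched in a few lines of Python. This is a toy illustration only, built on the third-party speech_recognition package rather than whatever the Georgia Tech app actually uses:

```python
# Toy sketch of the pipeline: microphone -> speech-to-text -> displayed caption.
# Assumes the third-party speech_recognition package (and PyAudio) is installed;
# this is illustrative only, not the Georgia Tech app's actual stack.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)  # calibrate for background noise
    print("Listening...")
    audio = recognizer.listen(mic)

try:
    # Google's free web API stands in for the app's recognition engine.
    caption = recognizer.recognize_google(audio)
    print(f"[CAPTION] {caption}")  # the real app would render this on the glassware
except sr.UnknownValueError:
    print("[CAPTION] (inaudible)")
```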

One problem is convenience. Will deaf and hard-of-hearing users mind wearing the device whenever they are out and about? More concerning is the dependence on a smartphone; users will need to hold their phones up to others in order to generate captions, which poses a challenge in social settings (really, though, it’s no different from journalists wielding audio recorders for an interview).

Consensus: For now, it looks best suited for interactions with friends and family, at least until the smartphone is no longer required for setup, something that may change once the microphone on Google Glass improves.

Looking for live captioning services? For more about our real-time captioning, check out Live Captioning | Aberdeen Broadcast Services.

Where’s the Closed Captioning?

Candidates in the Maryland gubernatorial campaigns created some distress among viewers when they recently aired television campaign ads without any closed captioning. The ads ran during a forum hosted by the National Federation of the Blind that focused on employment, housing, and transportation for individuals with disabilities. When asked why there were no captions, Lt. Gov. Brown said that cost was a factor and that his campaign lacked the necessary funds. Another candidate was unaware why there were no captions but agreed that it was something that needed to change in the future.

This failure to provide closed captioning is not limited to Maryland. Candidates running in Pennsylvania were also accused of not captioning their ads, and thus of not reaching a large percentage of voters. Technically, the FCC does not require these ads to be closed captioned, but candidates would certainly benefit from providing captions in order to spread their message and reach all potential voters.

Do you have a video that needs closed captioning? Or do you have questions regarding FCC regulations, different types of closed captioning, multi-language and translation services, or potential file delivery options? For more information, visit http://abercap.com/contact/

5 Ways to (Not) Screw Up Your Digital Video for Broadcast Television

Unfortunately, professional broadcast video production is not as easy as uploading a video from your iPhone to your YouTube account. Creating a quality program for broadcast television is extremely detailed and technical, requiring thousands of dollars of high-tech equipment and many years of experience to ensure each facet of the process is completed with the highest level of quality and skill. Below are five easy ways to (not) screw up your video. Keep these tips in mind the next time you hit ‘Export’ and you’ll improve the quality of your next broadcast.

#1: Keep it Native

Quality loss can come in many forms, and with digital video it can be even worse than the traditional ‘generation loss’ of dubbing tape to tape if not handled correctly. A big mantra here at Aberdeen is to “Keep it Native.” This means maintaining the same format and specifications from shooting, through post, and to the final deliverable. Taking the time to preserve the highest level of quality through each step of the production and post processes will ensure minimal degradation in the finished program.

It all starts in the camera. It is best practice to choose the resolution and aspect ratio that best fit the highest level of current (and possibly future) broadcast television outlets. These days, that usually means choosing the right HD format and maintaining that 1080 or 720 resolution and fielding throughout the post process. Once the content is acquired in the desired format, the next step is to get the assets into your editing system without quality loss. The brand of recording/editing equipment in place will dictate which codecs should be used. Remain true to our “Native” mantra by keeping the frame rate, field order, and resolution the same as the acquired format, no matter the codec. This will ensure the highest-quality finished product.
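
As a practical sanity check, here’s a minimal sketch of verifying a clip’s native specs before you build your sequence. It assumes ffmpeg’s ffprobe tool is installed, and the file name is hypothetical:

```python
# Minimal sketch: confirm a clip's native specs before building a sequence.
# Assumes ffmpeg's ffprobe is installed; the file name is hypothetical.
import json
import subprocess

def probe_video(path):
    """Return width, height, frame rate, and field order of the first video stream."""
    cmd = ["ffprobe", "-v", "error", "-select_streams", "v:0",
           "-show_entries", "stream=width,height,r_frame_rate,field_order",
           "-of", "json", path]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return json.loads(out)["streams"][0]

specs = probe_video("master_1080i.mov")
print(specs)
# e.g. {'width': 1920, 'height': 1080, 'r_frame_rate': '30000/1001', 'field_order': 'tt'}
# Set your sequence (and final export) to these same values to stay native.
```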

#2: Mixed Media on the Same Timeline/Sequence

Field dominance issues are among the most common problems we see at Aberdeen these days.

With a lot of legacy footage shot in standard definition (SD) and the widespread adoption of high definition (HD) cameras, it is very common to see both SD- and HD-originated content in the same program. Fielding issues arise because SD footage is commonly lower-field dominant, whereas HD footage is only upper-field or progressive. This discrepancy causes playback problems because the SD footage (often lower-field) needs to be played in the opposite field order from the HD content. Today’s NLE exports are based on the sequence settings, not the individual clip settings, so two clips with opposite field orders on the same sequence almost guarantee that one of them will look poor on export. It is the editor’s job to convert each clip to the field order of the sequence; otherwise, there will be motion and resolution issues in the exported video. Ghosting, flicker, and motion judder are all signs of incorrect or incompatible field order in the baseband video. This issue is often overlooked now that LCD monitors have replaced the older, more expensive interlaced CRT monitors; LCDs use progressive-scan technology that will not accurately display your interlaced footage the way a CRT will.
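
To catch these mismatches before export, you can batch-check your source clips against the sequence’s field order. Below is a minimal sketch, again assuming ffprobe is available; the clip names and the sequence setting are hypothetical:

```python
# Minimal sketch: list each clip's field order so mismatches with the
# sequence can be caught before export. Assumes ffprobe is installed;
# clip names and the sequence's field order are hypothetical.
import json
import subprocess

SEQUENCE_FIELD_ORDER = "tt"  # an upper-field-first (TFF) HD sequence

def field_order(path):
    """Return the field order ffprobe reports for the first video stream."""
    cmd = ["ffprobe", "-v", "error", "-select_streams", "v:0",
           "-show_entries", "stream=field_order", "-of", "json", path]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return json.loads(out)["streams"][0].get("field_order", "unknown")

for clip in ["interview_hd.mov", "archive_sd.dv"]:
    order = field_order(clip)
    if order != SEQUENCE_FIELD_ORDER:
        print(f"{clip}: field order is '{order}', convert to match the sequence")
```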

#3: Center-cut Safe HD

Every HD station/network broadcasts both an HD and an SD signal simultaneously from a single source video element (commercial, promo, feature, etc.). If the originating source is SD, the HD feed up-converts it (usually adding pillar bars to the sides of the 4×3 video) for HD viewers. For HD source elements, the station’s SD feed gets an automatic down-convert of the originating HD format. Here’s where the issue comes up: stations want their SD viewers to see a 4×3 full-screen program, not letterboxed content. To that end, all HD source programs are automatically down-converted by center-cutting the HD source for the SD feed, meaning any graphics or visual content outside the 4×3 raster will be cut off. This process should be considered when creating and positioning HD graphics. Incorrectly positioned graphics (outside the 4×3 raster) are a commonly overlooked issue that can result in the program being rejected by the station or airing with cut-off content on 4×3 TVs, because the HD program graphics are not center-cut safe for the network’s down-converted SD signal.
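
The safe-area math is simple. For a 1920×1080 frame, a 4×3 raster at the same height is 1440 pixels wide, which leaves 240 pixels cropped from each side. A quick sketch:

```python
# Minimal sketch: compute the center-cut (4x3) safe region of an HD frame
# so graphics can be positioned to survive the SD down-convert.
def center_cut_bounds(width, height):
    """Return the (left, right) x-coordinates of the 4:3 raster centered in the frame."""
    safe_width = height * 4 // 3        # width of a 4x3 raster at the same height
    margin = (width - safe_width) // 2  # pixels cropped from each side
    return margin, width - margin

left, right = center_cut_bounds(1920, 1080)
print(left, right)  # 240 1680 -> keep critical graphics between x=240 and x=1680
```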

#4:  Broadcast Legal Chroma/Luma/Gamut

Television stations still mandate strict values for chroma, luma, and RGB gamut. Today’s cameras are not restricted to the color and brightness values required by these broadcasters. To have your program accepted by the station’s ingest operators, it is essential to use the NTSC/Broadcast Safe filters available on every NLE system. Generally, using the most conservative preset with no values over 100 will correctly tame the hot signals that modern cameras capture.
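
Conceptually, a Broadcast Safe filter is just clamping pixel values into the legal studio range. Here’s a minimal sketch of the idea in NumPy, using the common 8-bit limits of 16-235 for luma and 16-240 for chroma (your NLE’s filter is more sophisticated, but this is the gist):

```python
# Minimal sketch: clamp 8-bit Y'CbCr values to studio ("broadcast safe") range,
# the same basic idea behind an NLE's Broadcast Safe filter.
import numpy as np

def clamp_studio_range(ycbcr):
    """Clip luma to 16-235 and chroma to 16-240 (8-bit studio swing)."""
    safe = ycbcr.copy()
    safe[..., 0] = np.clip(safe[..., 0], 16, 235)    # Y' (luma)
    safe[..., 1:] = np.clip(safe[..., 1:], 16, 240)  # Cb, Cr (chroma)
    return safe

# Hypothetical frame full of out-of-range ("hot") values:
frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
legal = clamp_studio_range(frame)
```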

#5:  Audio Peaks and Loudness

Recording and mixing digital audio to the 0dB level is far too hot; broadcast television outlets will not accept it. Here’s why: once an audio signal passes the 0dB threshold, the signal can no longer be captured. This is referred to as “clipping,” and there is no regaining the audio information lost to it. To prevent this, it is best practice not to drive the audio to the 0dB limit. “Digital headroom” is the term for how far the audio peak level sits below the 0dB point. Broadcasters usually recommend at least 6dB of headroom to prevent their broadcasts from clipping, meaning program peaks should not meter above -6dB.
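
To make the arithmetic concrete, here’s a minimal sketch of measuring a mix’s peak level in dBFS and checking for the recommended 6dB of headroom; the sample data is synthetic, standing in for a real mix:

```python
# Minimal sketch: measure a mix's peak level in dBFS and check for 6dB of
# headroom. The sample data is synthetic, standing in for a real mix.
import numpy as np

def peak_dbfs(samples):
    """Peak of float samples (full scale = 1.0) in dB relative to 0dBFS."""
    peak = np.max(np.abs(samples))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

mix = 0.4 * np.sin(np.linspace(0, 1000, 48000))  # peaks at 0.4, about -8dBFS
level = peak_dbfs(mix)
print(f"peak: {level:.1f} dBFS -> {'OK' if level <= -6.0 else 'too hot'}")
```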

*Engineer’s Best Tip* Audio loudness is now being monitored closely thanks to a US Congressional law (the CALM Act), which mandates that US broadcasters comply with the ATSC A/85 audio standard of -24 LKFS +/- 2dB. In our analysis of 25,000 files, more than 90% of programs with audio peaks mixed to between -8 and -10dB had a loudness measurement inside this -24 LKFS +/- 2dB legal range. This is an easy ballpark indicator that you should be in, or very close to, compliance with the new mandatory standard.
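
If you’d like to check loudness yourself, ffmpeg’s ebur128 filter reports integrated loudness. Here’s a hedged sketch that parses the measurement and compares it against the -24 LKFS +/- 2dB window (it assumes ffmpeg is installed, the file name is hypothetical, and LUFS and LKFS are the same unit):

```python
# Minimal sketch: measure integrated loudness with ffmpeg's ebur128 filter and
# compare it against the ATSC A/85 target. Assumes ffmpeg is installed; the
# file name is hypothetical. (LUFS and LKFS are the same unit.)
import re
import subprocess

def integrated_loudness(path):
    """Parse integrated loudness (in LUFS) from ffmpeg's EBU R128 summary."""
    cmd = ["ffmpeg", "-nostats", "-i", path, "-af", "ebur128", "-f", "null", "-"]
    out = subprocess.run(cmd, capture_output=True, text=True).stderr
    match = re.search(r"I:\s*(-?\d+(?:\.\d+)?)\s*LUFS", out)
    return float(match.group(1)) if match else None

loudness = integrated_loudness("program_mix.mov")
if loudness is not None:
    ok = abs(loudness - (-24.0)) <= 2.0
    print(f"{loudness} LUFS -> {'legal' if ok else 'outside -24 +/- 2'}")
```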

Looking for help with your file delivery or digital encoding? Send us an email here for more information or check out our AberFast Division online here.

Attention: New FCC Captioning Regulations!

On July 11, the Federal Communications Commission approved new rules requiring closed captioning of video clips posted online. The rules are essentially an amendment to the previous regulation that required full-length videos to contain closed captioning: now, if the original video aired on television with captions, any short segment or clip shown on the Internet must also contain closed captioning. As of January 2016, “straight lift” clips taken directly from one program must have closed captions, while montages of clips taken from multiple programs will have to meet the July 2017 deadline. For more information on FCC captioning requirements, please visit the Federal Communications Commission website.

EIA-708 Turns 12

It’s been twelve years since the FCC ushered in a new era of closed captioning.

On July 2, 2002, the Federal Communications Commission mandated that all digital televisions include an EIA-708 caption decoder, giving viewers new options to change the captions’ font, color, and size according to preference, an advance in the captioning world comparable to television’s leap from monochrome to color tube sets.

In addition to letting viewers alter the text, EIA-708 supports up to eight caption windows with fewer constraints than EIA-608, the original (and primitive) closed captioning standard of the analog era. These windows provide added freedom when positioning captions in a specific location, which helps when a viewer wants to move captions away from on-screen graphics.

For more information about the differences between 608 and 708 captions, check out: The Basics of 608 vs. 708 Captions.

Add Captions, Expand Your Audience

If providing access to the deaf and hard of hearing lacks incentive, will more YouTube views persuade you? YouTube creators today are forced to look for new and exciting ways to attract viewers. Meanwhile, it’s only getting harder to stand out in the ever-growing, overcompetitive, viral-hungry, trend-hopping, video-sharing ecosystem that is YouTube (see the video host’s latest press release for an idea; the numbers are staggering, unsurprising, and deservedly proud, boasting that over 6 billion hours of YouTube are watched monthly).

Although intended for the deaf and hard of hearing community, captions are providing lesser-known secondary benefits to an unlikely recipient: YouTubers. Content creators are using captions, and they’re doing it for more hits. With rewarding incentives from YouTube, creators are reaping the benefits of “popularizing” their content by tapping into larger audiences, albeit in a few unlikely places.

Your potential viewers may not have hearing impairments, but they sure value captions. Consider places where audio access is limited: the workplace or a library. Depending on the subject of your content, the video may be densely packed with information and details, such as an interview with rapid-fire discussion or a self-help walkthrough explained over a series of steps. English learners and students also value captions, which increase engagement with the video and improve overall comprehension. In turn, you’ll be opening the door to new viewers.

Arguably, the most attractive benefit captions bring to videos is the assimilation of your content into Google’s search results. Any captioning your video contains will be indexed by Google, allowing others to discover your videos far more easily while surfing the web. But beware of automatic captions: YouTube provides them, but they generate text via speech recognition technology, so inaccuracies abound. Because automatic captions are prone to errors and reflect your content poorly, Google will not index them. Still, they can save you a lot of time in the long run: turn YouTube’s automatic captions on and edit the transcript manually. You may prefer this method to captioning the whole video from scratch.

If you manage to create captions for your video, you have the foundation to elevate your content to an international level. Once you have an English transcript, you can translate the text into subtitles and open your video to a large and avid market of global users (80% of YouTube’s views come from outside the U.S.).
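
If you’d rather build the caption file yourself, the SubRip (.srt) format YouTube accepts is plain text and easy to generate. Here’s a minimal sketch; the cue timings are hypothetical, since real captions need accurate timestamps:

```python
# Minimal sketch: write a transcript as a SubRip (.srt) caption file,
# a simple plain-text format YouTube accepts for uploaded captions.
# Timings here are hypothetical; real captions need accurate timestamps.
def srt_time(seconds):
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

cues = [(0.0, 3.5, "Welcome to the channel."),
        (3.5, 7.0, "Today we're talking about captions.")]

with open("captions.srt", "w", encoding="utf-8") as f:
    for i, (start, end, text) in enumerate(cues, 1):
        f.write(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n\n")
```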

On a related note, our friend Jamie Berke, author of the Caption Action 2 blog, recently informed Aberdeen of a growing fraud in which users are circumventing YouTube’s caption incentives. You can read all about it in Jamie’s blog post: Watch Out for Fake Captioning!

AberLingo: Translation Services to Meet Global Needs

Did you know that the official language of Ethiopia is Amharic? Aberdeen Broadcast Services recently worked on a subtitle project that was translated into Amharic, and our subtitle editor was excited to see the new font and how different and beautiful the Amharic characters looked. Amharic is just one of nearly 70 languages we have translated. For more information on our AberLingo department and our translation, multi-language subtitling, and voice-over services, visit http://abercap.com/services/language/