Blog

Voice-Over vs. Dubbing: Two Sides of the Same Tone

If you’ve ever watched an old noir film—the ones where the troubled narrator rambles on about his dire circumstances in worried existential grief—then you’re probably familiar with voice-over. Cinema has employed the technique in ways that have made it iconic in pop culture, but voice-over actually has a more common, practical use in day-to-day news and radio.

When an interviewee speaks a foreign language, production companies typically use voice actors to record over the original audio. This way, the viewer hears the interviewee speaking his or her language in the background while the voice actor interprets. In most cases, the voice actor’s track is much louder and lags a few seconds behind the original audio. This technique is useful because it allows the viewer to both hear and understand the speaker’s words at the same time. It is typically referred to as UN-style voice-over.
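
Curious what that mix looks like in practice? Here’s a minimal sketch using ffmpeg’s volume, adelay, and amix audio filters, called from Python. The filenames, the 20% ducking level, and the two-second lag are all illustrative assumptions, not a house formula.

    import subprocess

    # UN-style voice-over sketch: duck the original speaker, delay the
    # interpreter's track by ~2 seconds, and mix the two into one file.
    # "original.wav" and "voiceover.wav" are placeholder filenames.
    subprocess.run([
        "ffmpeg", "-i", "original.wav", "-i", "voiceover.wav",
        "-filter_complex",
        "[0:a]volume=0.2[bg];"        # original track at 20% volume
        "[1:a]adelay=2000|2000[vo];"  # voice actor lags 2000 ms (both channels)
        "[bg][vo]amix=inputs=2:duration=first",
        "un_style_mix.wav",
    ], check=True)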

Another audiovisual process is called dubbing. Not to be confused with the electronic music genre (yes, that one), dubbing is the process of mixing all the elements of sound, including the original production track and any additional recordings, into a complete soundtrack. In the video production world, the term “dubbing” is used when the original speaker’s audio track is replaced entirely by the voice actor’s. Unlike voice-over (UN-style), which preserves the original track underneath the voice actor, the dub must be carefully timed and synchronized to match the speaker’s lips, meaning, and even intonations. To be more specific, this is often referred to as lip-sync dubbing. As you might imagine, the process is arduous and lengthy; oftentimes the voice actor must work with editors in a studio, re-recording segments where the audio and visuals struggle to match.
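
Once the lip-synced dub is finished, swapping it in for the original track is the simple part. A sketch of that final replacement with ffmpeg, again from Python, with placeholder filenames:

    import subprocess

    # Replace the program's original audio with the finished dub:
    # keep the video stream untouched, take audio only from the dub file.
    subprocess.run([
        "ffmpeg", "-i", "program.mp4", "-i", "dub_mix.wav",
        "-map", "0:v", "-map", "1:a",  # video from input 0, audio from input 1
        "-c:v", "copy",                # no video re-encode, so no quality loss
        "-shortest",
        "dubbed_program.mp4",
    ], check=True)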

Looking for someone to do your voice-over/dubbing work? Aberdeen wants to take care of it for you! For more information, please visit: http://abercap.com/services/language/voice-dubbing/

The Writing’s on the Glass – Instant Captions for Daily Interactions

What if captioning weren’t limited to multimedia and entertainment, and could extend, by way of tech devices, into day-to-day interactions with others? That’s what app developers at Georgia Tech are trying to accomplish—real-time captions of real-life conversations. The question is, does it really work?

Some are calling it “instant captioning”—a concept rarely synonymous with accuracy (except when real-time captioners are involved). All you need is a smartphone and Google Glass (the Explorer model goes for a measly $1,500). One need only speak into the smartphone’s microphone, and the app turns spoken dialogue into a transcript. That transcript is converted into captions displayed on the user’s glassware. There’s a slight delay, but the transcription appears accurate and (arguably) promising.
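
The underlying pipeline (capture speech, transcribe it, display the text) is easy to approximate. Here’s a minimal Python sketch using the third-party SpeechRecognition library and Google’s free recognition service; it is our illustration of the concept, not the Georgia Tech app’s actual code.

    import speech_recognition as sr  # pip install SpeechRecognition (plus PyAudio)

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:            # stands in for the phone's mic
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)

    # In the Glass setup, this text would be pushed to the heads-up
    # display instead of printed to a console.
    print(recognizer.recognize_google(audio))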

One problem is convenience. Will members of the deaf and hard-of-hearing community mind wearing the device whenever they’re out and about? More concerning is the dependence on a smartphone; users will need to hold their phones up to others in order to generate captions, which poses a challenge in social settings (really, though, it’s no different from journalists wielding audio recorders for an interview).

Consensus: for now, the app looks best suited for interactions with friends and family, at least until improvements to Glass’s built-in microphone make the smartphone unnecessary.

Looking for live captioning services? For more about our real-time captioning, check out http://abercap.com/services/captioning/live/.

Where’s the Closed Captioning?

Candidates in the Maryland gubernatorial campaigns created some distress among viewers when they recently aired television campaign ads without any closed captioning. The ads ran during a forum hosted by the National Federation of the Blind that focused on employment, housing, and transportation for individuals with disabilities. When questioned as to why there were no captions, Lt. Gov. Brown said that cost was a factor and that his campaign lacked the necessary funds. Another candidate was unaware of why there were no captions but agreed that it was something that needed to change in the future.

This failure to provide closed captioning is not limited to Maryland. Candidates running in Pennsylvania were also accused of not captioning their ads, and thus not reaching a large percentage of voters. Technically, the FCC does not require these ads to be closed captioned, but candidates would certainly benefit from providing captions in order to spread their message and reach all potential voters.

Do you have a video that needs closed captioning? Or do you have any questions regarding FCC regulations, different types of closed captioning, multi-language and translation services, or potential file delivery options? For more information visit http://abercap.com/contact/

Five Ways to (Not) Screw Up Your Digital Video

Unfortunately, professional broadcast video production is not as easy as uploading a video from your iPhone to your YouTube account. Creating a quality program for broadcast television is an extremely detailed and technical process, one that requires thousands of dollars of high-tech equipment and many years of experience to ensure each facet is completed with the highest level of quality and skill. Below are five easy ways to (not) screw up your video. Keep these tips in mind the next time you hit ‘Export’ and you’ll improve the quality of your next broadcast.

#1: Keep it Native

Quality loss can come in many forms, and with digital video it can be even worse than the traditional ‘generation loss’ of dubbing tape to tape if not handled correctly. A big mantra here at Aberdeen is to “Keep it Native”. This means maintaining the same format and specifications from shooting through post to the final deliverable. Taking care to keep the highest level of quality through each step of the production and post processes will ensure minimal degradation in the finished program.

It all starts in the camera. Best practice is to choose the resolution and aspect ratio that best fit the highest level of current (and possibly future) broadcast outlets. These days, that usually means choosing the right HD format and maintaining that 1080 or 720 resolution and fielding throughout the post process. Once the content is acquired in the desired format, the next step is to get the assets into your editing system without quality loss. The brand of recording/editing equipment in place will dictate which codecs should be used. Remain true to our “Native” mantra by keeping the frame rate, field order, and resolution the same as the acquired format, no matter the codec. This will ensure the highest-quality finished product.
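
A quick way to verify you’ve stayed native is to compare each asset’s specs against your sequence before you edit. Here’s a small Python sketch that queries them with ffprobe (assuming ffprobe is installed; the filename is a placeholder):

    import json
    import subprocess

    def video_specs(path):
        # Ask ffprobe for the clip's native codec, size, frame rate, and field order.
        out = subprocess.run([
            "ffprobe", "-v", "error", "-select_streams", "v:0",
            "-show_entries", "stream=codec_name,width,height,r_frame_rate,field_order",
            "-of", "json", path,
        ], capture_output=True, text=True, check=True).stdout
        return json.loads(out)["streams"][0]

    # Compare every asset against the sequence's target settings before editing.
    print(video_specs("master_clip.mov"))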

#2: Mixed Media on the Same Timeline/Sequence

Field dominance issues are among the most common problems we see at Aberdeen these days.

With a lot of legacy footage shot in standard definition (SD) and the increased adoption of high-definition (HD) cameras, it is very common to see both SD- and HD-originated content in the same program. Fielding issues arise because SD footage is commonly lower-field dominant, whereas HD footage is only upper-field or progressive. This discrepancy causes playback problems because the SD footage (often lower-field) needs to be played in the opposite field order from the HD content.

Today’s NLE exports are based on the sequence settings, not the individual clip settings. Two clips with opposite field orders on the same sequence almost guarantee that one of them will look poor on export. It is the editor’s job to convert each clip to the field order of the sequence, or there will be motion and resolution issues in the exported video. Ghosting, flicker, and motion judder are all signs of incorrect or incompatible field order in the baseband video. The issue is often overlooked now that LCD monitors have replaced the older, more expensive interlaced CRT monitors; LCDs use progressive-scan technology that will not accurately display your interlaced footage the way a CRT will.
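
You can catch mismatched clips before export with a quick scan. A Python sketch along those lines, again leaning on ffprobe (the clip names and the sequence’s field order are illustrative assumptions):

    import json
    import subprocess

    def field_order(path):
        # ffprobe reports "progressive", "tt" (top/upper field first),
        # or "bb" (bottom/lower field first), among other values.
        out = subprocess.run([
            "ffprobe", "-v", "error", "-select_streams", "v:0",
            "-show_entries", "stream=field_order", "-of", "json", path,
        ], capture_output=True, text=True, check=True).stdout
        return json.loads(out)["streams"][0].get("field_order", "unknown")

    sequence_order = "tt"  # an upper-field-first HD sequence
    for clip in ["legacy_interview_sd.mov", "broll_hd.mov"]:  # placeholder names
        order = field_order(clip)
        if order not in (sequence_order, "progressive"):
            print(f"{clip}: field order {order!r} must be converted before export")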

#3: Centercut Safe HD

Every HD station/network broadcasts both an HD and an SD signal simultaneously from a single source video element (commercial, promo, feature, etc.). If the originating source video is SD, the simultaneous HD feed upconverts the SD source (usually adding pillar bars to the sides of the 4×3 video) for HD viewers. For HD source elements, the station’s SD feed gets an automatic downconvert of the originating HD format. Here’s where the issue comes up: stations want their SD viewers to see a 4×3 full-screen program, not letterboxed content. To that end, all HD source programs are automatically downconverted by center-cutting the HD source for the SD feed. This means that any graphics or visual content outside the 4×3 raster will be cut off. Keep this process in mind when creating and positioning HD graphics. Incorrectly positioned graphics (outside the 4×3 raster) are a commonly overlooked issue that can result in the program being rejected by the station, or airing with cut-off content on 4×3 TVs, because the HD program’s graphics are not centercut-safe for the network’s downconverted SD signal.
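
The safe zone is simple arithmetic: a 4×3 raster centered in the 16×9 frame. A few lines of Python that work it out for 1080p (the same math applies to 720p):

    def centercut_safe_bounds(width=1920, height=1080):
        # The 4x3 raster that survives a center-cut, centered in the HD frame.
        safe_width = height * 4 // 3       # 1080 * 4/3 = 1440 pixels
        left = (width - safe_width) // 2   # (1920 - 1440) / 2 = 240 pixels
        return left, left + safe_width

    # Keep titles, lower thirds, and phone numbers inside x = 240..1680.
    print(centercut_safe_bounds())  # -> (240, 1680)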

#4:  Broadcast Legal Chroma/Luma/Gamut

Television stations still mandate strict values for chroma, luma, and RGB gamut. Today’s cameras are not restricted to the color and brightness values these broadcasters require. In order to have your program accepted by the station’s ingest operators, it is essential to use the NTSC/Broadcast Safe filters available on every NLE system. Generally, using the ‘Most Conservative’ preset, with no values over 100, will correctly adjust the hot signals that modern cameras capture.
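
For intuition about what those filters are doing, here’s a crude Python/NumPy sketch that hard-clips 8-bit Y'CbCr planes to studio swing (luma 16-235, chroma 16-240). Real broadcast-safe filters scale and soft-clip rather than chop, so treat this as an illustration only:

    import numpy as np

    def clamp_to_studio_swing(y, cb, cr):
        # Hard-clip 8-bit planes to the broadcast-legal studio range.
        # A stand-in for your NLE's Broadcast Safe filter, which is gentler.
        return (np.clip(y, 16, 235),
                np.clip(cb, 16, 240),
                np.clip(cr, 16, 240))

    # Toy example: a "hot" luma value of 255 gets pulled down to 235.
    y, cb, cr = clamp_to_studio_swing(np.array([0, 128, 255]),
                                      np.array([128]), np.array([128]))
    print(y)  # -> [ 16 128 235]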

#5:  Audio Peaks and Loudness

Recording and mixing digital audio at the 0dB level is far too strong; broadcast outlets will not accept anything so “hot”. Here’s why: once an audio signal passes the 0dB threshold, the signal can no longer be captured. This is referred to as “clipping”, and there is no regaining the lost audio information. To prevent this, best practice is not to drive the audio to the 0dB limit. “Digital headroom” is the term used to signify that the audio peak level has been kept below the 0dB point. Broadcasters usually recommend at least 6dB of headroom to prevent their broadcasts from clipping, meaning program peaks should not meter above -6dB.
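
Checking your headroom is a one-line calculation once you know the mix’s peak sample. A small Python/NumPy sketch with toy numbers (a real check would read the actual program audio):

    import numpy as np

    def gain_for_headroom(samples, headroom_db=6.0):
        # samples are floats where 1.0 == 0 dBFS (digital full scale).
        peak_db = 20 * np.log10(np.max(np.abs(samples)))
        return -headroom_db - peak_db  # negative result = attenuate by this much

    mix = np.array([0.9, -0.7, 0.95])  # toy mix peaking around -0.45 dBFS
    print(gain_for_headroom(mix))      # -> about -5.55 dB of attenuation needed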

*Engineer’s Best Tip: Audio “loudness” is now being monitored closely thanks to a US congressional law (the CALM Act), which mandates that US broadcasters comply with the ATSC A/85 audio standard of -24 LKFS ±2dB. In our analysis of 25,000 files, more than 90% of programs with audio peaks mixed between -8 and -10dB had a loudness measurement inside that -24 LKFS ±2dB legal range. That’s an easy ballpark indicator that you should be in, or very close to, compliance with this new mandatory standard.
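
If you’d rather measure than ballpark, ffmpeg ships a loudness-normalization filter whose defaults happen to line up with the A/85 numbers. A hedged sketch with placeholder filenames; for critical deliverables, run a dedicated measurement pass and a real loudness meter:

    import subprocess

    # Conform a final mix toward -24 LKFS/LUFS integrated loudness with a
    # -2 dB true-peak ceiling using ffmpeg's loudnorm filter.
    subprocess.run([
        "ffmpeg", "-i", "program_mix.wav",
        "-af", "loudnorm=I=-24:TP=-2:LRA=7",
        "calm_compliant_mix.wav",
    ], check=True)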

Looking for help with your file delivery or digital encoding?  Send us an email here for more information or check out our AberFast Division online here.

Attention: New FCC Regulations!

On July 11, the Federal Communications Commission approved new rules that will require closed captioning of video clips posted online. The rules amend the previous regulation, which required full-length videos to contain closed captioning. Now, if the original video aired on television with captions, then any short segment or clip shown on the Internet must also contain closed captioning. As of January 2016, “straight lift” clips taken directly from one program must have closed captions, while montages of clips taken from multiple programs will have to meet the July 2017 deadline. For more information, please visit http://www.fcc.gov/.

EIA-708 Turns 12

It’s been twelve years since the FCC ushered in a new era of closed captioning.

On July 2, 2002, the Federal Communications Commission mandated that all digital televisions include a 708 caption decoder, bringing new features to viewers who want to change the captions’ font, color, and size according to preference—an advance in the captioning world comparable to television’s leap from monochrome to Technicolor tube sets.

In addition to altering the text, 708 offers eight caption windows with fewer constraints than EIA-608, the original (and primitive) closed captioning standard of the analog era. These windows provide added freedom when positioning captions in a specific location, which helps when a viewer wants to move captions away from on-screen graphics.

For more information about the differences between 608 and 708 captions, check out: http://abercap.com/blog/2009/06/18/the-basics-of-608-vs-708-captions/.

Add Captions, Expand Your Audience

If providing access to the deaf and hard of hearing lacks incentive, will more YouTube views persuade you? YouTube creators today are forced to look for new and exciting ways to attract viewers. Meanwhile, it’s only getting harder to stand out in the ever-growing, overcompetitive, viral-hungry, trend-hopping, video-sharing ecosystem that is YouTube (see the video host’s latest press release for an idea—the numbers are staggering, unsurprising, and deservedly proud—boasting that over 6 billion hours of YouTube are watched monthly).

Although intended for the deaf and hard of hearing community, captions are providing lesser-known secondary benefits to an unlikely recipient: YouTubers. Content creators are using captions, and they’re doing it for more hits. With rewarding incentives from YouTube, creators are reaping the benefits of “popularizing” their content by tapping into larger audiences—albeit, in a few unlikely places.

Your potential viewers may not suffer from hearing impairments, but they sure value captions. Consider places where audio access is limited: the workplace or a library. Depending on the subject of your content, the video may be densely packed with information and details—an interview with rapid-fire discussion, or a self-help walkthrough explained over a series of steps. English learners and students also value captions, which increase engagement with the video and improve overall comprehension. In turn, you’ll be opening the door to new viewers.

Arguably, the most attractive benefit captions bring to videos is the assimilation of your content into Google’s search results. Any captions your video contains will be indexed by Google, allowing others to discover your videos much more easily while surfing the web. But beware of automatic captions: YouTube provides them, but they generate text through speech recognition technology, so inaccuracies abound. Because automatic captions are prone to inaccuracies and reflect your content poorly, Google will not index them. Automatic captions are primitive, but they can still save you a lot of time in the long run: turn them on and edit the transcript manually. You may prefer this method over captioning the whole video from scratch.
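
Once your corrected transcript is timed, getting it into a format YouTube accepts is straightforward. Here’s a minimal Python sketch that writes SubRip (.srt) cues, one of the plain-text caption formats YouTube supports; the cue data is made up for illustration:

    def write_srt(cues, path="captions.srt"):
        # cues: list of (start_seconds, end_seconds, text) tuples.
        def stamp(t):
            h, rem = divmod(int(t), 3600)
            m, s = divmod(rem, 60)
            return f"{h:02}:{m:02}:{s:02},{int(t % 1 * 1000):03}"
        with open(path, "w", encoding="utf-8") as f:
            for i, (start, end, text) in enumerate(cues, 1):
                f.write(f"{i}\n{stamp(start)} --> {stamp(end)}\n{text}\n\n")

    write_srt([(0.0, 2.5, "Hello, and welcome."),
               (2.5, 5.0, "Today we're talking captions.")])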

If you create captions for your video, you’ll also have the foundation to elevate your content to an international level. Once you have an English transcript, you can translate the text into subtitles and open your video to a large and avid market of global users (80% of YouTube’s views come from outside the U.S.).

On a related note, our friend Jamie Berke, author of the Caption Action 2 blog, recently informed Aberdeen of a growing fraud where users are circumventing YouTube’s caption incentives. You can read all about it at Jamie’s blog: http://captionaction2.blogspot.com/2014/05/watch-out-for-fake-captioning.html.

See how Aberdeen can help you with your captions – click here!

AberLingo: Translation Services to Meet Global Needs

Did you know that the official language of Ethiopia is Amharic? Aberdeen Broadcast Services recently worked on a subtitle project that was translated into Amharic. Our subtitle editor was excited to see the new font and how different and beautiful the Amharic characters looked. Amharic is just one of nearly 70 languages that we have translated. For more information on our AberLingo department and our translation and multi-language subtitle and voice-over services, visit http://abercap.com/services/language/

“God’s Not Dead”

This year, Aberdeen Broadcast Services was excited to provide captioning and Spanish subtitling services for Pure Flix’s new film, “God’s Not Dead.” The story follows a college freshman who is challenged by his philosophy professor to write a paper disavowing the existence of God. He refuses. The professor then assigns him the task of proving God’s existence in a debate in front of the class in order to pass. The film stars Kevin Sorbo, Shane Harper, and Dean Cain, with special appearances by Willie and Korie Robertson (Duck Dynasty). “God’s Not Dead” is playing in theaters across the country. Visit http://godsnotdeadthemovie.com/theaters to select a theater near you.

FCC Fast Lanes or the Highway

The Federal Communications Commission stirred up a hot debate this past week when it proposed net neutrality rules that would give Internet providers permission to charge companies for faster broadband, or “fast lanes.” The new rules garnered opposition from the public and service providers alike, with some foretelling the collapse of the “open Internet.” FCC Chairman Tom Wheeler sees it differently: he affirmed the nation’s tenet that all Internet traffic is created equal and assured skeptics the proposal supports net neutrality, not user discrimination. Back in February, Chairman Wheeler announced new improvements to closed captioning standards after expressing frustration with the agency’s lack of quality control since captioning requirements became law in 1996. For now, the debate rages on. After Thursday’s 3-2 vote to move forward with the proposal, the public has 120 days to comment on the matter. You can share your suggestions at Fcc.gov.

This blog was written by David Schmidt, one of Aberdeen’s detail-oriented Operations Administrators. He joined the Aberdeen team in 2013 and loves the close-knit community (including the many pranksters and shenanigans around the office). Married in October 2013, he is enjoying his new life (and wife!) in Tustin, California. He is an Orange County native and received his BA in English Writing from Biola University. In his spare time, he enjoys creative writing, classic cinema, and learning to cook.