In this edition of Industry Spotlight, we take a look at two innovative companies – Encoding.com and Beamr – that have joined forces to improve cloud-based video compression and delivery.
Encoding.com provides high-volume, cloud-based VOD processing and packaging services at scale for some of the world’s leading Media & Entertainment companies, including NBCUniversal, BBC, WarnerMedia, Discovery, and Fox.
The company supports all next-generation video formats, including Hybrid Log-Gamma (HLG), HDR10, and Dolby Vision, and resolutions up to 8K. Supported codecs include AVC/H.264 and HEVC/H.265.
On the video ingest and delivery side of things, Encoding.com offers an impressive suite of VOD-focused microservices for both broadcast and OTT delivery, including broad I/O format support, CableLabs packaging, DRM, Dynamic Ad Insertion, Nielsen watermarking, advanced audio, and Automated Quality Control (QC).
A little-known fact is that Encoding.com was an Amazon Startup Challenge finalist in the fall of 2008. You can check out their growth story here. Since then, a lot has happened, and Encoding.com says that it has now processed more than “one trillion API requests and encoded a billion videos.”
I am sure that number has only increased since I started writing this article!
With that introduction to Encoding.com, let’s switch focus to one of the essential parts of a video delivery pipeline – video compression – and look into specific issues surrounding compression efficiency and possible solutions to them.
Improving Compression Efficiency
Video compression is a critical piece in the video delivery pipeline. If you can ensure that your assets are properly compressed, then you can reduce your CDN costs, improve delivery to mobile devices, reduce buffering, and provide a much better QoE for your end-users.
Video compression, as most people recognize, is both an art and a science. It takes a lot of skill to choose the right parameters and settings to ensure that the compressed video looks good, has very few artifacts, and possesses good objective and subjective scores. All of this needs to be done while squeezing every possible bit to reduce the size of the output file.
How do you ensure that the file is as small as it can be while maintaining video quality?
Well, for one, you could use the latest video codecs, but time-to-market can be slow due to the lack of support from players and chipsets. Things could change with MPEG’s three new video codecs (VVC, EVC, and LCEVC), but some companies don’t want to change their codec entirely.
Another approach to better compression efficiency is to go the algorithmic route and invent new ways to reduce the file size. The caveat here is that you have to “play” within the codec specification’s confines and do the best you can.
Some well-known (as in tried-and-true) approaches to improve compression efficiency include:
- optimizing the choice of frame type (I vs. P vs. B, reference vs. non-reference).
- dynamically modifying the GOP and mini-GOP lengths to adapt to the scene changes.
- choosing the right block size for block-based codecs such as AVC, HEVC, etc.
- choosing the right reference picture(s).
- optimizing the rate control algorithm.
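To make the first two ideas above concrete, here is a minimal, hypothetical sketch of scene-adaptive frame-type planning: force an IDR frame at detected scene changes and cap the GOP length. The `frame_diff` detector and all thresholds are simplified stand-ins, not any vendor's actual algorithm.

```python
def frame_diff(prev, curr):
    """Mean absolute difference between two 8-bit grayscale frames
    (a crude stand-in for a real scene-change detector)."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def plan_frame_types(frames, scene_threshold=40.0, max_gop=120):
    """Return a list of 'I' / 'P' decisions, forcing an I-frame at
    scene changes and capping the GOP length at max_gop frames."""
    types, since_idr, prev = [], 0, None
    for frame in frames:
        is_scene_cut = prev is not None and frame_diff(prev, frame) > scene_threshold
        if prev is None or is_scene_cut or since_idr >= max_gop:
            types.append("I")   # start a new GOP at the cut
            since_idr = 0
        else:
            types.append("P")   # (B-frame planning omitted for brevity)
            since_idr += 1
        prev = frame
    return types

# Tiny synthetic example: three flat "frames", the last one very different.
f1, f2, f3 = [10] * 16, [12] * 16, [200] * 16
print(plan_frame_types([f1, f2, f3]))  # → ['I', 'P', 'I']
```

A real encoder would also weigh B-frame placement and reference-picture selection into the same decision, but the shape of the logic is similar.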
This is just a fraction of what compression teams are doing currently, and, as you can imagine, it requires a lot of money and effort to roll out customized video codecs.
Generally, in such situations, content providers look to codec vendors that have invested the time and energy into algorithmic improvements and that typically have teams of experienced codec engineers refining them every day.
Content Adaptive Encoding
Content-adaptive encoding has become a popular solution in the past few years, with several compression vendors rolling out unique solutions.
In content-adaptive encoding, multiple passes (at least two) are performed over a video during the compression process.
- In the first pass, algorithms gather information about video characteristics such as contrast, noise, scene changes, fades (length, occurrences), and spatial and temporal complexity at a granular level (e.g., frame, scene, or GOP).
- In the second (or subsequent) pass, the video is compressed using the information gathered in the first pass.
Compression efficiency improves with this multiple-pass approach because the technology:
- has “looked into” the entire length of the movie instead of just the next couple of seconds, which allows for better rate-control planning.
- importantly, knows the characteristics of each frame of the video and can adapt to it.
Armed with this information, the encoder can adapt its bit allocation more intelligently by spending more bits on complex scenes/frames and saving bits on very simple content.
Having a “lookahead” length equal to the length of the movie helps greatly in bit-allocation planning and making decisions on quantization levels, picture types, insertion points for I-frames and IDRs, etc.
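The two-pass idea described above can be sketched in a few lines. This is a deliberately simplified illustration, not any vendor's rate control: pass one assigns each scene a made-up complexity score (pixel variance as a spatial-complexity proxy), and pass two splits the overall bit budget in proportion to those scores.

```python
def first_pass_complexity(scenes):
    """Pass 1: score each scene's complexity.
    Here: pixel variance as a crude spatial-complexity proxy."""
    scores = []
    for frames in scenes:
        pixels = [p for frame in frames for p in frame]
        mean = sum(pixels) / len(pixels)
        var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
        scores.append(var + 1.0)  # +1 so flat scenes still get some bits
    return scores

def second_pass_allocate(scores, total_bits):
    """Pass 2: split the overall bit budget in proportion to complexity."""
    total = sum(scores)
    return [total_bits * s / total for s in scores]

flat_scene = [[10] * 16] * 3        # very simple content
busy_scene = [[0, 255] * 8] * 3     # high-contrast, "complex" content
scores = first_pass_complexity([flat_scene, busy_scene])
alloc = second_pass_allocate(scores, total_bits=1_000_000)
assert alloc[1] > alloc[0]  # the busy scene gets the larger share
```

Real two-pass rate control also folds in temporal complexity, fades, and per-frame quantizer decisions, but the principle is the same: measure first, then spend bits where they matter.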
Are all Content Adaptive Encoding solutions the same?
Content Adaptive Encoding comes in several flavors, including:
- per-title (where the encoding parameters are tuned for each title as a whole).
- per-scene (where the planning is done from scene to scene, as determined by a scene-change detection module).
Can we go to a more granular level? Down to the frame level and perform per-frame encoding? Yes!
Beamr’s CABR Technology
Beamr is an innovator in the field of video compression and has been working on a proprietary method for improving compression dubbed content-adaptive bitrate (CABR) encoding. The company effectively stretches the envelope and performs content-aware encoding on a per-frame level!
Beamr’s CABR technology compresses each frame at several different levels (coding configurations) and then checks which one produces the smallest output within certain quality thresholds. The winning configuration is used to compress that particular frame of video.
To check the video quality at each pass or iteration and determine which configuration is best, Beamr uses a proprietary in-house metric that mimics the HVS (human visual system).
Each frame is scored using this metric. Scores are then compared against a Quality Threshold, and the “winning” coding configuration is used to compress that particular frame. Here’s a visual representation from Beamr’s website.
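The loop described above can be sketched as follows. Note this is an illustrative toy, not Beamr's actual algorithm or metric: the "encoder" is simple quantization, the quality score is 100 minus mean absolute error, and the candidate QP list and threshold are invented for the example.

```python
def encode_candidate(frame, qp):
    """Toy 'encoder': quantize pixels; coarser qp -> fewer bits, more error."""
    indices = [round(p / qp) for p in frame]
    size = sum(i.bit_length() + 1 for i in indices)  # crude bit-cost proxy
    decoded = [i * qp for i in indices]
    return decoded, size

def quality_score(original, decoded):
    """Stand-in for a perceptual metric: 100 minus mean absolute error."""
    mae = sum(abs(a - b) for a, b in zip(original, decoded)) / len(original)
    return 100.0 - mae

def cabr_frame(frame, qps=(4, 8, 16, 32), threshold=95.0):
    """Keep the smallest candidate whose score clears the quality threshold."""
    best = None
    for qp in qps:
        decoded, size = encode_candidate(frame, qp)
        if quality_score(frame, decoded) >= threshold and (best is None or size < best[1]):
            best = (qp, size)
    return best  # (winning qp, size), or None if nothing clears the bar

frame = list(range(0, 256, 16))  # a 16-pixel gradient "frame"
print(cabr_frame(frame))  # → (16, 65): qp=16 is lossless here, qp=32 fails quality
```

The essential point survives the simplification: every frame gets its own search, so simple frames are squeezed hard while complex frames keep the bits they need.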
While this sounds straightforward, repeatedly encoding each frame raises real challenges around encoding latency and computational cost.
So, it’s great to see a company take this approach and solve these problems for practical, high-volume video compression.
Encoding.com and Beamr Come Together
Now for the interesting part!
We talked about Encoding.com’s comprehensive service offering and the innovation that Beamr’s CABR technology brings to VOD encoding and delivery.
Well, Encoding.com is now incorporating Beamr’s CABR technology (Beamr’s 5x HEVC and 4x AVC CABR encoding engines) into its platform, providing access to the best of both worlds!
By compressing and delivering your video more efficiently, this collaboration promises around a 25% bitrate reduction for high-action content and close to a 50% bitrate reduction for low-complexity content.
The benefits of this collaboration are significant, as it allows you to:
- Fit more channels into constrained broadcast pipes.
- Lower CDN and storage costs through more efficient per-frame content-adaptive video compression.
- Reduce start-up times and buffering, again owing to more efficient video compression.
- Deliver HD and higher resolution video and ensure a smooth viewing experience to a wide audience that primarily uses mobile connections.
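For a rough sense of what those bitrate reductions mean in practice, here's a back-of-the-envelope savings estimate. The 25%/50% figures come from the article; the delivery volume, content mix, and CDN rate below are assumptions purely for illustration.

```python
def monthly_cdn_cost(gb_delivered, cost_per_gb=0.02):
    # $0.02/GB is an assumed CDN rate, not a quoted price.
    return gb_delivered * cost_per_gb

baseline_gb = 500_000  # assumed monthly delivery volume (GB)
high_action_share, low_complexity_share = 0.5, 0.5  # assumed content mix

# Apply the quoted reductions: 25% for high-action, 50% for low-complexity.
reduced_gb = baseline_gb * (high_action_share * (1 - 0.25)
                            + low_complexity_share * (1 - 0.50))

savings = monthly_cdn_cost(baseline_gb) - monthly_cdn_cost(reduced_gb)
print(f"Delivered GB: {baseline_gb:,} -> {reduced_gb:,.0f}")
print(f"Estimated monthly CDN savings: ${savings:,.2f}")
```

Under these assumptions, delivery drops from 500,000 GB to 312,500 GB per month, and storage savings scale the same way.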
You can find more information in Encoding.com’s datasheet and blogs, or on Beamr’s website.
If you’d like to know more or speak with someone who can help, use Encoding.com’s Contact Form and someone will get in touch with you.
That’s it for this edition of Industry Spotlight. Thank you for reading and please come back again!
Acknowledgement: OTTVerse would like to thank Ben Morrell and Danielle Grivalsky for helping with the research.