Live video is now everywhere, and ensuring the quality, reliability, and speed of its delivery remains a challenging task. For much of the last 70 years, that challenge belonged mainly to TV broadcasters trying to get live footage of a breaking story onto the evening news, or coverage of the big match to fans in living rooms and sports bars.
Yet today, non-broadcasters produce and distribute 1,000 times more high-quality video than the entire TV and film industry. With an almost endless list of use cases and industries needing to process video, there must be a way to get live video from the source to audiences without the complexity and expense of traditional broadcast equipment.
What is edge computing?
Although not a new technology as such, edge computing has attracted a great deal of hype in recent years. Applied at the video source, it changes how video is processed, unlocking hundreds of new use cases and capabilities that augment the broadcast feed, such as monetization opportunities and social interaction.
Edge computing for live video is about overcoming workflow problems: high CAPEX and OPEX, inflexible hardware, poor reliability, and an inability to scale. Simply put, it means complementing the cloud by moving the capture, processing, and distribution of video streams as close to the source as possible. Reducing the distance data has to travel minimizes transit costs and latency; after all, if speed matters, then so does distance.
Additionally, an edge computing platform for live video has more raw processing power than a typical on-site encoder, so many of the video processing tasks that enhance cloud functionality can run locally. By processing data at the edge rather than in the cloud, certain video use cases can cut OPEX by as much as 50% and reduce latency from tens of seconds to under two.
What’s the problem?
By 2022, live video is expected to grow 15-fold and reach a 17% share of all internet traffic. With such a large volume of video flowing across the internet and private networks, it's fair to ask: what's the problem with existing video workflows? It is a good question, but we should remember how the telegraph gave way to the telephone, and the telephone to the smartphone.
Looking at video-based use cases in the broadest sense, there are three main issues: the cost and reliability of the equipment, its ability to adapt to all these varied use cases, and the ongoing complexity of delivering large-scale media services. To overcome some of these issues, end users and managed media services providers (MMSPs) have turned to cloud-based services to mitigate the limitations imposed by either low-end consumer encoders or high-end, expensive broadcast-centric devices.
In a growing number of use cases, the cloud acts as an intermediary step to process these video streams and distribute them to their destination. This allows less powerful equipment on site, although it does not solve the reliability or manageability problems of the on-site encoders themselves. This is where bringing many of the benefits of cloud computing to the edge, i.e., to the source of live video, offers an alternative.
How will edge computing revolutionize live video streaming?
The flexibility of edge computing at the point of video origin enables cost and complexity reduction while improving reliability and scale and unlocking new potential. For instance, a number of broadcasters and sports leagues take advantage of edge computing capabilities to get more feeds into their production workflows without having to spend huge amounts on additional encoding equipment.
Edge computing for live video is not a binary choice between edge and cloud; in many use cases it can complement cloud workflows by making the journey of video more reliable, more efficient, and faster. Because video only needs to be encoded once, rather than twice as in a traditional encoder-plus-cloud workflow, broadcasters benefit from lower cloud costs and a dramatic reduction in latency.
Let’s look at sports broadcasts in particular. Negative viewer experiences have tended to be driven by latency, since the time lag behind the live event matters to viewers: just imagine fans at a stadium tweeting about a great goal 20 seconds before viewers at home have even seen the ball hit the back of the net. By taking a hybrid approach, using an edge computing platform to augment the cloud, media companies get the freedom and flexibility they require. In some cases, we’ve even witnessed an average glass-to-glass latency of 3 seconds.
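To make the encode-once argument concrete, the sketch below sums per-stage delays for a traditional cloud-transcode workflow versus a single-encode edge workflow. All stage names and timings are illustrative assumptions chosen to mirror the rough 20-second and 3-second figures discussed above; they are not measurements from any real deployment.

```python
# Illustrative glass-to-glass latency budgets (all numbers are
# hypothetical assumptions, not measurements from a real system).

# Traditional workflow: contribute a stream to the cloud, then
# transcode it a second time there before packaging and delivery.
cloud_workflow = {
    "capture_and_contribution_encode": 1.0,   # seconds
    "transit_to_cloud": 2.0,
    "cloud_transcode_second_encode": 4.0,
    "packaging_and_cdn_buffering": 13.0,
}

# Edge workflow: encode and package once at the source, push the
# stream straight to delivery, and keep player buffers short.
edge_workflow = {
    "capture_and_edge_encode": 1.0,
    "packaging_at_edge": 0.5,
    "transit_to_cdn": 0.5,
    "player_buffering": 1.0,
}

def glass_to_glass(stages):
    """Sum per-stage delays into a total glass-to-glass latency."""
    return sum(stages.values())

print(f"cloud: {glass_to_glass(cloud_workflow):.1f} s")  # cloud: 20.0 s
print(f"edge:  {glass_to_glass(edge_workflow):.1f} s")   # edge:  3.0 s
```

The point of the comparison is structural rather than numerical: dropping the second encode and the long cloud-side buffering removes the largest terms from the budget, which is why the edge path can land in the low single digits.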
The future is edge
For the broadcast industry to advance and truly transform live video streaming, we must move workflows to the edge. The future of how we produce, process, and consume video is not set in stone. Edge computing at the video source brings powerful processing capabilities to where video is created, driving a step change in both the experiences and the applications that can be delivered. If we are to fully unlock the broadcast market’s potential, we must offer cutting-edge viewing experiences, and the only way to do that is by harnessing the power of edge computing at the video source.
Todd Erdley is the founder and president of Videon, the leader in edge computing for video, making live video processing and distribution faster and more efficient, with lower costs. Videon’s video applications handle anything from simple, low-latency encoding and streaming to advanced AI-powered use cases.
Todd’s continued vision is behind Videon’s advances in video technology. His leadership has propelled the company to the forefront of video encoding and streaming products that enable prosumers and professionals to create solutions that are rapidly driving and simplifying the movement of media from any source to any screen.