Multi-CDN: How Does It Work, Strategies, and Benefits

CDNs are a key part of any streaming service’s infrastructure. They store content on servers worldwide and make it available to users in near-real-time, regardless of where those users are located. But as the load and complexity of delivering video increase, more and more streaming providers are moving to a multi-CDN architecture.

But what is a Multi-CDN?

Well, as you know, a CDN typically caches web pages from your own server at many points worldwide to ensure fast delivery of data to end-users. With a multi-CDN architecture, however, you can cache content across multiple CDNs with overlapping geographic coverage. This provides greater redundancy and performance benefits by leveraging different providers’ strengths and minimizing their weaknesses through intelligent load balancing algorithms.

In this post, we’ll explore how a multi-CDN architecture works and the benefits it offers.

Let’s get going.

What is a CDN or Content Delivery Network?

A CDN (Content Delivery Network) uses servers spread out across the globe to cache/store files, such as images, videos, code libraries, and so forth. When a user requests a media object from your website/service, the request first goes to the CDN and is served from there. If the CDN has not cached the object, the request is forwarded to your origin/webserver to be fulfilled.

[Figure: (Left) single-server distribution; (Right) CDN scheme of distribution. Image credit: Wikipedia]

The closer your visitors are to one of those servers, the faster they will access your content. For more information on the “why” behind CDNs, please read our “What is a CDN” introductory blog post. Also, take a look at an explanation of the Thundering Herds problem & Request Collapsing in CDNs to understand more about the inner workings of CDNs.

Now that you know what a CDN is, let’s learn more about using multiple CDNs, popularly referred to as multi-CDN.

Why You Should use Multiple CDNs

The need for a multi-CDN strategy arises when an organization’s traffic grows beyond the capacity of a single CDN provider, or when it wants to distribute its content strategically across providers and geographies.

This is because hosting content with different CDN providers offers many advantages: geographic redundancy, security, increased performance, and cost savings (multi-CDN can reduce bandwidth costs by balancing loads and by taking advantage of pricing differences between vendors for different situations, time slots, etc.).

First off, let’s take a quick look at what a multi-CDN architecture is and then dive into the implementation strategies.

What is a Multi-CDN Architecture?

In a multi-CDN architecture, a website/streaming service’s content (images, video files, etc.) is cached across multiple CDN providers in different geographic regions.

With the help of intelligent load balancing algorithms and data collected at different points of the delivery pipeline, the incoming traffic from the video players (clients) is distributed across these multiple CDN providers, which provides greater redundancy and performance benefits.

Data is generally collected via QoE/QoS analytics providers, with code running on every video player that a service uses, or from proprietary ways of measuring network performance (e.g., the time taken to download a dummy file). All of this data is used by a central rules engine or server to decide which of the multiple CDNs should serve the requests arising from a certain region.

Next, let’s take a look at how multi-CDN switching is implemented.

Different Ways of Implementing Multi-CDN

There are different ways of implementing multi-CDN switching, such as DNS-based switching, HTTP redirects, client-side switching, and on-the-fly manifest rewrites. Let’s look at how these techniques work.

Before we study the implementation of multi-CDN switching, let’s first take a look at how the decisions to switch CDNs are made. There are primarily two ways of deciding which CDN to use – static rules-based switching or dynamic rules-based switching.

Static Rules-based Switching vs. Dynamic Switching

Static Rules-based switching is a naïve form of CDN switching where a rules-engine is programmed with simple instructions like “if CDN-A fails, then divert traffic to CDN-B.” This does not accommodate a lot of intelligence and needs to be monitored tightly to ensure that situations not governed by the rules do not crash the system.

In dynamic rules-based switching, the decision to switch CDNs is based on data continuously collected from the players, the CDNs themselves, and other business rules. As these factors change dynamically during the day, the decisions change adaptively.
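To make the distinction concrete, here is a minimal sketch in Python of a static rule next to a dynamic, metrics-driven rule. The CDN names, metric fields, and thresholds are illustrative, not from any particular product:

```python
# Minimal sketch of static vs. dynamic CDN selection.
# CDN names, metric fields, and weights are illustrative.

def static_choice(cdn_a_healthy: bool) -> str:
    """Static rule: 'if CDN-A fails, divert traffic to CDN-B'."""
    return "cdn-a" if cdn_a_healthy else "cdn-b"

def dynamic_choice(metrics: dict) -> str:
    """Dynamic rule: pick the CDN with the best current score,
    computed from continuously collected player/CDN data."""
    def score(m):
        # Lower latency and fewer errors yield a higher score.
        return -(m["latency_ms"] + 1000 * m["error_rate"])
    return max(metrics, key=lambda cdn: score(metrics[cdn]))

if __name__ == "__main__":
    print(static_choice(cdn_a_healthy=False))  # cdn-b
    live = {
        "cdn-a": {"latency_ms": 120, "error_rate": 0.02},
        "cdn-b": {"latency_ms": 90,  "error_rate": 0.01},
    }
    print(dynamic_choice(live))  # cdn-b
```

A real rules engine would combine many more signals, but the shape is the same: static switching hard-codes the outcomes, while dynamic switching recomputes them as the metrics change.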

While the above applies to how the decisions are made, let’s now look at the actual implementation of the switching mechanisms, starting with the simple DNS-based switching.

DNS-based CDN Switching

The idea behind DNS switching is to direct all content requests through a domain name whose resolution controls traffic distribution. For example, if all the traffic for a domain needs to be redirected from CDN-A to CDN-B, the domain’s DNS record is pointed at CDN-B – that’s all.


Though this technique looks simple, it is not very efficient because DNS changes can take several minutes (or longer, depending on record TTLs) to propagate through the internet. In the meantime, many end-user requests may still reach the wrong CDN, leading to degraded performance at best, or service outages for all requests if not handled carefully.
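The mechanics can be sketched in a few lines: the "switch" is nothing more than repointing a CNAME-style record, and the TTL is what makes the change slow to take effect. The hostnames below are illustrative:

```python
# Sketch of DNS-based switching: the switch is just a change in which
# CDN hostname the service's record points at. Hostnames are illustrative.

RECORDS = {
    # hostname          -> (CNAME target,             TTL seconds)
    "media.example.com": ("media.cdn-a.example.net", 300),
}

def switch_cdn(hostname: str, new_target: str) -> None:
    """Repoint the record. Clients only see the change after their
    cached answers expire, i.e., up to TTL seconds later."""
    ttl = RECORDS[hostname][1]
    RECORDS[hostname] = (new_target, ttl)

def resolve(hostname: str) -> str:
    return RECORDS[hostname][0]

if __name__ == "__main__":
    switch_cdn("media.example.com", "media.cdn-b.example.net")
    print(resolve("media.example.com"))  # media.cdn-b.example.net
```

Lowering the TTL makes switches take effect faster but increases DNS query load, which is the core trade-off of this approach.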

HTTP Redirect-based Multi-CDN Switching (Server-Side)

In this method, when the system wants to divert traffic from one CDN to another, the load-balancing server returns an HTTP 302 (Found) response to the end-users for every request made. This tells the player that the requested resource has temporarily moved and should be fetched from the alternate CDN given in the Location header of the response.

Thus, HTTP redirects can respond much faster to changes in network and CDN conditions than DNS switching, which is slow and imprecise.
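A minimal sketch of the redirector’s logic, assuming a hypothetical preferred-CDN setting that the rules engine updates:

```python
# Sketch of server-side redirect switching: the load balancer answers
# every request with a 302 pointing at the currently preferred CDN.
# The CDN hostname and selection rule are illustrative.

PREFERRED_CDN = "https://cdn-b.example.net"  # updated by the rules engine

def handle_request(path: str) -> tuple:
    """Return an HTTP status code and headers that redirect the
    player to the same path on the preferred CDN."""
    return 302, {"Location": PREFERRED_CDN + path}

if __name__ == "__main__":
    status, headers = handle_request("/vod/movie/seg_0001.ts")
    print(status, headers["Location"])
    # 302 https://cdn-b.example.net/vod/movie/seg_0001.ts
```

In production this logic would sit behind a real HTTP server, and the preferred CDN would typically vary per region, per asset, or per request rather than being a single global value.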

But, what are some of the drawbacks of the HTTP redirect-based approach to Multi-CDN routing?

  • Using a load-balancer server introduces a new point-of-failure into the system. Every request has to come to the URL router (rules engine), and if that server goes down, you have a problem on your hands. To mitigate this, you might have to balance the load on the load-balancer or ensure that it has enough capacity to handle traffic spikes.
  • You are increasing the round-trip time, albeit marginally. This is because every request has to go to the load-balancer, then back to the player, then to the new CDN, and then back to the player.

However, teams can minimize these problems by limiting the CDN resolution/switching to the first request of every playback session. In this case, the RTT is negligible in the overall scheme of things. Additionally, the load on the switching server is also reduced.

What we just read about is a method where the intelligence is on the server side. But what happens if you want to move that intelligence into the player?

Client-Side Midstream Switching

In server-based techniques, a central server is responsible for diverting traffic to the correct CDN. In client-side switching, however, the switching mechanism moves into the player. Due to the nature of HTTP-based streaming and the use of independently decodable segments in HLS & DASH, players can fetch every segment from a different CDN. The player also has the latest information about network conditions, latency, and other factors, which makes it an appealing decision-making node.

Every request is examined at the player itself, and a CDN is chosen to serve that request. This requires a simple manipulation of the request URL, which a service can do at the player itself.

As for the decision-making, you can either,

  • embed the rules engine on the player, or,
  • take an API-based approach where the player pings a rules-engine to get the latest recommended CDN and uses it in the switching step.
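At the segment level, the switch itself is just a URL manipulation: swap the CDN hostname while keeping the path and any query parameters (tokens, session IDs) intact. A small sketch with illustrative hostnames:

```python
# Sketch of client-side switching: the player rewrites each segment URL
# to the CDN currently recommended by the rules engine. Hostnames are
# illustrative; the recommendation itself comes from elsewhere.

from urllib.parse import urlsplit, urlunsplit

def rewrite_segment_url(url: str, chosen_cdn_host: str) -> str:
    """Swap only the host, keeping scheme, path, query, and fragment."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, chosen_cdn_host, parts.path,
                       parts.query, parts.fragment))

if __name__ == "__main__":
    seg = "https://cdn-a.example.net/live/ch1/seg_42.m4s?token=abc"
    print(rewrite_segment_url(seg, "cdn-b.example.net"))
    # https://cdn-b.example.net/live/ch1/seg_42.m4s?token=abc
```

In a real player SDK this rewrite would hook into the segment loader (e.g., a loader callback in an HTML5 player), and the chosen host would come from the embedded rules engine or the recommendation API described above.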

While all of this sounds great, there are downsides to client-side CDN switching such as –

  • You need additional intelligence to be embedded into the player, and it has to be programmed to communicate with an external server that maintains all the rules and switching factors.
  • You need to implement this for every player used by the streaming service, which is a tough ask! Imagine writing, testing, and maintaining SDKs for HTML5, Android, iOS, Roku, Smart TVs, Xbox, etc.
  • Also, changing the stream URL in the player requires access to the player’s source code (possible in open-source players) or APIs in closed-source players. In platforms where this level of code support is unavailable, you won’t be able to implement mid-stream switching on the player-side.

Manifest Rewrite On-the-fly

Another technique for start-of-the-session CDN switching is rewriting the manifest on the fly. In this method, a decision server rewrites the manifests to point to a different CDN based on switching rules and factors. This requires the manifest to be loaded (or reloaded) for the change to take place. While this appears to be a simple strategy, it requires a server set up to rewrite and serve manifests (without caching them). A variation of this service also records a list of CDNs in the manifest that the player can switch to in case one of the CDNs fails.
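For an HLS media playlist, the rewrite amounts to swapping the CDN host in each segment URI while leaving the tag lines untouched. A minimal sketch, with illustrative hostnames and a simplified playlist:

```python
# Sketch of on-the-fly manifest rewriting for an HLS media playlist:
# the decision server swaps the CDN host in every segment URI before
# serving the (uncached) playlist. Hostnames are illustrative.

def rewrite_manifest(manifest: str, old_host: str, new_host: str) -> str:
    lines = []
    for line in manifest.splitlines():
        if not line.startswith("#"):  # a segment URI, not an HLS tag
            line = line.replace(old_host, new_host)
        lines.append(line)
    return "\n".join(lines)

if __name__ == "__main__":
    playlist = """#EXTM3U
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
https://cdn-a.example.net/vod/seg_0001.ts
#EXTINF:6.0,
https://cdn-a.example.net/vod/seg_0002.ts"""
    print(rewrite_manifest(playlist, "cdn-a.example.net",
                           "cdn-b.example.net"))
```

Note that real playlists often use relative URIs (resolved against the playlist URL) or carry URIs inside tags such as #EXT-X-MAP, so a production rewriter has to handle more cases than this sketch does.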

Though this technique appears easy, there are complications: URLs are specified differently in an HLS playlist vs. a DASH manifest, players request new manifests at different rates in Live vs. VOD streaming, and the manifest rewriter itself becomes a new point of failure.

With an understanding of the different techniques to implement multi-CDN switching, you must have realized by now that there is no one-size-fits-all. Instead, there are different strategies that companies can use based on their needs, infrastructure capabilities and design, budget, and scale.

Next, let’s move on to learning about some of the factors that are used in multi-CDN load balancing decisions.

What Factors Are Used in Multi-CDN Load Balancing?

As we’ve learned, a multi-CDN strategy can be very useful in balancing the load and implementing certain important rules while using multiple CDNs. But, what are the factors that drive Multi-CDN load balancing? Let’s take a look now.

  • QoE / QoS Analytics from the player is used to understand the performance and user experience for different devices, assets, geographies and is aggregated at different levels for decision-making.
  • Latency: Latency is the time it takes for a data packet to travel from the sender across the network to the receiver. If a CDN’s latency or response time is prolonged, that can be used as a factor to switch traffic away from that CDN. Again, this information is usually obtained from the players.
  • CDN Costs or Commits: Cost Price per GB Delivered and Commit-based pricing are important factors in multi-CDN decisioning. Providers can choose to divert traffic from one CDN to another to satisfy commits or take advantage of them.
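One simple way a rules engine can combine these factors is a weighted score per CDN: reward good QoE, penalize latency and delivery cost, and boost a CDN that still has an unmet traffic commit. The weights, metric names, and numbers below are purely illustrative:

```python
# Sketch of a multi-CDN load-balancing score combining QoE, latency,
# cost, and commits. Weights and metric fields are illustrative.

def cdn_score(m: dict) -> float:
    """Higher is better."""
    score = m["qoe"] * 10.0            # player-reported QoE (0-5 scale)
    score -= m["latency_ms"] * 0.1     # response-time penalty
    score -= m["cost_per_gb"] * 20.0   # delivery-cost penalty
    if m["commit_remaining_gb"] > 0:   # unmet commit: prefer this CDN
        score += 15.0
    return score

def pick_cdn(cdns: dict) -> str:
    return max(cdns, key=lambda name: cdn_score(cdns[name]))

if __name__ == "__main__":
    cdns = {
        "cdn-a": {"qoe": 4.2, "latency_ms": 80, "cost_per_gb": 0.04,
                  "commit_remaining_gb": 0},
        "cdn-b": {"qoe": 4.0, "latency_ms": 95, "cost_per_gb": 0.03,
                  "commit_remaining_gb": 500},
    }
    print(pick_cdn(cdns))  # cdn-b
```

A production engine would compute such scores per region, device class, or ISP rather than globally, and would tune the weights against business priorities.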

These are only a few factors or data points that streaming providers can feed to a multi-CDN rules engine. I am sure companies have their own secret sauce and data points that they can use to fine-tune the decision-making such as the device, ISP, geographical location, time-of-the-day, etc.

But, having done all this, what is the benefit of implementing multi-CDN switching to a streaming provider?

Benefits of Multi-CDN Usage for Video Streaming

Some of the tangible benefits of a multi-CDN architecture are:

  • Improved performance (reduced latency) by caching content across multiple CDN providers. This, in turn, leads to higher customer satisfaction when videos stream with low latency and reduced buffering.
  • Broader geographic coverage, since users in any region are served from the edge servers closest to them.
  • Useful in avoiding vendor lock-in.
  • Greater levels of security because you can switch all your traffic to another CDN if one of them is under attack (DDoS, perhaps!).
  • Redundancy is a direct benefit and consequence of a multi-CDN strategy.
  • The ability to distribute your budget over multiple CDNs and take advantage of peak/off-peak pricing from multiple vendors.


I hope this article on multi-CDNs was useful to you. There are so many pros & cons to each of the methods we discussed and it’s up to the streaming providers and vendors to figure out what works for them/their clients.

If you are a streaming provider or a multi-CDN vendor and would like to add to this article or publish a guest post, please get in touch with me at [email protected], and we’ll get your thoughts published.

Thanks for reading and happy streaming!

Krishna Rao Vijayanagar

I’m Dr. Krishna Rao Vijayanagar, and I have worked on Video Compression (AVC, HEVC, MultiView Plus Depth), ABR streaming, and Video Analytics (QoE, Content & Audience, and Ad) for several years.

I hope to use my experience and love for video streaming to bring you information and insights into the OTT universe.
