What Is Low Latency?

By Team Agora

If you are learning about video, especially interactive live streaming, you’ve no doubt encountered the term “low-latency streaming.” In this article, I’m going to explain what low latency is, why it matters, and, more importantly, when it matters. It is important to understand that there is a considerable difference between on-demand streaming of one-way content (like movies) and supporting real-time interaction between multiple parties, where everyone must be in sync. In the world of real-time communication (RTC), it is all about minimizing streaming delay.

Low-Latency Video Streaming Explained

In the context of streaming video, latency is simply a measure of the delay, or lag, between when an event is captured and when the person on the other end sees it. This delay is a natural byproduct of the transmission chain. Each step (camera capture, encoding, network transmission, decoding, etc.) takes time. In most cases we’re talking about fractions of a second, but they all add up. Have you ever heard your neighbors cheering a winning goal that hasn’t happened yet on your screen? You are likely watching on different networks (satellite vs. cable, for example), and one has lower latency than the other. So why is latency such a big discussion point? Because low-latency video streaming comes at a cost, in both video quality and dollars.

Each step of the transmission chain takes time and contributes to latency
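
To make the “each step adds up” point concrete, here is a minimal sketch that sums per-stage delays into an end-to-end latency figure. The stage names and millisecond values are illustrative placeholders, not measurements.

```typescript
// Sketch: end-to-end latency as the sum of per-stage delays.
// All values are illustrative placeholders, not measured numbers.
const stageDelaysMs: Record<string, number> = {
  capture: 20,      // camera sensor to raw frame
  encode: 30,       // compressing the frame
  network: 80,      // transit across the internet
  jitterBuffer: 50, // smoothing out uneven packet arrival
  decode: 20,       // decompressing the frame
  render: 15,       // display pipeline
};

const totalMs = Object.values(stageDelaysMs).reduce((sum, ms) => sum + ms, 0);
console.log(`End-to-end latency: ${totalMs} ms`); // 215 ms in this example
```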

When Should You Use Low-Latency Streaming?

You might be surprised, at first, to learn that the highest-quality video you watch over the internet is far from low-delay streaming! According to AWS, which powers both Netflix and Amazon Prime, today’s video streaming delay standard is 10 seconds or less. This might not seem like much, until you consider that the latency tolerated in interactive streaming is a fraction of that! Netflix is able to trade time for quality. Because movies are not live, real-time experiences, there is no need for low-delay streaming, so they can take advantage of buffering to deliver a high-resolution product. Let’s go back to the example of the televised sporting event. If you could not hear your neighbors, would you care that you were seeing everything 10 seconds after it happened? No. In fact, you would probably not know that there was any delay at all. Not every situation requires minimal-delay streaming.

Now think about a service that allows fans of a particular team, situated all over the globe, to interact with each other while they watch. Ten seconds of delay would be a complete disaster. Think about how difficult it is when just one member of a video conference is experiencing noticeable delay. It simply wouldn’t work. Depending on the scenario, real-time interaction between two or more parties requires latency in the 400 ms to 2,000 ms range. Here are some examples:

Use Case | Typical Latency/Delay
Video on demand (movies, Netflix, etc.) | ~10 to 60 second delay
Live broadcast, no interaction | ~6 to 18 second delay
One-to-many live stream with light audience interaction (click here, etc.) | 2 to 3 second delay
One-to-many live stream with heavy interaction (host interacting with audience in real time) | <400 ms delay

If you have worked with images, you understand the correlation between quality and file size. A high-resolution photograph requires more bandwidth to transmit, and the same is true of high-quality audio and video. When the size of the stream exceeds the available bandwidth, things begin to slow down.

In real-time interactive streaming, where latency is the driving factor, you often have no choice but to reduce the bandwidth requirement and/or optimize the network. This is why Zoom isn’t able to deliver the same video quality as Netflix. Not all situations demand the same level of latency, or quality. Ideally, your interactive real-time streaming solution will afford some flexibility in selecting the most appropriate balance, whether you need live streaming with virtually no delay or can tolerate a small delay given the use case and network conditions. And sometimes it is useful to be able to customize the stream based on the different roles or requirements of individual participants, as in the sketch below.
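
To illustrate that kind of flexibility, here is a hypothetical set of latency/quality profiles loosely mirroring the use cases in the table above. The profile names, latency ceilings, and bitrates are placeholders, not values from any particular SDK.

```typescript
// Sketch: hypothetical latency/quality profiles per use case.
// Names and numbers are illustrative, not from any SDK.
interface Profile {
  maxLatencyMs: number;      // latency ceiling the experience can tolerate
  targetBitrateKbps: number; // quality budget that fits under that ceiling
}

const profiles: Record<string, Profile> = {
  videoOnDemand:    { maxLatencyMs: 30_000, targetBitrateKbps: 8_000 }, // buffer freely, maximize quality
  liveBroadcast:    { maxLatencyMs: 10_000, targetBitrateKbps: 5_000 },
  lightInteraction: { maxLatencyMs: 2_500,  targetBitrateKbps: 2_500 },
  heavyInteraction: { maxLatencyMs: 400,    targetBitrateKbps: 1_200 }, // trade quality for delay
};

// Example: a host interacting with the audience needs the strictest profile.
const active = profiles["heavyInteraction"];
console.log(`Cap latency at ${active.maxLatencyMs} ms, encode at ${active.targetBitrateKbps} kbps`);
```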

Achieving Ultra-Low Latency

With no control over the devices (phones, tablets, etc.) being used, there are really only two places where we can impact latency. One is encoding and the other is the network.

Encoding

Encoding is the process of converting raw video into a digital format suitable for transmission. This is the job of the codec (encoder/decoder), and in streaming there will be one for the video and another for the accompanying audio. On the video side, this is really all about compressing the raw video down to a size suitable for the internet. A lot of compression can be achieved by removing detail that falls outside the range of human perception. Beyond this, though, it is about making tradeoffs between perceptible quality, bandwidth availability, and latency requirements. Different codecs have different strengths and weaknesses, and it is important to have the right one for the job at hand.
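
As a concrete example of expressing those tradeoffs, here is a sketch using the browser’s WebCodecs API to configure a video encoder for real-time use. The codec string, resolution, and bitrate are illustrative choices, and sendOverNetwork is a placeholder; verify option names against current WebCodecs documentation.

```typescript
// Sketch: configuring a WebCodecs VideoEncoder for real-time streaming.
// The numbers are illustrative; sendOverNetwork is a placeholder function.
declare function sendOverNetwork(chunk: EncodedVideoChunk): void;

const encoder = new VideoEncoder({
  output: (chunk) => sendOverNetwork(chunk),
  error: (e) => console.error("Encoder error:", e),
});

encoder.configure({
  codec: "vp8",
  width: 1280,
  height: 720,
  bitrate: 1_000_000,      // ~1 Mbps: lower than VOD quality, easier on latency
  framerate: 30,
  latencyMode: "realtime", // prioritize low delay over per-frame quality
});
```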

Network

There are two facets to the network side of the equation. One is packet loss concealment (also a codec function) and the other is network architecture and routing.

PACKET LOSS CONCEALMENT (PLC)

Information sent across packet-switched networks (like the internet) is broken down into packets. Ideally, packets are received complete and in the order sent, but inevitably this is not always the case. There are several well-established methods for dealing with such transmission damage, but the catch is that they all depend on buffering (time/latency). With most of the data sent across the internet, this doesn’t matter. Take, for example, an email message, where a few extra seconds of delivery time will never be noticed. With streaming media, however, it is important that the packet loss concealment (PLC) arsenal include low-latency options, especially with real-time, interactive streaming. Again, different situations will require different strategies to strike the appropriate balance between perceptible quality and latency, and it is important to have that flexibility, as the sketch below suggests.
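
For intuition, here is a toy concealment step for audio playout: when a packet misses its deadline, replay an attenuated copy of the last good frame instead of buffering while waiting for a retransmission. The AudioFrame type and frame size are invented for illustration and do not come from any SDK.

```typescript
// Sketch: toy packet-loss concealment (PLC) for audio playout.
// AudioFrame and the 20 ms @ 48 kHz frame size are illustrative.
interface AudioFrame { seq: number; samples: Float32Array; }

let lastGoodFrame: AudioFrame | null = null;

function nextFrameForPlayout(received: AudioFrame | undefined, expectedSeq: number): AudioFrame {
  if (received && received.seq === expectedSeq) {
    lastGoodFrame = received; // normal case: play what arrived
    return received;
  }
  // The packet missed its deadline. Rather than buffering (adding latency)
  // while waiting for a retransmission, conceal the gap by replaying the
  // last good frame at reduced volume, or silence if nothing has arrived.
  const samples = lastGoodFrame
    ? lastGoodFrame.samples.map((s) => s * 0.5)
    : new Float32Array(960); // 20 ms of silence at 48 kHz
  return { seq: expectedSeq, samples };
}
```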

NETWORK ARCHITECTURE AND ROUTING

In places where the public internet is generally robust, network structure and routing might not be that big of a deal, but when supporting global real-time interaction it is of paramount concern. Why? Because all pathways are not created equal, and routing on the public internet is determined by business agreements between internet service providers (ISPs) rather than by efficiency. It is very difficult to consistently achieve low-latency connections to all participants without intervention of some sort. Because of this, the only viable approach to the global delivery of real-time, interactive streaming is to employ a scalable managed network.
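
One way to picture what a managed network does differently: probe several candidate relays and route through whichever responds fastest, instead of accepting the default path. The sketch below is purely illustrative; probeRtt, the /ping endpoint, and the relay list are hypothetical.

```typescript
// Sketch: picking an ingress relay by measured round-trip time (RTT)
// instead of relying on default internet routing. All names are hypothetical.
async function probeRtt(relayUrl: string): Promise<number> {
  const start = performance.now();
  await fetch(`${relayUrl}/ping`); // assumes each relay exposes a /ping endpoint
  return performance.now() - start;
}

async function pickLowestLatencyRelay(relays: string[]): Promise<string> {
  const rtts = await Promise.all(relays.map(probeRtt));
  let best = 0;
  for (let i = 1; i < rtts.length; i++) {
    if (rtts[i] < rtts[best]) best = i;
  }
  return relays[best];
}
```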

The bottom line is that low-latency streaming is not a simple task—especially if you plan to build from scratch using WebRTC. There is a solution though—and that is to rely on an established real-time engagement (RTE) platform to do the heavy lifting. Such platforms have significant resources and expertise dedicated to overcoming latency and the other, often evolving, challenges of real-time communication. Working with an RTE platform allows you to remain focused on building your business, while the platform ensures cutting-edge real-time engagement experiences for your customers.

Latency Summarized

Let’s quickly recap the issue of latency as it relates to supporting live/real-time interaction:

Benefits of Low-Latency Streaming

  • Supports the best possible customer experience
  • Is an absolute requirement (to some degree) of RTC

Drawbacks of Low-Latency Streaming

  • Requires a more sophisticated approach to encoding and PLC
  • Benefits from a managed network
  • Comes at a higher cost

Agora’s Approach to Low-Latency Streaming

Agora has taken a distinctive approach to addressing latency requirements. We have developed our own codecs specifically designed to meet the challenges of real-time audio and video, including high quality-to-bitrate ratios and a hybrid approach to packet loss concealment (PLC). More importantly, we recognize that not every situation requires ultra-low-latency video streaming. We offer different levels of service and the technology necessary to assess requirements and make adjustments on the fly. We don’t believe that you should be paying for any lower latency than you need.

Agora has made a considerable investment in building and optimizing a global network specifically designed for low-latency, real-time video, audio, and messaging. We have established more than 250 strategically located data centers of our own, and we have co-located servers with every major ISP. We connect these resources with our own software-defined real-time network (SD-RTN). In most situations, we are able to control all but the first and last mile. This gives us the ability to optimize routing based on real-time conditions, and because of this, we are able to consistently deliver an average of 400 ms latency worldwide.
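
In practice, those service levels surface in Agora’s SDKs as a role and latency-level choice. The sketch below is based on the agora-rtc-sdk-ng Web SDK; the App ID, channel, and token are placeholders, and option names should be verified against Agora’s current documentation.

```typescript
// Sketch: joining a live channel as an audience member and selecting a
// latency level with Agora's Web SDK (agora-rtc-sdk-ng). Placeholders:
// <APP_ID> and <TOKEN> come from your own Agora project.
import AgoraRTC from "agora-rtc-sdk-ng";

async function joinAsAudience(): Promise<void> {
  const client = AgoraRTC.createClient({ mode: "live", codec: "vp8" });

  // level 1 = low latency, level 2 = ultra-low latency (interactive audience)
  await client.setClientRole("audience", { level: 2 });

  await client.join("<APP_ID>", "demo-channel", "<TOKEN>", null);
}
```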

Next Steps

If you want to learn more about how to stream without delay, check out these resources: