What it takes to build real-time voice and video infrastructure

In this series, WebRTC expert Tsahi Levent-Levi provides an overview of the essential parts of a real-time voice and video infrastructure—from network to software updates. Check out his informative videos and read how Agora’s platform solves the challenges so you can focus on your innovation.

5.1 Distance and Latency

Category: Chapter 5: Media Servers

Understand latency, its impact on RTE, and different ways of dealing with it to maintain a good real-time experience.

Dive deeper: Read more about low-latency streaming.


When dealing with media servers, we need to look at distance and latency. Here’s what we’re going to do now: we’re going to introduce the notion of distance in real-time engagement, and we’re going to understand the impact of latency on our application and the user experience.

Read the full transcript

For all intents and purposes, distance equals latency. If we have one person in South America and another in India and we want to connect the two of them, there is a distance between them. The farther away they are from each other, the more latency we should expect from the network. Now, that’s a nice theory, but it’s not always the case. That’s because the actual distance we have depends on internet routing, and internet routing depends on carrier priorities: how the carriers that the users rely on for their networking have decided to route the traffic. Those carrier priorities, in turn, depend on the ownership of the networks that they have, the quality that they want to provide to their users, and the cost associated with offering that quality. So, in some cases, we’re going to see physical distances being quite different from the actual network distances and the latencies that come with them.
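To get a feel for why distance equals latency, we can compute the theoretical floor on round-trip time between two points. This is a back-of-the-envelope sketch, not part of the original talk: the coordinates are approximate, and the figure of roughly 200 km per millisecond for light in fiber is a common rule of thumb, not a measured value for any real route.

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers (haversine formula)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def min_rtt_ms(distance_km):
    """Theoretical floor on round-trip time over fiber.
    Light in fiber travels at roughly 2/3 the speed of light (~200 km/ms),
    so every 1000 km adds about 5 ms one way, 10 ms round trip."""
    return 2 * distance_km / 200.0

# Approximate coordinates: Sao Paulo and Mumbai
d = great_circle_km(-23.55, -46.63, 19.08, 72.88)
print(round(d), "km, best-case RTT:", round(min_rtt_ms(d)), "ms or more")
```

Real-world RTTs are always higher than this floor, precisely because carrier routing rarely follows the great circle.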

So how do we deal with that exactly? One way is to use server-assisted routing. Instead of having the participants find each other directly, or connect to a remote server, we want to offer a solution where the server is the one handling the routing, not the carrier. What we want to achieve is to get the user’s traffic off the public internet as fast as possible. That means connecting the user to the closest server that we can find; closest, again, from a latency perspective. So we want to do that as close to the edge as possible. Once we do that, we need to route the traffic over our managed network, across the servers that we have. We’re going to enforce a certain type of routing over the machines that we own and operate.
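The first step of server-assisted routing, picking the edge server closest to the user by latency rather than by geography, can be sketched as follows. This is a minimal illustration: the server names are hypothetical, and `measure_rtt` stands in for a real network probe (in practice something like a STUN ping) by returning canned numbers so the example is self-contained.

```python
# Hypothetical probe results for a user in South America (ms round-trip).
CANNED_RTTS_MS = {
    "edge-sa-east": 18.0,   # nearby edge, lowest latency
    "edge-us-east": 120.0,
    "edge-eu-west": 195.0,
}

def measure_rtt(server: str) -> float:
    """Stand-in for an actual latency probe against an edge server."""
    return CANNED_RTTS_MS[server]

def pick_edge(servers):
    """Connect the user to the closest server *by latency*, not geography."""
    return min(servers, key=measure_rtt)

best = pick_edge(CANNED_RTTS_MS)
print("connect via", best)  # -> edge-sa-east
```

Note that nothing here looks at the user's physical location; the decision is driven entirely by measured latency, which is the point of the technique.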

That means that we need a lot of machines on the edge, a lot of servers running on the edge of the network, to be able to capture traffic as close as possible to the user. But it brings a lot of questions with it. These questions are mainly around what exactly we optimize for in the routing rules. Do we optimize for lower latency? Is that what we want to achieve? We might get lower latency but with higher packet loss. So, do we optimize for packet loss as well? Do we want lower jitter? Remember, jitter is the fluctuation of the network: the less jitter we have, the easier it is going to be for the player to play the media it received, and the better the quality is going to be. Or do we optimize for higher bitrate?
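One common way to juggle these competing objectives is to fold latency, loss, jitter, and bitrate into a single score per candidate route. The sketch below is an assumption of mine, not Agora's actual algorithm, and the weights are purely illustrative, not tuned values.

```python
def path_score(latency_ms, loss_pct, jitter_ms, bitrate_kbps,
               w_latency=1.0, w_loss=50.0, w_jitter=2.0, w_bitrate=0.01):
    """Composite route score: lower is better.
    Packet loss is penalized heavily; available bitrate acts as a bonus.
    All weights here are illustrative placeholders."""
    return (w_latency * latency_ms
            + w_loss * loss_pct
            + w_jitter * jitter_ms
            - w_bitrate * bitrate_kbps)

# Two candidate routes between the same pair of endpoints:
direct  = path_score(latency_ms=90,  loss_pct=2.0, jitter_ms=30, bitrate_kbps=1500)
relayed = path_score(latency_ms=110, loss_pct=0.2, jitter_ms=8,  bitrate_kbps=2500)
print("prefer relayed" if relayed < direct else "prefer direct")
```

With these numbers the relayed route wins despite its higher latency, because its much lower loss and jitter outweigh the extra 20 ms, which is exactly the kind of trade-off the transcript describes.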

We need to factor all of these characteristics and parameters into the routing algorithms that we put between our servers. The most common, or rather the most naive, approach is to simply connect the servers to one another and geolocate the user relative to these servers. But we can optimize further than that, and doing so gives us better performance over time.
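Once the edge servers form a mesh, choosing how traffic hops between them becomes a shortest-path problem. A plausible sketch, again an assumption rather than a described implementation, is Dijkstra's algorithm over the server mesh, with edge weights taken from measured latencies (or from composite scores combining loss, jitter, and bitrate). The region names and numbers below are hypothetical.

```python
import heapq

def best_route(graph, src, dst):
    """Dijkstra's shortest path over a mesh of media servers.
    Edge weights are measured latencies in milliseconds."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]

# Hypothetical mesh: the "direct-ish" hops are slower than relaying via Europe.
mesh = {
    "sa-east":  {"us-east": 110, "eu-west": 190},
    "us-east":  {"sa-east": 110, "eu-west": 75, "in-south": 230},
    "eu-west":  {"sa-east": 190, "us-east": 75, "in-south": 120},
    "in-south": {"us-east": 230, "eu-west": 120},
}
path, total_ms = best_route(mesh, "sa-east", "in-south")
print(path, total_ms)
```

Measuring the weights continuously and re-running the search is one way such a managed network can keep outperforming the naive "geolocate and connect" baseline.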

To summarize, it is our role to shorten the distance and the latency between our users. Distances aren’t geographic distances, but rather network distances, and they depend on the carriers’ own routing and the rules that they have. How we plan and run our infrastructure is going to matter a lot to the latencies that the end users are going to have and the quality that we’re going to provide to them. Thank you.