
What’s the Difference Between Bandwidth and Latency in Real-Time Communication?

By Team Agora

In the digital age, the quality of your streaming experience often depends on two key network performance metrics: bandwidth and latency. Every digital product that uses the internet is subject to bandwidth and latency constraints. Because the network is often the limiting factor in application performance, the growing amount of data users send and receive must be balanced against the capacity available to carry it.

To reduce latency in your apps and offer better response times for end users, you, as a developer, need to understand these limitations, the key difference between bandwidth and latency, and how each impacts performance.

This article covers the following:

  1. What is bandwidth?
  2. How does bandwidth impact real-time communication?
  3. What is latency?
  4. How does latency impact real-time communication?
  5. Bandwidth vs. latency in causing RTC issues
  6. Latency and bandwidth issues: solutions for users
  7. How developers can provide better RTC quality

What is bandwidth?

Bandwidth refers to the maximum capacity of your internet connection to transfer data. It's expressed as the number of megabits per second (Mbps) that can pass through a network at any given time. A common mistake many app users make is assuming that bandwidth refers to their internet speed. Internet speed is not the same thing as bandwidth, although bandwidth contributes to it. A wide range of factors, such as the type of internet service (broadband, cable, satellite, wireless) and the medium the data travels over (fiber-optic cables, Wi-Fi signals), influence a user's available bandwidth.

Think of a network connection as a highway; the cars driving along it are data packets. A five-lane highway has more bandwidth than a two-lane local road because it can accommodate a greater number of cars. However, a wide highway tells you nothing about how fast the cars are going, just as bandwidth alone can't determine internet speed. It simply tells you how much data can move at a given time.

Bandwidth has two directions. A user's "down" bandwidth is the capacity for receiving or downloading data, while their "up" bandwidth is the capacity for sending or uploading data. The bandwidth your users' network needs depends on the applications they're using and the number of concurrent users. Data-hungry applications like online gaming and live HD video consume more bandwidth than low-data activities like email or browsing static web pages.
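To make the Mbps figure concrete, here is a minimal sketch of the arithmetic, using illustrative numbers (TypeScript is used for the examples throughout this article):

```typescript
// Back-of-the-envelope transfer time: bandwidth is quoted in megabits per
// second, file sizes usually in megabytes, and 1 byte = 8 bits.
function transferTimeSeconds(payloadMB: number, bandwidthMbps: number): number {
  const payloadMegabits = payloadMB * 8;
  return payloadMegabits / bandwidthMbps;
}

// A 25 MB file over a 100 Mbps link: (25 * 8) / 100 = 2 seconds, in theory.
console.log(transferTimeSeconds(25, 100)); // 2
```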

How does bandwidth impact real-time communication?

So how does bandwidth affect real-time communication (RTC)? Bandwidth is crucial for RTC because low bandwidth causes quality issues like lag. And the more participants in a call, the higher the bandwidth needs.

For instance, a basic one-on-one video call (such as Skype) requires a minimum bandwidth of 1.5 to 2 Mbps down and 2 Mbps up. On the other hand, video teleconferencing apps like Zoom are more data-hungry and need at least 6 Mbps to transmit a quality call.

Bandwidth requirements also go up depending on the video resolution. Typically, for streaming 480p video, a user needs 3 Mbps minimum. 4K video streaming requires a whopping 25 Mbps for the smoothest viewing experience.
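Those thresholds are easy to turn into a quick client-side check. A minimal sketch; the numbers simply restate the figures above and are rough guides, not official requirements of any particular product:

```typescript
// Illustrative minimum downstream bandwidth (Mbps) per use case,
// restating the figures in the text; not official product requirements.
const MIN_DOWN_MBPS = {
  "one-on-one-call": 2,
  "group-conference": 6,
  "480p-stream": 3,
  "4k-stream": 25,
} as const;

function meetsMinimum(useCase: keyof typeof MIN_DOWN_MBPS, measuredMbps: number): boolean {
  return measuredMbps >= MIN_DOWN_MBPS[useCase];
}

console.log(meetsMinimum("4k-stream", 20)); // false: 20 Mbps < 25 Mbps
```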

However, remember that the stated bandwidth of your users' network refers only to its theoretical maximum capacity. In practice, the actual amount of data they can send and receive is smaller, because most applications and protocols add extra bits or perform extra processing, which introduces overhead.

The data that actually gets through, minus this overhead, is called "goodput" or "good throughput." For example, if a user sends a file via HTTP, each data packet is wrapped in up to 66 bytes of header information. Thus, the actual amount of useful data users send and receive is smaller than the declared bandwidth.

Generally, the real bandwidth of your users' network should be no less than approximately 80% of what was advertised. So, if they are on a 100 Mbps plan, their usable bandwidth shouldn't drop much below 80 Mbps.
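Here is a minimal sketch of that calculation, assuming a standard 1,500-byte frame and the 66-byte header figure from the HTTP example above; real overhead varies by protocol and options:

```typescript
// Goodput: the share of raw bandwidth left after per-packet protocol
// overhead. Assumes a 1,500-byte frame carrying up to 66 bytes of headers.
function goodputMbps(bandwidthMbps: number, mtuBytes = 1500, headerBytes = 66): number {
  const payloadFraction = (mtuBytes - headerBytes) / mtuBytes;
  return bandwidthMbps * payloadFraction;
}

// On a 100 Mbps link, roughly 95.6 Mbps is actual payload, before
// retransmissions, Wi-Fi loss, or ISP congestion push it lower still.
console.log(goodputMbps(100).toFixed(1)); // "95.6"
```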

What is latency?

Latency pertains to the delay in data transmission over the network. It measures the time it takes for a data packet to travel from one point to another. This delay is measured in milliseconds (ms). Whereas bandwidth refers to the volume of data sent, latency refers to the time it takes for the data to be transmitted. 

If bandwidth is the highway’s capacity, latency is the time it takes for a car (data packet) to travel from point A to point B. 

Due to how networks function, latency can never be 0 ms: hardware, physical connections, and protocols always delay a data packet's transmission by some measure of time. Packet errors add further delay, because the receiving host must request retransmissions from the sender.

Distance plays a role in latency: the farther the data has to travel, the longer the transmission takes. For example, in ideal conditions, a 100-mile distance produces a latency of around 5-10 ms. Contrast that with the 40-50 ms delay between two hosts that are 2,000 miles apart.

However, that figure only measures the time it takes data to reach its destination. All internet communication, including RTC applications like teleconferencing, is a two-way street. Thus, the delay users actually experience is roughly double the one-way latency; this is the round-trip time (RTT), also called ping time.
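A lower bound on these numbers comes from physics: light in fiber travels at roughly two-thirds of its speed in a vacuum, or about 124 miles per millisecond. A minimal sketch, assuming a straight fiber path; real routes add router hops, queuing, and detours on top:

```typescript
// One-way propagation delay over fiber (~124 miles per millisecond).
const FIBER_MILES_PER_MS = 124;

function propagationDelayMs(distanceMiles: number): number {
  return distanceMiles / FIBER_MILES_PER_MS;
}

// RTT is the trip there and back again.
function roundTripMs(distanceMiles: number): number {
  return 2 * propagationDelayMs(distanceMiles);
}

// 100 miles: ~0.8 ms one-way at the speed of light in glass; the rest of
// the 5-10 ms observed in practice is routing and queuing overhead.
console.log(propagationDelayMs(100).toFixed(1)); // "0.8"
console.log(roundTripMs(2000).toFixed(1)); // "32.3"
```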

How does latency impact real-time communication?

Latency is a crucial metric for real-time video and audio calls. An excessively high delay causes audio/video mismatch: users see the speaker talking but hear the words only after a noticeable delay. This makes a conversation difficult to follow and real-time collaboration nearly impossible.

Generally, 100 ms is the maximum latency that allows for real-time communication at a decent quality. Anything beyond that will cause problems. 1-5 ms is ideal for the best experience possible.

To summarize, the key network latency vs. bandwidth distinction lies in their very definitions. Bandwidth is primarily a measure of data volume; a bigger bandwidth is desirable because it allows more data to be sent at any given time. Latency is primarily a measure of data speed; it tells you how long a data packet takes to travel between two points in a network. Lower latency is the goal because it means smaller delays and smoother communication.

Bandwidth vs. latency in causing RTC issues

Now that both terms have been explained, let’s look into the differences between bandwidth and latency in terms of how they impact the quality of your users’ video communication.

When most people encounter lagging or freezing issues during a video call, they automatically assume it’s a bandwidth problem, but that’s not always the case.

Imagine your users have high bandwidth, say 150 Mbps. In theory, that means they should get 150 megabits of data per second. However, they won't get anywhere near that if the data being transmitted has high latency. They may get high-quality video but encounter freezing and stuttering.

Compare this to having low bandwidth (say, 5 Mbps) but low latency. Your users will receive 5 megabits of data per second, consistently. The video might appear blurry, but the call stays fluid, with no syncing issues.

Although both metrics are important in real-time communication, latency is generally the crucial one of the two. That’s because a high latency will cause freezing and audio/video syncing issues, preventing a video or audio call from coming through in real time.

Insufficient bandwidth, at worst, causes blurry or low-quality video. While this isn't ideal, users can continue talking over audio, assuming their network meets minimum bandwidth requirements.

This is what separates real-time communication from asynchronous communication like email or text chat. Web browsing, for instance, emphasizes high bandwidth so that users can load pages all at once. In these cases, high latency is far less noticeable.

Thus, if your users are encountering problems with real-time communication applications, finding and fixing latency problems is usually more worthwhile than chasing bandwidth issues, even though both can be responsible for a low-quality connection. So, what can you recommend to your users?

Latency and bandwidth issues: solutions for users

Detecting the cause of the problem

  • Measure round-trip time with a ping test
    The ping command sends an Internet Control Message Protocol (ICMP) echo request to a specified IP address to check whether the host is reachable. The time it takes for the ping program to receive the reply from the host is the ping time (see the sketch after this list).
  • Assess bandwidth and latency with a speed test tool
    The easiest way to assess bandwidth and latency is to use a speed test tool. Doing so will quickly tell your users their network bandwidth (both download and upload) and latency. It's best to run the test multiple times throughout the day, on both wired and wireless connections. This reveals the network's average performance and helps spot the best times for a video call.

Solving the issue

Once the culprit behind a slow connection is known, it can be fixed. However, this is another area where bandwidth and latency differ significantly.

Generally, bandwidth issues are more straightforward to fix. Switching providers or plans is the easiest solution for consistent bandwidth problems; for example, upgrading from a 25 Mbps to a 50 Mbps plan should do the trick. If problems persist, the cause may be chronic trouble with your users' service provider.

Users can also minimize other traffic on their network or schedule calls away from peak times, when many people are online. This frees up more bandwidth for the video call. Again, bandwidth is like a highway: the fewer cars (data) on it, the better.

Latency, on the other hand, is trickier to resolve. Many causes of latency, such as distance and network congestion along the route, are outside your users' control. But there are still steps they can try. One is switching from Wi-Fi to Ethernet, because a wired connection is more stable and can deliver consistently lower latency.

Wireless internet, by contrast, is prone to signal disruptions that can slow data transmission. If your users have cable internet, they can also swap it for a lower-latency connection, such as fiber optic.

Simply put, neither bandwidth nor latency alone determines real-time communication quality. Instead, it's how the two interact, reflected in throughput (the amount of data actually delivered per second), that shapes your users' video call experience. High throughput means they're receiving a large amount of data (thanks to high bandwidth) quickly (thanks to low latency).
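One way to see the interaction: any protocol that limits how much data may be in flight at once (as TCP does with its window) has its throughput capped by latency, no matter how wide the pipe. A minimal sketch with an illustrative 64 KB window:

```typescript
// With a fixed in-flight window, throughput <= window / RTT,
// regardless of the link's raw bandwidth.
function maxThroughputMbps(windowBytes: number, rttMs: number): number {
  const bitsPerSecond = (windowBytes * 8) / (rttMs / 1000);
  return bitsPerSecond / 1e6;
}

// A 64 KB window over a 100 ms round trip tops out near 5 Mbps,
// whether the link is 10 Mbps or 1 Gbps.
console.log(maxThroughputMbps(64 * 1024, 100).toFixed(1)); // "5.2"
```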

Ultimately, it’s not a matter of bandwidth vs. latency vs. speed. It’s about optimizing all these metrics simultaneously to achieve consistently seamless and high-quality video and audio calls.

How developers can provide better RTC quality

In many cases, bandwidth and latency issues can be resolved, or compensated for, by the right audio/video streaming platform, reducing the need for your users to troubleshoot connection problems themselves. The Agora platform gives you just the edge you need.

Agora Software Defined Real-Time Network

Our robust global Agora Software Defined Real-Time Network (SD-RTN) of over 250 data centers uses intelligent dynamic routing algorithms. All video and audio traffic is processed through the SD-RTN to achieve low latency and high availability. The SD-RTN covers 200+ countries, so you can achieve fast connections at a global scale.

Pre-Call Tests

Before joining a call, the Agora SDK lets users run a network probe test and an echo test. The network probe test checks the last-mile network quality and returns statistics such as round-trip latency, packet loss rate, and network bandwidth. The echo test captures the user's audio and plays it back, allowing them to evaluate their network connection and hardware.
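The dedicated probe and echo tests are exposed in Agora's native SDKs; consult the Agora documentation for your platform's exact API. As a rough browser-side approximation, you can join a test channel with the Web SDK and read live quality statistics. A hedged sketch, with the app ID, channel, and token as placeholders you must supply:

```typescript
// Sampling last-mile quality with Agora's Web SDK (agora-rtc-sdk-ng).
// This approximates a pre-call check; verify API names against the docs
// for your SDK version. The app ID, channel, and token are placeholders.
import AgoraRTC from "agora-rtc-sdk-ng";

async function sampleNetworkQuality(appId: string, channel: string, token: string | null) {
  const client = AgoraRTC.createClient({ mode: "rtc", codec: "vp8" });

  // Fires periodically during the call: 0 = unknown, 1 = excellent ... 6 = down.
  client.on("network-quality", (q) => {
    console.log("uplink:", q.uplinkNetworkQuality, "downlink:", q.downlinkNetworkQuality);
  });

  await client.join(appId, channel, token, null);

  // Once the call has run for a moment, call-level stats include the
  // round-trip time to the Agora edge.
  const stats = client.getRTCStats();
  console.log(`RTT: ${stats.RTT} ms`);
}
```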

Audio and Video Settings

Selecting the right audio and video settings helps deliver the best connection quality your users' bandwidth and latency allow. Agora lets you choose from a list of preconfigured profiles that regulate the sample rate, bitrate, encoding scheme, and number of channels for your audio, as well as the dimensions, frame rate, bitrate, orientation mode, and mirror mode for your video.
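For example, with the Web SDK you can select preset encoder profiles when creating tracks. A minimal sketch; preset names like "speech_standard" and "480p_1" follow the agora-rtc-sdk-ng documentation, so verify them against your SDK version:

```typescript
// Choosing preset encoder profiles suited to a low-bandwidth connection.
import AgoraRTC from "agora-rtc-sdk-ng";

async function createTracksForLowBandwidth() {
  // Speech-tuned audio preset: lower sample rate and bitrate than music presets.
  const audioTrack = await AgoraRTC.createMicrophoneAudioTrack({
    encoderConfig: "speech_standard",
  });

  // A 480p preset keeps bitrate modest; a custom config can pin exact values.
  const videoTrack = await AgoraRTC.createCameraVideoTrack({
    encoderConfig: "480p_1",
  });

  return { audioTrack, videoTrack };
}
```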

In-Call Quality Statistics

To ensure consistently high connection quality, Agora provides the means to monitor network quality, local and remote audio and video quality, and video and audio states.

Other Features to Manage Audio and Video Quality

Among other things, the Agora SDK offers a dual-stream mode, video stream fallback, echo cancellation, and connection state monitoring. Read more about managing call quality with the Agora SDK.
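Dual-stream mode and fallback work together: the sender publishes a high-quality and a low-bitrate copy of its video, and a subscriber drops to the low stream, or to audio only, when its downlink degrades. A hedged sketch with the Web SDK; method names follow the agora-rtc-sdk-ng documentation, so verify them for your SDK version:

```typescript
// Dual-stream publishing plus per-user fallback with agora-rtc-sdk-ng.
import AgoraRTC from "agora-rtc-sdk-ng";

async function configureQualityFallback() {
  const client = AgoraRTC.createClient({ mode: "rtc", codec: "vp8" });

  // Send a low-bitrate copy alongside the full-quality video we publish.
  await client.enableDualStream();

  client.on("user-published", async (user, mediaType) => {
    await client.subscribe(user, mediaType);
    if (mediaType === "video") {
      // 2 = fall back to audio only under poor conditions; 1 = low stream.
      client.setStreamFallbackOption(user.uid, 2);
    }
  });

  return client;
}
```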

Never worry about bandwidth vs. speed vs. latency issues again. With 99.99% uptime, dynamic scalability, and easy implementation, Agora is the best tool for your next real-time communication platform.