
What is Latency? – Network Meaning & Reduction Techniques

By Team Agora

What Is Latency and How Can It Be Reduced?

Many people confuse internet speed with bandwidth, but it’s not entirely their fault. Internet service providers often claim that their connections are as fast as 50 Mbps, or that their speeds are 30% faster than their competitors’. In reality, though, your 50 Mbps internet connection has little to do with speed and more to do with the amount of data you can receive per second.

Simply put, true internet speed comes down to a combination of bandwidth and latency. But what does latency mean? Read on to find out.

In this blog post, we’ll discuss the definition of latency and how it is different from bandwidth. Plus, we’ll explore ways to reduce latency.

What is Latency?

Latency is the time that elapses between a user action and the subsequent response.

In networking, latency refers specifically to delays that occur within a network or on the internet. In practice, it is the time between a user taking an action and the site or app responding to that action.

Let’s consider an example to better understand the meaning of latency.

Suppose a user clicks a link to a webpage and the browser displays that webpage 300 ms after the click. What does this 300 ms mean? If you guessed that it’s the delay (also known as latency) between the user’s click and the browser’s response, you’re right!
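You can observe this delay yourself. Here is a minimal sketch in TypeScript that times a single round trip with performance.now(); the URL is a placeholder, and a real measurement would average many samples:

    // Minimal sketch: time one request round trip in the browser.
    // The URL is a placeholder; average many samples for a real measurement.
    async function measureLatency(url: string): Promise<number> {
      const start = performance.now();
      await fetch(url, { method: "HEAD", cache: "no-store" }); // skip the body and the cache
      return performance.now() - start; // elapsed milliseconds
    }

    measureLatency("https://example.com/").then((ms) =>
      console.log(`Round trip took ${ms.toFixed(0)} ms`)
    );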

Latency vs. Bandwidth

You can think of latency as the amount of time it takes for data to travel from one point to another. In this way, it depends on the physical distance that data must cover through cables, routers, and other network infrastructure to reach its target.

On the other hand, bandwidth is the rate of data transfer for a fixed period of time. As the name implies, bandwidth is the width of a communication band. The wider the communication band, the more data can flow through it.

Regardless of how much data you can send and receive simultaneously, it can only travel as fast as latency allows. This means that sites run slower for some users depending on their physical location. Figuring out how to improve this speed for users in all corners of the world is what reducing global latency is all about.
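To see how the two interact, consider a back-of-the-envelope sketch. The figures below are illustrative assumptions, not measurements: total transfer time is roughly latency plus payload size divided by bandwidth.

    // Illustrative only: total time ≈ latency + payload / bandwidth.
    function transferTimeMs(payloadKB: number, bandwidthMbps: number, latencyMs: number): number {
      // KB → kilobits; 1 Mbps moves 1 kilobit per millisecond.
      const transferMs = (payloadKB * 8) / bandwidthMbps;
      return latencyMs + transferMs;
    }

    // The same 500 KB page over the same 50 Mbps connection:
    console.log(transferTimeMs(500, 50, 20));  // 100 ms with 20 ms of latency
    console.log(transferTimeMs(500, 50, 300)); // 380 ms with 300 ms of latency

With identical bandwidth, the high-latency connection delivers the page almost four times slower.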

What Affects Latency?

The following are the seven main factors that affect network latency:

  1. Transmission media: Media such as copper cabling, fiber optics, and wireless links all have physical limitations and can impact latency simply because of their nature. For instance, serializing a packet onto a 1.544 Mbps T1 line takes several milliseconds, while pushing the same packet onto gigabit Ethernet over a CAT5 cable takes only microseconds.
  2. Packet size: A larger packet takes longer to travel round trip than a smaller one, simply because more bits must be pushed onto each link along the way.
  3. Propagation delay: Propagation delay is the time it takes a signal to physically travel from source to destination, bounded by the speed of light in the medium. On top of that, if each gateway node takes time to inspect and perhaps alter a packet’s header, such as decrementing the hop count in the time-to-live (TTL) field, that processing increases latency. (A simple model combining these delays appears after this list.)
  4. Packet loss and jitter: Latency problems can also be introduced by a high percentage of packets failing to reach their destination (packet loss), or by excessive variation in the time it takes packets to travel from one system to another (jitter).
  5. Routers: Routers take time to analyze the header information of a packet. In some instances they even add extra information. Each hop a packet takes from one router to another increases the latency time.
  6. Signal strength: If the signal is weak and has to be boosted by a repeater, this can introduce latency.
  7. Storage delays: When a packet is stored or accessed, intermediate devices such as switches and bridges can introduce storage delays.
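To make these factors concrete, here is a simple one-way latency model combining propagation, serialization, and per-hop processing. Every constant in it is an assumption chosen for illustration (light travels through fiber at roughly 200,000 km/s; each router hop is charged a flat 0.5 ms), not a measured value:

    // Back-of-the-envelope one-way latency model (illustrative assumptions only).
    function oneWayLatencyMs(
      distanceKm: number,
      packetBytes: number,
      linkMbps: number,
      hops: number,
      perHopMs = 0.5 // assumed router processing cost per hop
    ): number {
      const propagationMs = distanceKm / 200; // light in fiber ≈ 200,000 km/s → 1 ms per 200 km
      const serializationMs = (packetBytes * 8) / (linkMbps * 1000); // bits pushed onto each link
      return propagationMs + serializationMs * hops + perHopMs * hops;
    }

    // A 1,500-byte packet over 4,000 km, 100 Mbps links, 12 hops:
    console.log(oneWayLatencyMs(4000, 1500, 100, 12).toFixed(1), "ms"); // ≈ 27.4 ms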

How to Reduce Latency

You can reduce latency by using a number of different techniques. Here are several examples:

  • Using HTTP/2: HTTP/2 helps decrease latency by multiplexing many requests and responses over a single connection, reducing the number of round trips between sender and receiver.
  • Reducing the number of external HTTP requests: This applies not only to images but also to other external resources like CSS or JS files. When you reference information from a server other than your own, you’re making an external HTTP request, which can significantly increase website latency depending on the speed and quality of the third-party server.
  • Using a Software Defined Real-Time Network: Agora’s SD-RTN technology works like a traditional CDN in that it is a dedicated network for delivering your content to end users. However, it far exceeds the capabilities of a CDN in that it is designed from the ground up to offer ultra-low latency and the interactive experiences needed for the future of real-time engagement.
  • Using prefetching methods: Prefetching resources doesn’t actually decrease latency, but it does improve your site’s perceived performance, because the latency-intensive processes happen in the background while the user is browsing the current page. (A minimal example follows this list.)
  • Using browser caching: Browser caching lets the browser store certain website resources locally, which improves load times and decreases the number of requests back to the server.
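As an illustration of the last two techniques, the sketch below injects a prefetch hint into the current page. The URL is a placeholder, and prefetch hints are advisory, so the browser may ignore them:

    // Minimal sketch: hint the browser to fetch a resource it will likely need next.
    function prefetch(url: string): void {
      const link = document.createElement("link");
      link.rel = "prefetch"; // advisory hint; the browser decides when (or whether) to fetch
      link.href = url;
      document.head.appendChild(link);
    }

    prefetch("/next-page.html"); // placeholder path

On the caching side, the server typically opts static resources into long-lived browser caching with a response header such as Cache-Control: public, max-age=31536000.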

Want to enjoy high-quality, ultra-low-latency live video streaming experiences? Agora offers developer APIs for voice and video with extremely low latency.

Get started for free today or call us at 408-879-5885 to learn more about our real-time messaging solutions that can power your web, mobile, and desktop applications.