
Web Performance Limitations (Part 1)


When web developers talk about the web today, they often discuss topics around web performance optimization (WPO). It has become an even more important topic, since we now use the browser for almost any type of application, on many different devices and over many different connection types, from all over the world. It is a complex environment in which countless lines of code are written and executed. Companies like Amazon and eBay see significant drops in revenue when their site loading times increase. In 2008, Amazon reported that they lose approximately 1% of revenue for every 100 ms increase in loading time, which, considering Amazon's 2015 revenue of 107 billion dollars a year, would imply a loss of 1.07 billion dollars a year.
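Just as a back-of-the-envelope check of that figure (a sketch of the arithmetic, not an official Amazon calculation):

```typescript
// Rough arithmetic behind the numbers quoted above:
// ~1% of revenue lost per additional 100 ms of loading time.
const annualRevenueUsd = 107e9; // Amazon's reported 2015 revenue
const lossSharePer100ms = 0.01; // 1% per extra 100 ms (2008 estimate)

const estimatedLossUsd = annualRevenueUsd * lossSharePer100ms;
console.log(`${estimatedLossUsd / 1e9} billion dollars per year`); // 1.07
```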

And that’s by far not everything. In a recent talk, Bruce Lawson, one of the lead Opera developers, mentioned that many more users and devices will hit the WWW: emerging markets in Africa and India are growing fast. He predicts that roughly 3 billion more users will join the WWW in the next 50 years, and that they will come from these emerging markets. They will not connect to the internet with a high-end desktop computer; they will start using the internet with a low-budget smartphone, a slow internet connection and limited processing power. These markets will eventually be our next customers, and therefore we need to emphasise the need for performance, so that everybody is able to use our web applications no matter their device or connection quality!

In this blog series I want to discuss what web performance is, how to look at it from a user-centric perspective, and which optimization techniques we can use to make our web applications faster.

First, let’s start this series by looking at the limits of web performance optimization, to build awareness of the time scales we will be measuring in: seconds (s) and milliseconds (ms).

Physical limitations

For many developers web performance is all about optimizing the backend and frontend to speed up loading time and deliver a fast, snappy web application. If you take a wider look at the topic, however, you will see that there are underlying theoretical and physical limitations. Knowing these limits is essential for web developers: it helps us treat performance as a first-class concern and judge whether further optimizations make sense and are going in the right direction.

Ilya Grigorik, one of Google’s performance gurus, shows in his book “High Performance Browser Networking“ an interesting table of signal latencies in a vacuum and in fiber cables over different distances. The speed of light is the maximum speed at which energy and information can travel. It is fast, really fast: at 299,792,458 meters per second you could travel around the earth in roughly 133.7 ms. That is the theoretical limit Einstein described in his theory of special relativity. With today’s web technologies, however, there are always additional factors limiting the maximum speed: when we send a network packet through a fiber cable, it travels roughly 1.4-1.6 times slower than theoretically possible. We replicated the table from Ilya’s book using inovex office locations:

Route | Distance | Time, light in vacuum | Time, light in fiber | Round-trip time (RTT) in fiber
inovex Karlsruhe to inovex Hamburg | 520 km | 1.7 ms | 2.6 ms | 5.2 ms
inovex Karlsruhe to San Francisco | 9,219 km | 30.8 ms | 46.2 ms | 92.4 ms
inovex Karlsruhe to Sydney | 16,539 km | 55.2 ms | 82.8 ms | 165.6 ms
Equatorial circumference | 40,075 km | 133.7 ms | 200 ms | 200 ms
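
The values in the table follow from simple arithmetic: propagation time is distance divided by speed. Here is a minimal sketch, assuming a slowdown factor of 1.5 for fiber (in the middle of the 1.4-1.6 range mentioned above):

```typescript
// Propagation delay: time = distance / speed.
const SPEED_OF_LIGHT_KM_PER_MS = 299_792.458 / 1_000; // ≈ 299.8 km per ms
const FIBER_SLOWDOWN = 1.5; // assumed factor, see text above

function oneWayMs(distanceKm: number, inFiber = true): number {
  const speed = SPEED_OF_LIGHT_KM_PER_MS / (inFiber ? FIBER_SLOWDOWN : 1);
  return distanceKm / speed;
}

const rttMs = (distanceKm: number) => 2 * oneWayMs(distanceKm);

// inovex Karlsruhe to Sydney, as in the table above:
console.log(oneWayMs(16_539, false).toFixed(1)); // ≈ 55.2 ms in vacuum
console.log(oneWayMs(16_539).toFixed(1));        // ≈ 82.8 ms in fiber
console.log(rttMs(16_539).toFixed(1));           // ≈ 165.5 ms RTT (165.6 ms in the table due to rounding)
```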

Mental context switches

So what is this all about? Studies have shown that a web application should load content within 1000 ms and respond to input within 100 ms, otherwise users get distracted and perform a mental context switch. We will discuss these suggestions and user-centric performance models like Google’s RAIL model in upcoming articles. For now, just take these time limits and think about sending network packets over a fiber connection from inovex Karlsruhe to a server located in Sydney. The theoretical round-trip time (RTT) is around 165 ms, but that is not a realistic value: there are many more bottlenecks to take into account (for example, we calculated with direct flight distances, but cables do not run in a straight line from one place to another). A more realistic value would be around 300 ms, and even that is still quite optimistic.

Pinging Australia

You can try that for yourself using your internet connection. I just googled Australia and the first hit was the Australian Government website. So I looked up the server location via https://geoiptool.com:

A Screenshot showing the geo-information for www.australia.gov.au

Finally, I used the terminal to ping that location several times; the result was an average RTT of 304 ms.
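
If you prefer to script such a measurement instead of using the ping command, here is a rough Node.js sketch that times a TCP handshake, which takes roughly one round trip (unlike ping it uses TCP rather than ICMP, the hostname is just the one from the example above, and results will vary with your connection):

```typescript
// Estimate the RTT to a host by timing how long a TCP connection takes
// to be established (one handshake ≈ one round trip).
import * as net from "net";

function measureRttMs(host: string, port = 443): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = process.hrtime.bigint();
    const socket = net.connect({ host, port }, () => {
      const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
      socket.destroy();
      resolve(elapsedMs);
    });
    socket.on("error", reject);
  });
}

async function main() {
  // Take a handful of samples and average them, similar to what ping reports.
  const samples: number[] = [];
  for (let i = 0; i < 5; i++) {
    samples.push(await measureRttMs("www.australia.gov.au"));
  }
  const avg = samples.reduce((sum, s) => sum + s, 0) / samples.length;
  console.log(`average RTT ≈ ${avg.toFixed(0)} ms`);
}

main();
```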

When we take a closer look at these values and at the suggestions from the user studies mentioned above, we see that we cannot make the packets travel faster: the finite speed of light limits the theoretical maximum. What we can do is use other optimization techniques to improve the RTT, such as content delivery networks (CDNs), which ship content to a location near the user so that latencies decrease.

Let’s assume the Australian Government website we want to communicate with uses a service provider that hosts a copy of the site in Hamburg. Instead of sending network packets from inovex Karlsruhe to Sydney, we would send them from inovex Karlsruhe to a server located in Hamburg. An RTT from inovex Karlsruhe to Hamburg takes roughly 5.2 ms; adjusted for the bottlenecks mentioned above it would take roughly 17 ms, which is still much better than sending packets from Karlsruhe to Sydney.
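To make the idea concrete, here is a minimal sketch of picking the nearest location; the candidate list is hypothetical, and real CDNs select edge locations via DNS or anycast routing rather than in client code:

```typescript
// Pick the candidate server location with the lowest theoretical fiber RTT.
interface Edge { name: string; distanceKm: number }

const candidates: Edge[] = [
  { name: "origin in Sydney", distanceKm: 16_539 },
  { name: "edge in Hamburg", distanceKm: 520 },
];

const FIBER_SPEED_KM_PER_MS = (299_792.458 / 1_000) / 1.5; // ≈ 200 km per ms

const theoreticalRttMs = (e: Edge) => (2 * e.distanceKm) / FIBER_SPEED_KM_PER_MS;

const best = candidates.reduce((a, b) =>
  theoreticalRttMs(a) <= theoreticalRttMs(b) ? a : b
);
console.log(`${best.name}: ~${theoreticalRttMs(best).toFixed(1)} ms RTT`);
// → "edge in Hamburg: ~5.2 ms RTT", versus ~165.5 ms for Sydney
```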

Most web applications need much longer than a hundred-something milliseconds to load. So keep in mind: if you are talking about improving your application in the range of milliseconds, you already have a really fast application; if your site loads in, say, 15 seconds, there is a lot of room for improvement. Thinking in milliseconds is still a good habit, because it shows that an improvement of 100 ms can matter a lot.

In the upcoming articles we will explore why these limits are relevant, but you do not need to worry about them too much yet. We will even talk about situations where applications become too fast and start to mess with people’s psychological perception of time.

Summary

With that in mind, let’s finish this article with the key takeaways:

  • Worldwide, many more users will gain access to the internet, and they will want to use your web application on slow connections and low-budget smartphones.
  • We cannot make web applications appear without any delay.
  • There are physical and theoretical limits, but we can optimize to get close to them.
  • The better the web performance already is, the harder further optimization becomes.
  • Every millisecond counts.
  • Try to look at web performance from a wider perspective: learn how the foundations of your technology stack work.

To be continued…

