Server-side or client-side performance testing
We have plenty of tools and talks on how to test the performance of an application, but what we really mean is server-side code execution: finding issues and pinning down problems. Tools like LoadRunner, JMeter, and VSTS can help us do this. However, when it comes to client-side performance, we do not know much, and we do not try to optimize much. I think it's time we took client-side performance testing more seriously.
It's easy to confuse client-side performance with the RUM (real user monitoring) metrics provided by APM tools, but they are not exactly the same thing. Applications like New Relic, Dynatrace, and AppDynamics may not give you the full picture of client-side performance, contrary to what many people probably assume.
Let me explain.
The W3C's explanation of the Navigation Timing API provides details on how browsers work. The picture below shows the different events that help us understand what's happening inside the browser.
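The events the spec defines can also be read programmatically. Below is a minimal sketch: the `navigationPhases` helper name is my own, and it takes an object shaped like a `PerformanceNavigationTiming` entry — in a browser you would pass `performance.getEntriesByType('navigation')[0]`.

```javascript
// Break a navigation-timing entry into the phases the spec's diagram shows.
// All timestamps are milliseconds relative to navigation start.
function navigationPhases(t) {
  return {
    dns: t.domainLookupEnd - t.domainLookupStart,
    tcp: t.connectEnd - t.connectStart,
    ttfb: t.responseStart - t.requestStart,   // server think time + one network leg
    download: t.responseEnd - t.responseStart,
    domProcessing: t.domComplete - t.responseEnd,
    total: t.loadEventEnd - t.startTime,
  };
}

// Example with a recorded entry (values are illustrative):
const sample = {
  startTime: 0, domainLookupStart: 5, domainLookupEnd: 20,
  connectStart: 20, connectEnd: 50, requestStart: 55,
  responseStart: 180, responseEnd: 230, domComplete: 700, loadEventEnd: 720,
};
console.log(navigationPhases(sample).ttfb); // 125
```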
Page load process: traditional websites vs. modern web applications
It used to be comparatively easy to calculate page load time by checking the window load event (see the diagram below). However, in the age of Angular/React, single-page applications work differently: the window load event does not correspond to the actual page load. The DOM is manipulated after window load, triggering rendering again, and content may still be loading via XHR requests. Because of these factors it's difficult to know when the page has really finished loading, and therefore difficult to measure it with traditional tools or RUM tools.
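One common workaround is a "network quiet" heuristic: treat the page as loaded once no request activity has occurred for some quiet window. The sketch below is my own minimal version of that idea (the function name and request shape are illustrative, not a standard API); auditing tools apply similar heuristics.

```javascript
// Given requests sorted by start time (ms since navigation start), return
// the timestamp after which the network stayed quiet for at least quietMs —
// a rough proxy for "real" SPA load completion.
function networkQuietAt(requests, quietMs) {
  let loadedAt = 0;
  for (const r of requests) {
    // This request started after the quiet window elapsed: stop extending.
    if (r.start >= loadedAt + quietMs) break;
    loadedAt = Math.max(loadedAt, r.end);
  }
  return loadedAt;
}

// Initial document, two XHRs fired from JS, then a late periodic poll:
const requests = [
  { start: 0, end: 400 },     // index.html + bundles
  { start: 450, end: 900 },   // XHR: first data call
  { start: 460, end: 1200 },  // XHR: second data call
  { start: 5000, end: 5100 }, // polling, ignored by the 2s quiet window
];
console.log(networkQuietAt(requests, 2000)); // 1200
```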
Measuring client-side performance
Given these factors, when measuring the client-side performance of a single-page application for optimization, we can split the work into three areas:
- Initial page load performance
This is the time the browser takes to load the application, including static resources such as CSS files, JS files, etc. It is affected by the payload size of all static content, network latency, SSL negotiation (if any), caching, and so on.
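Payload size can be itemized from the Resource Timing API. A sketch, assuming entries shaped like `PerformanceResourceTiming` (in a browser, `performance.getEntriesByType('resource')`); the helper name is mine:

```javascript
// Sum the compressed transfer size of each resource class so the heaviest
// category (script, link/css, img, ...) stands out.
function payloadByType(resources) {
  const totals = {};
  for (const r of resources) {
    totals[r.initiatorType] = (totals[r.initiatorType] || 0) + r.transferSize;
  }
  return totals;
}

// Illustrative entries:
const entries = [
  { name: '/app.js',    initiatorType: 'script', transferSize: 350000 },
  { name: '/vendor.js', initiatorType: 'script', transferSize: 820000 },
  { name: '/main.css',  initiatorType: 'link',   transferSize: 45000 },
];
console.log(payloadByType(entries)); // { script: 1170000, link: 45000 }
```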
- Server-side performance
Yes, server-side performance affects user-perceived performance; that hardly needs further explanation. This is the time the APIs take to respond with data, and it is mostly affected by server-side factors: network latency, server-side business logic, DB performance, payload size, etc.
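API response times as the browser experiences them can also be pulled from Resource Timing entries. A hedged sketch (the helper name is mine; `responseStart - requestStart` approximates server time plus one network leg):

```javascript
// List XHR/fetch calls whose wait time exceeds a threshold, slowest first.
function slowApiCalls(resources, thresholdMs) {
  return resources
    .filter(r => r.initiatorType === 'xmlhttprequest' || r.initiatorType === 'fetch')
    .map(r => ({ name: r.name, waitMs: r.responseStart - r.requestStart }))
    .filter(r => r.waitMs > thresholdMs)
    .sort((a, b) => b.waitMs - a.waitMs);
}

// Illustrative entries (a browser would supply real ones):
const entries = [
  { name: '/api/search',  initiatorType: 'fetch', requestStart: 10, responseStart: 1500 },
  { name: '/api/profile', initiatorType: 'fetch', requestStart: 12, responseStart: 90 },
  { name: '/logo.png',    initiatorType: 'img',   requestStart: 5,  responseStart: 30 },
];
console.log(slowApiCalls(entries, 500)); // [ { name: '/api/search', waitMs: 1490 } ]
```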
- Client-side execution performance
Now that we are clear about how user-perceived performance is impacted, let's look at how we can measure and optimize it. We won't dive into the second point, since there are plenty of well-known tools for measuring server-side performance. Instead, let's take a look at the other two.
Initial page load performance
As I have mentioned above, the way we deliver the payload to the browser causes most of the client-side performance issues here: uncompressed or non-minified JS, CSS, and static content; render-blocking JS and CSS; uncompressed server-side responses. There are many tools available that can help you identify these problems. Below I have listed a few which are free and provide most of the inputs we are looking for:
These will look for bottlenecks in your application such as render-blocking JS and CSS, heavy resources that can be optimized, unused CSS and JS that can be deferred, and critical request chains that need to be optimized.
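As a quick illustration of one thing these tools flag: a classic `<script src>` without `async`, `defer`, or `type="module"` blocks HTML parsing and delays first render. The check below is a sketch over plain objects (my own helper); in a browser you would build the list from `document.querySelectorAll('script[src]')`.

```javascript
// Return the URLs of scripts that block the HTML parser (and first render).
function renderBlockingScripts(scripts) {
  return scripts
    .filter(s => !s.async && !s.defer && s.type !== 'module')
    .map(s => s.src);
}

const scripts = [
  { src: '/analytics.js', async: true },
  { src: '/vendor.js' },               // blocking — a candidate for defer
  { src: '/app.js', type: 'module' },  // modules are deferred by default
];
console.log(renderBlockingScripts(scripts)); // [ '/vendor.js' ]
```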
Client-side execution performance
Here, we look at how to optimize client-side execution performance. How do we check and benchmark it? The combination of tools provided by Chrome DevTools is quite powerful for identifying issues. Let's see how:
- Open the web application or page you have performance problems with in the latest version of Chrome (APM tools can suggest which pages are slow but, as I said earlier, they may not be accurate)
- Now, from Chrome DevTools, open the Performance tab
- Start Profiling
- Perform the action/navigation you want to optimize
- Stop Profiling
- From the results:
- Expand Network
- Expand Main
- Switch to Bottom Up
- The longest block in Network shows you the network call taking the most time (to be addressed at the API level). This view also shows how the requests are fired; you can identify a critical path and try to make it as parallel as possible.
- Main shows the program execution details; again, look for the largest block here to find the bottleneck. You can start by looking at the API (XHR) calls in Network and then check the events that appear in Main immediately after the response completes. You may find the XHR load event in Main (these could be targets for optimization).
- Once you select an item in Main, you can look at the Bottom-Up view and drill down to the application method taking the most time. You can jump directly to the code block from here as well. Please see the example below.
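Once the profiler has pointed you at a suspect method, the User Timing API lets you put a number on it across runs; the resulting measures also appear in the same Performance tab timeline. A minimal sketch (the mark names and `doWork` function are placeholders for your own code):

```javascript
// Placeholder for the slow method the profiler identified.
function doWork() {
  let s = 0;
  for (let i = 0; i < 1e6; i++) s += i;
  return s;
}

// Bracket the suspected slow block with marks, then measure between them.
performance.mark('work-start');
doWork();
performance.mark('work-end');
performance.measure('work', 'work-start', 'work-end');

const [m] = performance.getEntriesByName('work');
console.log(`${m.name}: ${m.duration.toFixed(1)} ms`);
```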