How We Went from 46 to 99 Performance Score to Improve Our Website Speed

This content originally appeared on DEV Community and was authored by Rodion

In 2018, Google announced that page speed would from then on be a ranking factor for both desktop and mobile search results. But ranking isn't the only thing page speed influences. According to research, a 3-second page loading time translates into a 13% bounce rate, so page speed affects not only search results but the user experience as well.

So what do you do if your website is slow? Fixing website performance is not easy, which is why many companies do only minor optimizations just to keep their websites somewhere in the search results. Our Brocoders website ended up in the same situation. But unlike many other businesses, we decided to eliminate every performance issue we had.

Keep reading to find out how we went from a 46 performance score to 99 and why (apart from being very proud of ourselves) having one of the fastest websites in the industry is worth the effort.

Our starting point: what was the problem and why we had to solve it

The old Brocoders site, live since 2011, served us well and we loved it very much, as it was our first child. However, after several years we felt it was time for a change. That's why in 2020 we decided to invest in a new website for our company.
[Image: This is what the old Brocoders website looked like. Forever in our hearts.]

By rebuilding our website, we aimed to achieve three major goals:

  1. Increase conversion rates through faster page loads, better UX, and higher-quality content.
  2. Improve SEO and get better visibility in search engines.
  3. Improve our brand image by introducing new portfolio projects and client reviews.

These goals guided us in deciding what features we had to add. Here is the essential functionality we came up with to achieve our goals:

  • New highly flexible web pages with a great look and feel to demonstrate our services, industries, technologies, and case studies.
  • Updated blog post pages with a search option, tags, and categories.
  • 'Get in touch' and other forms to easily contact us.
  • Numerous third-party integrations: Google Tag Manager, Cookiebot, and Hubspot to name a few.

All in all, our goal was to build a secure static website. That’s why we chose GatsbyJS, a React framework that perfectly aligns with this goal.

Why Gatsby, you ask? The secret sauce behind GatsbyJS is that it turns web development into a seamless, fast experience by pre-building pages into HTML, CSS, and JavaScript at compile time. Built on React, GatsbyJS also allowed us to create our own templates and components, using extensions from the React ecosystem.

Another thing we had to take care of for the new website was the API backend. Our choice fell on Strapi, an open-source content management system. Not only did it allow us to customize our data schemas and extend the API to manage different types of content – text, images, and videos – but it also saved us a great amount of time and money: with its user-friendly admin panel and built-in features, we didn't have to build everything from scratch.

Finally, for server requests, we used GraphQL. A perfect fit for GatsbyJS, it let us define the query structure and request only the necessary fields and relationships, avoiding excessive data, which in turn leads to faster page loading.
[Image: This combination of technologies allowed us to achieve solid website stability and security]
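
To give you an idea, here is a simplified sketch of such a page query – the content type and field names are illustrative, not our exact schema. Gatsby runs it at build time and ships only the requested fields to the page.

import { graphql } from 'gatsby';

// Illustrative page query: only the fields this template actually renders
// are requested, so nothing extra ends up in the page's data payload.
export const query = graphql`
  query ArticleBySlug($slug: String!) {
    strapiArticle(slug: { eq: $slug }) {
      title
      publishedAt
      cover {
        url
      }
    }
  }
`;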

Once done, the new website was there, gratifying us with higher rankings and a refreshed design. At first… But then things gradually went downhill. After adding new pages, making numerous edits, and integrating third-party software, the codebase had become so heavy that our performance scores in Google's Lighthouse, a tool that analyzes web page speed, dropped drastically.
[Image: Lighthouse scores that our website had before we started working on improvements]

With such a low score, Google wouldn't rank us high. And even if it did, users wouldn't wait for the pages to load.

Ericsson Research found that the level of stress from waiting for pages to load on mobile can be compared to watching a horror movie. When talking about performance, every second counts. We knew the pain, so we started improving our site’s performance right away.

Improving website performance step by step

To achieve a significant performance boost, we analyzed the main website issues and started working on solving each of them. Here’s how we overcame the main challenges.

1. Accelerating network connection establishment by 2X

GTmetrix, a tool that visualizes website loading behavior, helped us identify the resources from external domains that caused the longest connection times.
[Image: In purple – the time it took for the Domain Name Server to receive the request for the domain name's IP address; in light green – the server connection time. Both took too long, so we focused on cutting connection time.]

To speed up our website, we made a few technical optimizations. First, we used Resource Hints – HTML attributes that help the browser proactively establish connections before resources are actually requested from the server. For resources from external domains, we added dns-prefetch and preconnect hints that instruct the browser to set up connections with the specified domains in advance. Consequently, when the browser needs resources from those domains, they load quicker since the initial setup is already done.

For internal files like scripts and stylesheets, we applied preload and prefetch hints. These ensure that when users move between pages on our site, the needed resources are already in the local cache, avoiding a round trip to a distant server and making navigation faster.
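
Here is a simplified sketch of what such hints look like in the document head – the domains, file paths, and the use of react-helmet are illustrative assumptions, not a copy of our code:

import React from 'react';
import { Helmet } from 'react-helmet';

// Illustrative sketch: warm up connections to third-party origins and
// preload a critical internal resource before the browser discovers it.
const HeadHints = () => (
  <Helmet>
    {/* external domain: resolve DNS and open the connection ahead of time */}
    <link rel="preconnect" href="https://www.googletagmanager.com" />
    <link rel="dns-prefetch" href="https://www.googletagmanager.com" />
    {/* internal file: fetch it with high priority for the current page */}
    <link
      rel="preload"
      href="/fonts/manrope-latin-400.woff2"
      as="font"
      type="font/woff2"
      crossOrigin="anonymous"
    />
  </Helmet>
);

export default HeadHints;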

These optimizations helped us shorten *Largest Contentful Paint from 2.9 seconds to 0.8 and *First Contentful Paint from 1 second to 0.6, allowing the browser to load resources as quickly as possible.

*LCP, a Core Web Vitals metric that measures the time from when the user starts loading the page until the largest image appears in the viewport.

*FCP, the time from when the user first accesses the page to when the content is rendered on the screen.

2. Achieving better font loading with fast text rendering

On our website, we use both custom and open-source fonts. We host the open-source font Manrope from Google via the Fontsource collection, as this is the fastest way to access the font.

However, there was a delay in loading the font package, even though the text had already arrived with the HTML. To hide this gap between text and font loading, our initial strategy was to show an invisible fallback font until the intended web font fully loaded. It worked, but this approach delayed LCP, so we changed the strategy and opted for fast text rendering.

Here is how it works (effectively the font-display: swap strategy). First, the browser finds an available font within the declared font family and uses it to render the text. Then, once the original font has loaded, the browser seamlessly swaps it in. It's the fastest way to load fonts, but it risks unexpected text shifts because the two fonts differ in metrics. To address these layout shifts and improve the Cumulative Layout Shift (CLS, a metric that measures how much the elements on a webpage move around unexpectedly as it loads), we applied CSS Fonts Module Level 5, a set of specifications with descriptors like size-adjust, descent-override, and line-gap-override, which let the fallback font occupy the same space as the web font.
[Image: CSS Fonts Module Level 5 worked perfectly and now fonts change without any layout shift]
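
For illustration, a fallback font tuned with these descriptors looks roughly like this. The override values below are placeholders – in practice they are adjusted until the fallback occupies the same space as Manrope.

/* Fallback font adjusted to match the web font's metrics (values are illustrative). */
@font-face {
  font-family: 'Manrope-fallback';
  src: local('Arial');
  size-adjust: 106%;        /* stretch the fallback so line lengths match */
  ascent-override: 105%;
  descent-override: 30%;
  line-gap-override: 0%;
}

body {
  /* the browser renders with the fallback first, then swaps in Manrope */
  font-family: 'Manrope', 'Manrope-fallback', sans-serif;
}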

As a result, three metrics – First Contentful Paint, Largest Contentful Paint, and Cumulative Layout Shift – improved. In particular, CLS was reduced from 0.143 to 0.

3. Improving the loading of images and video

Another performance issue we had was enormous network payloads. Site testing revealed that we had to optimize the loading of images and videos, reducing their size without losing quality.
[Image: Yellow Lab Tools testing showed that, among all file types, videos and images weighed our website down the most]

On the website, we use two types of images – vector images (in Scalable Vector Graphics, or SVG, format) and raster images (in PNG and JPG formats). For each type, we applied a different optimization approach.

3.1 Optimization of vector images

Vector images are usually lightweight, but each one required a separate HTTP request to load, which slowed down performance. To address this, we adopted inline SVG, a method that embeds the image markup directly into the HTML. For that we used gatsby-plugin-react-svg, a Gatsby plugin that handles the inlining for us.
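
Once the plugin is configured, an imported SVG behaves like a React component and its markup is inlined into the page. A simplified sketch (file paths and component names are illustrative):

import React from 'react';
// With gatsby-plugin-react-svg, an SVG import becomes a React component,
// so its markup is inlined into the HTML instead of fetched separately.
import ArrowIcon from '../assets/icons/arrow.svg';

const ReadMoreButton = () => (
  <button type="button">
    Read more <ArrowIcon aria-hidden="true" />
  </button>
);

export default ReadMoreButton;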

3.2 Optimization of raster images

To accelerate the loading of PNG and JPG images, we converted them to the modern WebP format, which provides excellent compression with minimal quality loss. Similarly, we converted videos from MP4 to WebM for more efficient compression. WebP and WebM became our primary formats, but older browser versions do not support them, so we kept PNG, JPG, and MP4 fallbacks for browsers that can't handle WebP and WebM.
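
The fallback chain itself is plain markup: the browser picks the first source it supports, and older browsers silently fall back to the JPG and MP4. A simplified sketch (paths are illustrative):

import React from 'react';

// The browser uses the first <source> it supports; older browsers
// fall back to the JPG image and the MP4 video.
const AboutMedia = () => (
  <>
    <picture>
      <source srcSet="/images/office.webp" type="image/webp" />
      <img src="/images/office.jpg" alt="Our office" />
    </picture>
    <video controls preload="metadata">
      <source src="/videos/intro.webm" type="video/webm" />
      <source src="/videos/intro.mp4" type="video/mp4" />
    </video>
  </>
);

export default AboutMedia;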

Next up, we had to optimize image display for different devices. Phones, tablets, laptops, and 4K monitors require images of different dimensions, so serving the same image to both phones and 4K monitors only made our website slower to load. That's why we employed adaptive graphics: we included different image options in the code and let the browser pick the best one based on factors like viewport size, screen resolution, and network speed. Using packages like gatsby-background-image and gatsby-image, we generated image variants for different devices. In the Network tab screenshots below, you can see the "Toggle device toolbar" we used to switch between device display modes.
[Image: The "Toggle device toolbar" in DevTools, where we switched between device display modes while optimizing image loading]
[Image: The same file with reduced size, optimized for different devices]
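
For illustration, here is roughly what an adaptive image looks like with the gatsby-image package – the file name is hypothetical, and we assume the images folder is sourced via gatsby-source-filesystem. The fluid fragment generates several sizes plus WebP variants and lets the browser choose via srcset.

import React from 'react';
import { graphql, useStaticQuery } from 'gatsby';
import Img from 'gatsby-image';

// The fluid fragment makes gatsby-image generate several widths plus WebP
// variants and a tiny base64 placeholder for the blur-up effect; the browser
// then picks the best candidate via srcset/sizes.
const Hero = () => {
  const data = useStaticQuery(graphql`
    query {
      file(relativePath: { eq: "hero.jpg" }) {
        childImageSharp {
          fluid(maxWidth: 1920, quality: 80) {
            ...GatsbyImageSharpFluid_withWebp
          }
        }
      }
    }
  `);

  return <Img fluid={data.file.childImageSharp.fluid} alt="Hero" />;
};

export default Hero;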

3.3 Lazy loading for off-screen images

Finally, we leveraged lazy loading with a blurred-image effect to ensure images load only when the user scrolls to them, optimizing the experience for users with limited data plans or slower internet connections.
[Image: This is how the lazy loading of images works]
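
Stripped of the plugin, the underlying idea is a tiny blurred placeholder shown immediately plus a deferred fetch of the real image. A minimal sketch (paths are illustrative, not our exact code):

import React from 'react';

// loading="lazy" defers the real download until the image nears the viewport;
// the tiny blurred placeholder keeps the layout stable in the meantime.
const TeamPhoto = () => (
  <img
    src="/images/team-1200.webp"
    loading="lazy"
    width="1200"
    height="675"
    alt="Our team"
    style={{ backgroundImage: 'url(/images/team-blur.webp)', backgroundSize: 'cover' }}
  />
);

export default TeamPhoto;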

The results were worth all the sweat – we significantly reduced the weight of the images on the website without sacrificing quality.

[Image: After the improvements, Yellow Lab Tools shows that video – not surprisingly – still takes a larger portion of the page's weight, but the weight of images has decreased significantly (purple section)]

The optimizations greatly improved the user experience and metrics like CLS and LCP. And because all of our image optimization plugins come from the Gatsby ecosystem, the setup stayed simple to maintain and tune.

4. Optimizing caching

We have to confess that caching – a technique where a copy of a response is kept so it doesn't have to be re-downloaded – was initially overlooked during the website development. It was truly a missed opportunity, because effective caching speeds up the website and reduces server load. We decided to catch up on cache optimization, but faced several specific challenges.

Our goal was to speed up the loading of resources by keeping them cached for future visits instead of downloading them every time. To achieve this, we set the Cache-Control HTTP header and specified how long each file should stay in the cache. But then another issue appeared: when we updated the website's content or design, the changes wouldn't show up immediately because the browser kept using the old cached copy.

How did we solve it? We added a hash to the file names, which changes with every file edit. This way, we can keep files in the cache for a long time and still ship changes instantly. As a result, the First Contentful Paint (FCP) metric went from high to medium.
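
To illustrate the resulting policy (our site is served by a static host, so treat this Node sketch as a conceptual example rather than our production config): hashed assets are cached for a year, while HTML is always revalidated.

const express = require('express');

const app = express();

// Hashed assets (e.g. app.3f9c1b.js) can be cached "forever": any edit changes
// the hash and therefore the file name, so stale copies are never served.
app.use('/static', express.static('public/static', { immutable: true, maxAge: '1y' }));

// HTML is always revalidated, so content and design changes show up immediately.
app.use(express.static('public', {
  setHeaders: (res, filePath) => {
    if (filePath.endsWith('.html')) {
      res.setHeader('Cache-Control', 'public, max-age=0, must-revalidate');
    }
  },
}));

app.listen(3000);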

Now, we're considering another type of caching – browser (client-side) caching. Unlike server caching, which requires a connection to the server and uses bandwidth to load the response, browser caching lets users access the page without a network connection at all. But it has its limitations: if the user's device is running out of storage space, the browser may evict older entries to make room for new ones.
[Image: A comparison of how server-side caching and client-side caching work]
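
If we go this route, it would most likely mean a service worker; in the Gatsby world that is essentially a one-line addition – again, something we are considering, not something we have shipped.

// gatsby-config.js
module.exports = {
  plugins: [
    // Registers a service worker that caches pages and assets in the browser,
    // so repeat visits load instantly and keep working without a network.
    'gatsby-plugin-offline',
  ],
};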

5. Eliminating large JavaScript bundles

Bundles are collections of files – usually JavaScript, CSS, and other assets – grouped into a single file for more efficient delivery. As our website grew in complexity, our bundles kept expanding and weighing the site down. It was high time to identify the problem areas and get rid of them.

There are some handy tools for identifying and addressing problematic bundles. Bundlephobia shows how much an npm package contributes to bundle size, helping you avoid oversized dependencies. Import Cost, a VS Code extension, calculates the "cost" of imported packages right in the editor, helping you make informed decisions. As part of our optimization strategy, we swapped out hefty JS libraries – for example, replacing the widely used classnames package with clsx, a faster and smaller drop-in replacement.
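
The swap itself is painless because clsx mirrors the classnames API for the common cases. A small illustration:

import clsx from 'clsx';

const isActive = true;
const size = 'large';

// clsx accepts the same argument shapes as classnames, so call sites stay the same.
const className = clsx('btn', { 'btn--active': isActive }, size && `btn--${size}`);
// => 'btn btn--active btn--large'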

Then, with the Webpack Bundle Analyzer plugin, we discovered problematic areas in bundles.
a Bundle Analyzer diagram before
Bundle Analyzer highlighted our biggest bundles – client-location and map-with-flashing-dots

To break down these piles of files, we split big bundles into smaller parts using code splitting and lazy loading. Webpack's built-in code splitting let us turn an import statement into an import() function call that points to a file path instead of importing the file directly. The call returns a promise – a commitment that the file will be loaded; when the code that needs the module runs, the promise resolves and the chunk is downloaded.

For non-critical views, markup, and JS, we used dynamic imports, which reduce the initial size of the webpage. Dynamic here means the website decides whether to load additional files based on specific conditions, making sure nothing disrupts the user experience.
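
A simplified sketch of the pattern (the renderMap helper is illustrative, not our exact code): webpack sees import() and splits the heavy map module into its own chunk, which is fetched only when it is actually needed.

// webpack splits the dynamically imported module into its own chunk;
// the promise resolves once the chunk has been downloaded and evaluated.
const showMap = async (container) => {
  const { renderMap } = await import('./map-with-flashing-dots');
  renderMap(container);
};

// called only when the map actually needs to appear, e.g.:
// showMap(document.getElementById('map'));
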
[Image: The result of our optimizations: streamlined loading with no isolated large bundles]

After splitting large bundles into smaller parts, we successfully lightened the page and eliminated all large file collections.

6. Minimizing HTML and CSS with code compression

Code compression was a game-changer for us. By minifying the code and removing unnecessary white space, we got smaller files, faster downloads, and reduced bandwidth consumption. Our server now delivers website files in gzip format, significantly improving important metrics like FCP and FID (First Input Delay, the delay between the user's first interaction and the browser's response) from low to medium performance levels.
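
If you serve the files yourself rather than through a static host or CDN, the same effect can be achieved with the standard compression middleware for Node – an illustration rather than our production setup:

const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression());               // gzip HTML, CSS, and JS responses on the fly
app.use(express.static('public'));    // serve the built static site
app.listen(3000);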

7. Code quality and memory leaks

Finally, we reviewed our code and discovered some sneaky memory leaks. Objects were staying in memory even when they were no longer needed, cluttering it up.

To fix this, we applied two methods. If an event listener (a mechanism that waits for specific occurrences, such as user interactions or system events, and responds to them) is only needed once, we pass the { once: true } option. It ensures that after the listener fires, it is removed automatically, preventing any memory issues. As the second method, we explicitly remove event listeners with removeEventListener() before removing the element or once the listeners are no longer needed. This ensures a clean disconnection between elements and functions and avoids memory leaks.
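
A minimal sketch of the first method (the element and handler names are illustrative):

const acceptButton = document.querySelector('#accept-cookies');
const acceptCookies = () => console.log('cookies accepted');

// With { once: true } the browser detaches the listener automatically
// after the first call, so nothing is left behind to leak.
acceptButton.addEventListener('click', acceptCookies, { once: true });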

Another addEventListener option we use is { passive: true }. We adopted it for scroll handlers to avoid interface hiccups and ensure a smoother user experience, as in the hook below.

useEffect(() => {
  // set the initial state from the current scroll position
  setScrolled(document.documentElement.scrollTop > 50);
  // { passive: true } tells the browser the handler never calls preventDefault(),
  // so scrolling doesn't have to wait for the handler to finish
  window.addEventListener('scroll', handleScroll, { passive: true });
  return () => {
    // restore vertical scrolling and detach the listener on unmount
    document.body.style.overflowY = 'scroll';
    window.removeEventListener('scroll', handleScroll);
  };
}, []);

Before and after: how performance boost impacted our website

Truth be told, our starting point was far from good – a website wrestling with sluggish load times and an overloaded server that drastically impacted the user experience. But armed with insights and optimization techniques, we did our best and received impressive results. Were we happy with them? Absolutely.
[Image: PageSpeed Insights handed us a shiny report card, showcasing significant improvements]
[Image: The 28-day performance data showed impressive results – all metrics improved several-fold compared with the starting point]

All these improvements wouldn't have been possible without hours and days of dedicated work, but something else greatly contributed to our success – the right tools. Some of them saved us a lot of time along the way, and we can't help but mention them.

Tools for performance optimization: PageSpeed Insights and Lighthouse

In performance analysis and improvement, two Google instruments stand out: PageSpeed Insights and Lighthouse. These tools analyze various aspects of a web page to provide insights into its speed, user experience, and overall performance. Here are the metrics that they both consider:

  • Largest Contentful Paint (LCP) - measures the time it takes for the largest content element (such as an image or text block) in the viewport to become visible to the user.
  • First Contentful Paint (FCP) - measures the time it takes for the browser to render the first piece of content, such as text or an image.
  • First Input Delay (FID) - measures the delay between a user's first interaction (such as clicking a button) and the browser's response to that interaction.
  • Cumulative Layout Shift (CLS) - measures the sum of all individual layout shift scores that occur during the page's lifespan. A layout shift occurs when visible elements on a page move unexpectedly.
  • Interaction to Next Paint (INP) - measures the time it takes for the browser to respond to user interactions (such as clicks or taps) by updating the visual content on the page.
  • Time to First Byte (TTFB) - measures the time it takes for the browser to receive the first byte of data from the server after a request is made.
  • Total Blocking Time (TBT) - measures the total amount of time during which the main thread of the browser is blocked and unable to respond to user input.
  • Time to Interactive (TTI) - measures the time it takes for a page to become fully interactive, meaning all necessary resources are loaded, and the page responds promptly to user input.

You can use either of these tools to get insights into your website's performance. Lighthouse is more flexible and provides a more detailed lab report, while PageSpeed Insights focuses on individual pages and supplements the lab data with real-user field data where it is available.

Though the most commonly used, these two aren't the only tools we recommend. Here are a couple of resources you'll find useful on your performance optimization journey.

Bonus tools for a deeper dive

GTmetrix: a powerhouse that not only audits pages but also visualizes loading behavior in a digestible format, making performance improvements tangible.

Yellow Lab Tools: an insightful dashboard that classifies and evaluates dozens of metrics. It not only pinpoints issues but also provides detailed recommendations for optimization.

These tools have been our guiding lights, helping us enhance performance and offer a superior user experience. Keep them in mind when you start to improve your website performance.

Bottom line

If your business depends on the website, you can’t sleep on its performance. This factor can move you to the top of the Google results or, on the contrary, make your users leave for faster websites that give them a better experience. To help you avoid this scenario, we shared the story of our website transformation.

Our performance odyssey isn't over, as there will always be room for improvement. Lately, we've noticed that GatsbyJS is not having the best of times, and we're deliberating a switch to Next.js. But that's a subject for another article.

But just as there are no two identical businesses, there is no single recipe for improving site performance. Every case is unique, and you need to address the specific challenges behind your issues. Not an easy task, but we can help. If you want to give your website a performance boost, drop us a note, and we'll see what we can do for you.

