Using NextJS and AWS to Scale Up Headless BigCommerce Stores

This post was written for the Clear Horizon Digital Blog. Original post: Optimising Headless BigCommerce

Overview

Today we're exploring the patterns we can use to build highly scalable and performant online stores. Working with the BigCommerce API, NextJS and AWS, we'll lay out an architectural pattern that leverages four caching strategies to support rapidly growing custom eCommerce stores.

Headless eCommerce

Agencies or businesses working in eCommerce will likely have come across headless architectures. Headless eCommerce is the concept of decoupling your store's frontend and backend, allowing for custom store development without the hassle of maintaining and supporting a backend with regularly changing compliance requirements.

We're often asked to build out highly scalable eCommerce platforms, so we decided to partner with BigCommerce as our eCommerce PaaS provider. Due to our managed service offering, this is an ideal scenario - the flexibility to build without guide rails, while not having to worry about maintaining a fully featured eCommerce backend.

Working with the BigCommerce API

As a certified BigCommerce agency partner, we routinely build out custom eCommerce platforms using the BigCommerce API.

Any third party integration, though, comes with its own complexities. For BigCommerce it's vital that we correctly manage how and when we're calling the API, for two major reasons:

1: Calls to BigCommerce are fast, but still slower than a cached response.
For example, a REST API request to BigCommerce that fetches 100 products with descriptions will come in at around 400-500ms on average. By contrast, that latency can be reduced to sub-120ms by leveraging a CDN cache, or under 20ms when using a browser cache.

2: Requests to BigCommerce will contribute to their server load and rate limit.
BigCommerce enforces rate limits and fair use limits for their APIs. Depending on your store plan, this can vary significantly - below is a list of the limitations at the time of writing. For an up to date list you can check the BigCommerce pricing plans.

  • Standard plans ($29.95 / month) and Plus plans ($79.95 / month) are subject to a 150 requests / 30s rate limit.
  • Pro plans ($299.95 / month) are bumped up to a 450 requests / 30s rate limit.
  • Enterprise plans are given unlimited access, however there's still an expectation that you won't knock over their infrastructure with huge numbers of requests.

Ultimately the more caching we can implement, the faster our store will be and the less concern we need to have over BigCommerce rate limits and increasing infrastructure costs.
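As an illustrative precaution, the API layer can also watch the rate limit headers BigCommerce returns and back off before the window is exhausted. The header names, environment variables and base URL below are assumptions based on the documented v3 REST API, not a definitive implementation:

```typescript
// lib/bigcommerceClient.ts - illustrative rate-limit aware fetch wrapper
// Header names and env vars are assumptions based on BigCommerce's documented v3 API.
const API_BASE = `https://api.bigcommerce.com/stores/${process.env.BC_STORE_HASH}/v3`;

export async function bcFetch(path: string, token: string): Promise<unknown> {
  const res = await fetch(`${API_BASE}${path}`, {
    headers: { 'X-Auth-Token': token, Accept: 'application/json' },
  });

  // Remaining requests in the current window, and how long until it resets
  const remaining = Number(res.headers.get('X-Rate-Limit-Requests-Left') ?? '1');
  const resetMs = Number(res.headers.get('X-Rate-Limit-Time-Reset-Ms') ?? '0');

  // If we've been throttled (or are about to be), wait out the window and retry once
  if (res.status === 429 || remaining === 0) {
    await new Promise((resolve) => setTimeout(resolve, resetMs));
    return bcFetch(path, token);
  }

  return res.json();
}
```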

GraphQL vs. REST

One final thing to mention before getting stuck into our caching options is that BigCommerce exposes both a REST and a GraphQL API. Utilising GraphQL gives us the ability to fetch multiple sets of data from a single endpoint - thereby reducing the number of requests we're making (e.g. fetching a list of products, categories and brands in one request).
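As a sketch, a single Storefront GraphQL request might combine those three datasets. The field selection and token handling here are assumptions based on the public Storefront schema:

```typescript
// lib/storefront.ts - hypothetical combined Storefront GraphQL query
const STOREFRONT_URL = 'https://your-store.mybigcommerce.com/graphql'; // assumed store URL

const query = /* GraphQL */ `
  query StorefrontData {
    site {
      products(first: 10) {
        edges { node { entityId name } }
      }
      categoryTree { entityId name }
      brands(first: 10) {
        edges { node { entityId name } }
      }
    }
  }
`;

export async function fetchStorefrontData(token: string) {
  // One request to BigCommerce instead of three separate REST calls
  const res = await fetch(STOREFRONT_URL, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`, // Storefront API token
    },
    body: JSON.stringify({ query }),
  });
  return res.json();
}
```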

The BigCommerce GraphQL API isn't currently as fully featured as its REST counterpart, so you'll likely find yourself needing to use a combination of both for your storefront.

Our Caching Options

We're going to be looking at four caching techniques we use to effectively work with the BigCommerce API for online stores that need to be scalable and performant.

  • ISR (Incremental Static Regeneration)
  • Browser Level
  • CDN Level
  • API Level

Easy Wins: ISR with NextJS

ISR or Incremental Static Regeneration “allows you to create or update static pages after you've built your site”. ISR is a dream feature for eCommerce frontends, where there are often thousands of product pages that rarely change, but must be fetched from an external CMS.

Pages with ISR enabled have a configurable expiry or 'revalidate' time - in essence they provide a JSON cache of the initial page load props. Requests to the page will follow one of three routes. For this example, let's assume a revalidate time of 30 seconds.

  • 1) The page has never received a request before. The page is loaded as normal, but the API requests made are then added to a server side cache.
  • 2) The page has received a request in the previous 30 seconds. The page is loaded and initial props for the page are served from the NextJS server side cache.
  • 3) The page has not received a request in the past 30 seconds. The page is loaded and the stale cache is served. Behind the scenes NextJS will refresh the cache for the next user.

This pattern of serving a cached set of initial page props is an easy win - there's next to no configuration required, and it enables us to bundle multiple API requests into a single cache. If you notice you're brushing up against the BigCommerce rate limits, you can easily tweak the revalidate time to quickly change your caching policy for individual pages.
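As a minimal sketch, an ISR-enabled product page in the pages router might look like this; the fetchProduct helper and the 30 second revalidate value are illustrative assumptions:

```tsx
// pages/products/[slug].tsx - illustrative ISR setup
import type { GetStaticPaths, GetStaticProps } from 'next';

// fetchProduct is a hypothetical helper that calls our API proxy / BigCommerce
import { fetchProduct } from '../../lib/bigcommerce';

export const getStaticPaths: GetStaticPaths = async () => ({
  paths: [],            // generate pages on first request
  fallback: 'blocking', // route 1: build the page, then cache it
});

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const product = await fetchProduct(String(params?.slug));

  return {
    props: { product },
    revalidate: 30, // routes 2 & 3: serve cached props, refresh after 30s
  };
};

export default function ProductPage({ product }: { product: { name: string } }) {
  return <h1>{product.name}</h1>;
}
```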

For pages that warrant a long revalidate time but also need to be up-to-date after modifications (e.g. popular product pages undergoing urgent pricing changes), there's also the option of on-demand revalidation to manually purge the NextJS cache.
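A hedged sketch of such an endpoint, assuming a pages-router API route guarded by a shared secret (the query parameters and secret name are assumptions):

```typescript
// pages/api/revalidate.ts - hypothetical on-demand revalidation endpoint
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Guard with a shared secret so only trusted systems can purge the cache
  if (req.query.secret !== process.env.REVALIDATE_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }

  try {
    // e.g. ?path=/products/some-popular-product
    await res.revalidate(String(req.query.path));
    return res.json({ revalidated: true });
  } catch (err) {
    return res.status(500).send('Error revalidating');
  }
}
```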

Why can't we just use ISR everywhere?

ISR is an incredible tool to have in your arsenal, but it needs to be used in combination with other caching strategies.

In some cases you'll need more flexibility than ISR can provide. One common scenario is the use of query parameters. Almost all eCommerce websites will use query parameters in one way or another, be it for search functionality, filtering or pagination. With no current support for query parameters in ISR, you're left with the option of converting all query parameters to path parameters or disabling ISR for that page.

There are times where ISR can be intrinsically slower than alternatives. If you have multiple pages that are making the exact same request, it's often wiser to extract that request out into a browser-side call. For example, a promotional banner that appears on all pages should be fetched using the useSWR hook, then stored in state or cached by the browser using a cache-control header.
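As an illustrative sketch, the banner component could look something like this; the /api/banner endpoint and response shape are assumptions:

```tsx
// components/PromoBanner.tsx - illustrative client-side fetch with SWR
import useSWR from 'swr';

const fetcher = (url: string) => fetch(url).then((res) => res.json());

export default function PromoBanner() {
  // The same request on every page is served from the SWR / browser cache,
  // rather than triggering a fresh ISR fetch per page
  const { data, error } = useSWR<{ message: string }>('/api/banner', fetcher);

  if (error || !data) return null;
  return <div className="promo-banner">{data.message}</div>;
}
```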

Architectural Considerations

One approach we always recommend when building headless BigCommerce stores is to add a basic API proxy layer between the frontend and the BigCommerce API. This gives us more fine-grained control over the data that is provided to the frontend and opens the door to database, browser-side and CDN caches.

The following HLD is a simplified version of the architecture that we recommend for AWS hosted headless BigCommerce projects.

  • Web Browser: The project 'frontend'; this is where client-side requests will be sent from. For the purposes of this HLD we haven't included NextJS server-side requests, which would be made from inside the AWS infrastructure.
  • CloudFront: The AWS managed CDN. This is our first server-side caching opportunity and routes requests to the API origin.
  • API Layer: The flavour of the API isn't relevant for this implementation. We tend to prefer NodeJS in Lambda for its pricing, performance and scalability.
  • ElastiCache: An ephemeral database for use cases where we are augmenting data from the BigCommerce API with separately cacheable data. Note: depending on the scale of the project, ElastiCache is not always necessary - we tend to include it when there are additional third party data providers separate to BigCommerce.
  • CloudWatch: AWS's log aggregation / visualisation tool. This is where we'll set up alerting for API level errors / warnings.

[HLD diagram]

1 - Browser Caching

By proxying requests via a simple NodeJS API, we gain the ability to add the cache-control header to responses. Using the cache-control header in combination with the max-age and stale-while-revalidate directives, we can instruct the customer's browser to store a response in its local cache.
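In an Express-style NodeJS proxy that might look like the following sketch; the route, the fetchProducts helper and the TTL values are illustrative assumptions:

```typescript
// routes/products.ts - illustrative proxy endpoint with browser caching
import express from 'express';

// fetchProducts is a hypothetical wrapper around the BigCommerce REST API
import { fetchProducts } from '../lib/bigcommerce';

const router = express.Router();

router.get('/api/products', async (_req, res) => {
  const products = await fetchProducts();

  // Cache in the browser for 60s, then serve stale for up to 5 minutes
  // while the browser refetches in the background
  res.set('Cache-Control', 'public, max-age=60, stale-while-revalidate=300');
  res.json(products);
});

export default router;
```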

This is handy when we know a single user will make the same request multiple times and the response won't frequently change. Unlike database or CDN caches, browser caching doesn't incur any additional cloud charges as all responsibility is handed off to the customer's web browser.

Because browser caches are kept on the client side, they're by far the fastest. There is no network overhead, which results in a significant performance increase. Browser caching is a cheap and quick way of reducing the number of requests we make to both our own servers and the BigCommerce API - further amplifying performance and steering us away from hitting the rate limits.

Why can't we just use browser caching everywhere?

Browser caches only apply to a single user's requests, so we'll only see significant gains where a single user is likely to make the same request multiple times. If multiple users each make the same request only once, browser caching won't have any effect.

When using the cache-control header we must also consider the risk of urgent changes. Because caches are handed off to the browser, they're out of our control to purge. Long-lived caches must be waited out - even if an emergency change to an API response is made (e.g. pricing errors), the browser won't pick it up until the max-age has elapsed.

2 - CDN Level Caching

CDN level caching is arguably the most effective all-round cache. By standing up a Content Delivery Network (CDN) like CloudFront in front of our APIs, we can apply granular caching rules to individual paths. This short-circuits the request to both BigCommerce and our NodeJS API layer by returning data directly from CloudFront.

Where possible we should always prefer a CDN level cache over an ephemeral database cache due to the reduced round trip time and lower compute charges. Unlike browser caching, a CloudFront cache can be manually purged in the case of urgent API response changes (e.g. pricing errors, incorrect promotions, etc).

CloudFront caching will always outperform ElastiCache in terms of speed and cost due to the shorter journey a given request will take. It's a great first port of call for improving performance and reducing the risk of rate limiting.
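As a hedged sketch using the AWS CDK, a per-path cache behaviour in front of the API origin might look like this; the TTLs, path pattern and origin domain are assumptions:

```typescript
// stack.ts - illustrative CloudFront cache behaviour for /api/products*
import * as cdk from 'aws-cdk-lib';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'StorefrontCdn');

// Cache product responses at the edge for 60s, varying on the query string
// so pagination and filter parameters get their own cache entries
const productCachePolicy = new cloudfront.CachePolicy(stack, 'ProductCachePolicy', {
  defaultTtl: cdk.Duration.seconds(60),
  minTtl: cdk.Duration.seconds(0),
  maxTtl: cdk.Duration.minutes(5),
  queryStringBehavior: cloudfront.CacheQueryStringBehavior.all(),
});

const apiOrigin = new origins.HttpOrigin('api.example-store.com'); // assumed API domain

new cloudfront.Distribution(stack, 'StorefrontDistribution', {
  defaultBehavior: { origin: apiOrigin },
  additionalBehaviors: {
    '/api/products*': {
      origin: apiOrigin,
      cachePolicy: productCachePolicy,
    },
  },
});
```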

Why can't we just use CDN level caching everywhere?

We can… sort of. In some cases though, it's more effective to store cached data closer to the API handler. There are a couple of common scenarios where this applies:

  • Where there are multiple data sources that need to be cached at different rates (for example, if we were to augment our frequently changing product data with relatively static content from a separate CMS).
  • Where multiple API handlers need access to the same cached data, i.e. data that is being constantly ingested from a third party live pricing service.

3 - Ephemeral Database Caching

Utilising an in-memory database is a popular option for caches that may need to be accessed by multiple API endpoints. If you're hosting on AWS the obvious answer is ElastiCache with Redis.

ElastiCache is the most expensive cache, as requests are still being made to our API endpoint and that endpoint is still going to need to make a connection to ElastiCache.

Ephemeral database caching is particularly valuable when we're having to combine data from multiple sources, such as BigCommerce and a CMS. We're able to make the requests once, then store any cacheable results in ElastiCache. This gives us the added benefits of being cached in one central database, so multiple API endpoint handlers can access it independently.
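A hedged sketch of this read-through pattern with the ioredis client; the cache key, TTL and data-source helpers are assumptions:

```typescript
// lib/productCache.ts - illustrative read-through cache in Redis / ElastiCache
import Redis from 'ioredis';

// fetchBigCommerceProducts and fetchCmsContent are hypothetical helpers
import { fetchBigCommerceProducts, fetchCmsContent } from './sources';

const redis = new Redis(process.env.ELASTICACHE_URL ?? 'redis://localhost:6379');

export async function getEnrichedProducts(): Promise<unknown> {
  // Serve from the shared cache if any handler has already built it
  const cached = await redis.get('products:enriched');
  if (cached) return JSON.parse(cached);

  // Combine BigCommerce data with CMS content, then cache the merged result
  const [products, content] = await Promise.all([
    fetchBigCommerceProducts(),
    fetchCmsContent(),
  ]);
  const enriched = { products, content };

  // Any API handler can now read the same cache; expire after 60 seconds
  await redis.set('products:enriched', JSON.stringify(enriched), 'EX', 60);
  return enriched;
}
```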

Why can't we just use Ephemeral Database caching everywhere?

This type of caching is the most expensive, both in terms of price and time. Where possible we should avoid lengthening the request journey and instead short-circuit it at either the CDN level or the browser level. In eCommerce we consider it a last resort for when we can't implement the cache-control header or a CloudFront cache.

Summary

In this post we've talked about a number of ways that we can utilise caching, NextJS and AWS to optimise our headless BigCommerce architecture. Having a quick storefront is critical for driving customer conversion and increasing user retention. Understanding how to effectively work with the BigCommerce API further expands the ways that we can build features into an online store.

About Us

Clear Horizon Digital is a Leeds based Software Consultancy with a track record of building high converting custom eCommerce platforms. As a Certified BigCommerce Agency Partner, we deliver headless eCommerce solutions for companies that are looking to scale up their online offering through a bespoke platform.
