This content originally appeared on Level Up Coding - Medium and was authored by Antonio Lagrotteria
Migration of legacy frontends is becoming a very common activity in software engineering, partially due to the relentless evolution of the JavaScript ecosystem and its frameworks over the past decade. In this context, micro-frontends are emerging as a valid option for rewriting legacy applications in a newer framework. Let's look at an effective yet simple approach where you can apply a canary release strategy via CloudFront functions, strangling the legacy app and migrating it to a micro-frontend app.
Canary in a coal mine to the rescue
Rewriting production apps, though sometimes necessary, is a concrete risk for the business and for customer experience. We need conservative approaches that allow a partial switch: canary releasing.
The goal is to slowly move a small percentage of user traffic from the legacy app to the new one. Similarly to the poor bird's job, canary releases are a very effective approach to:
- support quick rollback strategies when something goes south.
- provide instant feedback, giving quick insights into the functionality, scalability and performance of a newly released product.
- foster a culture of learning from failures, which can then be analyzed through the collected data.
Why CloudFront functions
CloudFront functions are a recent addition to the already humongous portfolio of AWS features. In layman's terms, they are a sort of Lambda@Edge on steroids: they scale more easily, run at very low latency, and execute geographically closer to customers, at edge locations rather than at regional edge caches. Our function will redirect users between the legacy and the new app based on a cookie strategy.
As shown above, the assumption is that we have both an existing production legacy application (deployed on AWS or on-premise) and a new web app based on a micro-frontend architecture (already in production on AWS, but not yet publicly accessible). Let's see how CloudFront functions dispatch traffic towards the two setups.
Setting up CloudFront functions
Functions can be triggered after CloudFront receives a request from a viewer (viewer request) and before CloudFront forwards the response to the viewer (viewer response).
When creating functions, we need to associate them with a CloudFront distribution that points to the S3 origin where the default legacy app lies. Let's see it in action below (AWS guide here).
Important: for this PoC I opted to point to the legacy app's origin, but it can easily be swapped to use the micro-frontend one.
Let's walk through the viewer request function; a sketch of its content follows the list below.
The function itself checks whether an X-Source cookie is present in the request.
- The percentage of users is driven by the experiment traffic constant: when set to 0, all traffic goes to the legacy app; when set to 1, all traffic goes to the micro-frontend.
- If X-Source is not present (first-time visitor), the function randomly chooses its value (either the legacy or the micro-frontend app) and assigns it to the request cookie.
- If X-Source is present, the function redirects the user to the app defined by the cookie's value, either legacy-app or mfe-app.
- When returning the request, we append index.html to the root URL of the CloudFront distribution, as SPAs need this trick, as shown here.
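A minimal sketch of such a viewer request function, written for the CloudFront functions JavaScript runtime, could look like the one below. The X-Source cookie, the random assignment and the legacy-app/mfe-app values follow the description above, while the EXPERIMENT_TRAFFIC constant, the folder prefixes and the cookie handling details are illustrative assumptions rather than the PoC's exact code.

```javascript
// Viewer request handler (sketch). EXPERIMENT_TRAFFIC and the folder prefixes
// are illustrative assumptions, not the article's exact code.
var EXPERIMENT_TRAFFIC = 0.05; // fraction of first-time visitors sent to the micro-frontend

function handler(event) {
    var request = event.request;
    var cookies = request.cookies || {};
    var source;

    if (cookies['X-Source'] && cookies['X-Source'].value) {
        // Returning visitor: keep serving the app already assigned in the cookie.
        source = cookies['X-Source'].value;
    } else {
        // First-time visitor: randomly assign the legacy app or the micro-frontend.
        source = Math.random() < EXPERIMENT_TRAFFIC ? 'mfe-app' : 'legacy-app';
        // Remember the choice on the request so downstream logic can read it.
        request.cookies['X-Source'] = { value: source };
    }

    // SPAs are served from index.html, so rewrite the URI to the chosen app's entry point.
    request.uri = '/' + source + '/index.html';

    return request;
}
```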
Similarly, let’s see the content of the viewer response function:
This is a simpler function as it sets the cookie with the right value in the response after we come back from the cache.
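A corresponding sketch of the viewer response function could look like the following. It assumes the request object available at viewer response time exposes the URI rewritten by the viewer request function; if that does not hold in your setup, the chosen variant could be carried in a custom request header instead.

```javascript
// Viewer response handler (sketch). Cookie attributes are illustrative assumptions.
function handler(event) {
    var request = event.request;
    var response = event.response;

    // Derive the assigned app from the rewritten URI (/mfe-app/... or /legacy-app/...).
    var source = request.uri.indexOf('/mfe-app/') === 0 ? 'mfe-app' : 'legacy-app';

    // Persist the choice so the next visit lands on the same app.
    response.cookies['X-Source'] = {
        value: source,
        attributes: 'Path=/; Max-Age=86400' // illustrative lifetime of one day
    };

    return response;
}
```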
Result
When first-time users access the distribution URL, they are assigned a cookie which determines which app they are redirected to. In the image below, I manually modified the X-Source cookie in the web console to speed up the process. The legacy app is the orange login one, while the new micro-frontend is the typical Angular blue one.
Limitations & ideas
The following limitations could be addressed and ideas explored:
- The biggest limitation, compared to Lambda@Edge, is that CloudFront functions cannot fetch external configuration. It would be convenient to store additional configuration in S3 or DynamoDB without having to update the distribution code, but the trade-off is probably that, by avoiding such calls, CloudFront functions achieve superior scaling and lower latency.
- Use console.log statements to monitor functions. Log groups will be visible in CloudWatch, only in the North Virginia (us-east-1) region.
- Traffic configuration could be more complex than shown in this PoC; in that case, a Lambda@Edge function could be more appropriate.
- A similar approach can be used to replace some URIs of the old application with new ones, as sketched after this list.
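As a rough illustration of that last idea, a viewer request function could route only selected, already-migrated paths to the micro-frontend while everything else keeps hitting the legacy app. The path prefixes below are hypothetical examples, not taken from the PoC.

```javascript
// Sketch of route-by-route strangling: the listed prefixes are hypothetical.
var MIGRATED_PREFIXES = ['/profile', '/settings'];

function handler(event) {
    var request = event.request;
    var migrated = false;

    // Check whether the requested path has already been migrated to the micro-frontend.
    for (var i = 0; i < MIGRATED_PREFIXES.length; i++) {
        if (request.uri.indexOf(MIGRATED_PREFIXES[i]) === 0) {
            migrated = true;
            break;
        }
    }

    // Route migrated paths to the micro-frontend, everything else to the legacy app.
    request.uri = (migrated ? '/mfe-app' : '/legacy-app') + '/index.html';

    return request;
}
```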
Summary
Rewriting efforts are costly, and businesses should be careful and have very good business and technical reasons to do so (unsupported frameworks, revamping products, team growth, a business case in danger, etc.). Canary releases and CloudFront functions allowed us to re-evaluate the performance and usability of the application, potentially revisit user flows, and gain insights and learnings about architectural decisions (micro-frontends) through data.
References
- Introducing CloudFront functions — Run Your Code at the Edge with Low Latency at Any Scale
- Building serverless micro frontends at the edge (Re:Invent 2019)