Why local development for serverless is an anti-pattern

This content originally appeared on DEV Community and was authored by Gareth McCumskey

In the serverless community, individuals and teams spend a lot of time and effort attempting to build an environment that is a replica of the cloud. Why? Because this is what we have always done. When we start our careers building applications for the web, we are told we need a local development environment on our own machines, and that we should do our work against that environment before pushing to our code repository.

But I am going to argue that this supposedly absolute requirement for getting up and running when building applications is not only unnecessary in the serverless world but actually harmful.

Let's start by considering the whys. Why do we create local development environments in the first place? What purpose do they actually serve?

If you look back at where building for the web came from, we used to exist in a world where our code and scripts were exceedingly minimal and work was essentially done directly on the machines that served our application to the web. Why? Because these machines were often very specialised, impossible to replicate without great expense, and 100% uptime was not necessarily the biggest goal at that stage. So why not? It's easy to just edit files directly on that remote machine.

Fast forward a few years and we are in a position where we need to make changes multiple times a day to an application that must not go down if we can avoid it. Editing directly on production becomes scary, because we would really like to test our changes first.

Luckily, by this stage, a lot of the infrastructure for the web had become commoditised; we could take a regular consumer computer, install the same (or similar) applications on it to simulate the remote environment, and test our application before pushing to the production server.

However, things couldn't stay this way. Traffic increased, and single machines were soon no longer enough to handle the load that the growth of the Internet created. Clusters of machines with comparatively complex architectures were needed, both to increase request throughput and to add resilience to failure, as downtime became more and more costly. The environment replicated on a developer's machine was no longer a pretty-close replica.

This is where a lot of staging and development environments come from. The thinking is: let developers develop on their local machines as they always have, because that's what they are used to, and we will spin up as close a replica of production as we can to test against and make sure nothing breaks. Even if it's costly to the business, it's better than downtime.

The cloud certainly helped a lot here as well; if you can create staging environments on demand and only keep them up when needed, it's not quite as expensive as keeping a parallel development cluster in a server rack.

However, the issue is that our local machines were, at best, only occasionally accurate to the production cluster. The architectures were just too complex to ever hope to replicate locally, which made local testing largely redundant and meant developers were constantly pushing code to the shared staging server to test anything. Not to mention that, in teams, this resulted in a lot of stepping on toes and waiting for your turn to test your changes!

What was really needed was a replica of production for every developer in the team. But with production clusters running multiple virtual machines, load balancers, relational databases, caches, etc., this was cost prohibitive.

Then containers arrived. Finally! Now we can package up the complexity of our production systems into neat little blocks that don't interfere with each other and we can get closer to production by running them on our own development machines.

Except they do interfere with each other, and they added huge amounts of complexity for developers to handle and worry about. Expensive engineers should be building features and generating revenue instead of managing their development environments, and it STILL wasn't as accurate a representation of the production environment as it should have been!

At one point, I was an engineer at an e-commerce organisation that siloed off a single developer for two months to replicate production as a collection of Docker containers we could just install on our machines. The end result was a process that took 30 minutes just to install and required the entire development team to have their hardware upgraded to at least 16 GB of RAM. Running Nginx, Elasticsearch, Redis and MySQL on a single machine apparently uses a lot of memory; who would have thought? And we STILL had constant issues where we thought our code was ready to be tested against the staging environment and it just wasn't.

This is just one example of many I have to share.

The TL;DR of the above? We moved to local testing because testing against production became too dangerous, tried to replicate production locally, and failed miserably; today we are, essentially, still testing against production.

And now, in the world of serverless development, here we are once again, trying to make things run locally that really shouldn't. And this isn't a collection of virtual machines or Docker containers we can kind of get running locally with some semblance of accuracy. These are cloud services, most of which have no official way to run locally and probably never will. The emulation techniques used in tools like LocalStack are impressive, but they are not an exact replica of the cloud; they are the best effort someone has made to let us kind of, sort of test these services locally against something resembling the cloud version. Not to mention all the aspects of the cloud (and of distributed application architectures) that can throw a spanner in the works: how do you replicate inter-service latencies, IAM, service limits and so many other aspects of the cloud that aren't tied to a specific service?

We don't even need to! Tools like the Serverless Framework (I know there are others; I just haven't used them with the same level of familiarity) give us the ability to deploy the exact same configuration of resources we deploy into production into any other environment we choose. Want a shared environment for the team's developers to test against? Just run the deploy command! Want your own "local" environment to test against? Just run the deploy command!
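For example, with the Serverless Framework's built-in stage support, each developer can deploy an isolated copy of the whole stack with a single flag (the stage names here are just hypothetical examples):

```bash
# Deploy the shared team testing environment
serverless deploy --stage dev

# Deploy your own personal copy of the exact same stack
serverless deploy --stage gareth
```

Each stage becomes its own CloudFormation stack, so developers never trip over each other's resources.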

Finally! We are in a position where we can replicate the production infrastructure 100% and, because serverless services bill for usage, it costs you nothing to deploy these environments and pennies to run your testing against them!
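To make that concrete, here is a minimal sketch of a stage-parameterised service definition; the service, function and table names are hypothetical, but the pattern means every deployed stage gets its own copies of the real cloud resources:

```yaml
# serverless.yml - a minimal, hypothetical stage-parameterised service
service: orders-api

provider:
  name: aws
  runtime: nodejs14.x
  stage: ${opt:stage, 'dev'} # each developer picks their own stage

functions:
  createOrder:
    handler: handler.createOrder
    events:
      - http:
          path: orders
          method: post

resources:
  Resources:
    OrdersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        # One table per stage, so environments never collide
        TableName: orders-${self:provider.stage}
        BillingMode: PAY_PER_REQUEST # costs nothing while idle
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
```

With pay-per-request billing, a dozen developer stages sitting idle cost exactly the same as none at all.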

So why are we still fighting so hard to maintain the local environment? Probably because of a feared loss of productivity. To answer this, I am going to point to a recently published post by a compatriot of mine at Serverless, Inc., who wrote up a great way to look at "local" development for serverless and the very few tools you need to accomplish it. Check it out here. The amount of time spent managing a local development environment, updating it, and making sure it keeps running is costly in itself. But there is another good reason not to consider it!

It's actually bad for your application!

Consider a group of developers using an emulation tool like LocalStack. It does an OK job of allowing the team's developers to build and test their serverless applications locally. However, one member of the team spots a really useful cloud service that could be used to build the best possible solution to a problem they are trying to solve. It could improve the reliability of the application as a whole and decrease costs and time to production. However, this service is not (yet) provided by the local emulation tool.

They now have three choices. Use the service anyway, meaning that testing in the cloud becomes an absolute requirement; the application is better for it, but this makes the local testing environment entirely irrelevant. Or don't use the service, and essentially hamstring the efficacy of the application because the local testing environment is sacrosanct. Or, lastly, spend days or maybe even weeks trying to find a way to replicate this service locally, delaying deployment of the feature and *still* having a substandard replica of a cloud service to test against, assuming a workable solution is found at all.

What about tools like serverless-offline? Nice and simple, letting you easily test against your HTTP endpoints, right?

Well, besides the fact that, yet again, this is not an accurate representation of the cloud and completely ignores the oddities of services such as API Gateway, IAM, etc., it is also only good for HTTP events. More and more, we see serverless applications doing more than just being glorified REST APIs. You cannot test all the other events that can trigger your Lambda functions.
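For a sense of what HTTP-only emulation leaves untested, here is a hedged sketch of a few non-HTTP event sources the Serverless Framework can wire up (all names and ARNs are hypothetical):

```yaml
functions:
  processUpload:
    handler: handler.processUpload
    events:
      # Fires when an object lands in an S3 bucket
      - s3:
          bucket: uploads-bucket
          event: s3:ObjectCreated:*
  drainQueue:
    handler: handler.drainQueue
    events:
      # Fires for messages arriving on an SQS queue
      - sqs:
          arn: arn:aws:sqs:us-east-1:123456789012:jobs-queue
  nightlyReport:
    handler: handler.nightlyReport
    events:
      # Fires on a schedule
      - schedule: cron(0 2 * * ? *)
```

None of these triggers can be exercised through a local HTTP endpoint, but all of them work immediately against a deployed stage.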

Local development seems, at face value, to be efficient and simple. It is a necessary evil in the traditional web development world, because traditional architectures are too costly and unwieldy to replicate exactly for every developer on a team. But serverless architectures cost nothing to deploy, minimal amounts (often nothing) to run tests against, and can be exact replicas of production when deployed into the cloud.

Just because it is familiar doesn't mean it's a good idea. With tools like the Serverless Framework and others offering the ability to deploy just your code in mere seconds, invoke functions on the remote Lambda directly from your local machine, and even tail the logs in your terminal for instant feedback on errors, you do not need to lose productivity, and you can drastically decrease complexity while increasing fidelity to production.
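As a rough sketch of that feedback loop using the Serverless Framework CLI (the function name is hypothetical):

```bash
# Push only the changed function's code - seconds, not a full stack deploy
serverless deploy function -f createOrder

# Invoke the real, deployed Lambda from your machine and print its logs
serverless invoke -f createOrder -l -d '{"orderId": "123"}'

# Tail the function's CloudWatch logs live in your terminal
serverless logs -f createOrder -t
```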

If anyone has any questions, sound off in the comments or hit me up on Twitter. My DMs are open and I love discussing serverless topics!

