Application integration is a common scenario in IT landscapes: a source system sends data that is consumed by a target system. Various integration middleware products provide integration capabilities for third-party software. They implement features such as data quality checks or data conversion before the data is sent to the target system.
Limitations of traditional integration middleware
As always, standard software is great for the use cases for which it was designed. Before working on AWS projects, I integrated shop floor data with ERP systems using traditional middleware. This software solution met all the business requirements. However, problems appeared when large amounts of data needed to be transferred. This particular software solution wasn't built for high traffic.
It was possible to work around this problem, but it always felt like exactly that: a workaround. For example, multiple parallel queues could be created to process data in parallel. Since this was not a standard feature, each configuration had to be implemented multiple times. It worked, but it was not perfect.
How my first AWS training made an impact
When I took my first AWS training courses (AWS Cloud Practitioner and AWS Developer Associate), I was really impressed with the capabilities that AWS offers. One of them was strong scalability - the very capability that had caused problems in past projects. Since AWS SQS is a managed message queue, I thought it would be a good fit for my use case.
Implementation of a similar project with AWS
In one of my next projects, I had the opportunity to implement a similar integration using AWS services. A source system sends messages to an SQS queue, and an AWS Lambda function processes the data and posts it to an ERP system.
From a developer's perspective, this is not complex. However, it works very well and without the scalability issues: multiple messages are processed in parallel, and it scales very quickly during message spikes. If the target system isn't available, messages can be retried later. Basically, this is exactly the use case that SQS was built for - and it is highly performant and resilient.
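To make the setup more concrete, the following CDK sketch outlines such an integration. It is a minimal illustration under assumptions, not the project's actual code: the construct names, the visibility timeout, and the dead-letter queue configuration are placeholders chosen for the example.

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sources from 'aws-cdk-lib/aws-lambda-event-sources';

export class IntegrationStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Messages that repeatedly fail (e.g. because the ERP system is down)
    // end up in a dead-letter queue and can be retried later.
    const deadLetterQueue = new sqs.Queue(this, 'IntegrationDlq');

    const queue = new sqs.Queue(this, 'IntegrationQueue', {
      visibilityTimeout: cdk.Duration.minutes(5), // should exceed the Lambda timeout
      deadLetterQueue: { queue: deadLetterQueue, maxReceiveCount: 3 },
    });

    // The Lambda function transforms each message and posts it to the ERP system.
    const lambdaFunction = new lambda.Function(this, 'ErpForwarder', {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
    });

    // Wire the queue to the function; Lambda polls the queue and scales out automatically.
    lambdaFunction.addEventSource(new sources.SqsEventSource(queue));
  }
}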
How to address complex requirements
Some use cases came with special requirements.
First, in some scenarios messages must be processed in order. SQS FIFO queues support this: by assigning the same message group ID, messages within that group are processed in sequence. In smart manufacturing applications, the group can be a production plant, a shop order, or something similar.
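As a small sketch of the producer side, here is how a message group ID could be set with the AWS SDK for JavaScript v3. The queue URL, plant ID, and payload are made up for the example, and the deduplication ID is only a placeholder.

import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});

// Hypothetical producer: all events of the same plant share a message group ID,
// so the consumer receives them in the order they were sent.
async function sendPlantEvent(plantId: string, event: object): Promise<void> {
  await sqs.send(new SendMessageCommand({
    QueueUrl: "https://sqs.eu-central-1.amazonaws.com/123456789012/shop-floor.fifo", // placeholder
    MessageBody: JSON.stringify(event),
    MessageGroupId: plantId, // ordering is guaranteed per message group
    MessageDeduplicationId: `${plantId}-${Date.now()}`, // or enable content-based deduplication
  }));
}

Messages with different group IDs can still be processed in parallel, so ordering per plant does not limit overall throughput.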
Second, there was a need to limit the number of messages sent to the target system in parallel. AWS scales quickly, but some target systems have limited capacity. This can also be handled in AWS: the maxConcurrency setting limits the number of concurrent Lambda invocations when processing an SQS queue. When using the CDK, it can be specified when subscribing the Lambda function to the SQS queue:
import * as sources from 'aws-cdk-lib/aws-lambda-event-sources';

// At most two messages are processed by the Lambda function at the same time.
lambdaFunction.addEventSource(new sources.SqsEventSource(queue, {
  maxConcurrency: 2,
}));
Summary
Compared to trends like GenAI, integration applications are not particularly glamorous. On the other hand, this example shows that basic services like SQS can be the foundation of IT applications and entire IT landscapes. Those applications can be cloud applications - or on-premises applications enhanced by cloud integration services.