This content originally appeared on DEV Community and was authored by Atsushi Suzuki
Problem
In our application architecture with Lambda + RDS Proxy + Aurora, Lambda concurrency often exceeded the limit of 1000. Although we tried increasing the concurrency limit through AWS Service Quotas, the Aurora connection limit (2000 for db.r6g.xlarge) couldn't be raised the same way, so this approach didn't address the underlying issue.
Managing performance and scaling for Amazon Aurora MySQL
Upon examining the RDS Proxy metrics, I noticed that DatabaseConnections had exceeded 4000, even though the connection limit was set to 80% of the DB maximum (1600). This was clearly abnormal, so I suspected connection pinning as a potential cause.
Cause of Connection Increase Due to Pinning
According to the RDS Proxy documentation, the following note explained the problem:
Any statement with a text size greater than 16 KB causes the proxy to pin the session to the current connection.
I suspected one of the recently added APIs might be the culprit. Checking the database interactions for this API, I found that most of the query text sizes exceeded 100 KB, which confirmed that connection pinning was indeed the cause.
Quotas and limitations for RDS Proxy
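The 16 KB threshold can be checked programmatically before a statement is sent. A minimal sketch, with illustrative names not taken from the original codebase:

```typescript
// Sketch: flag queries whose text exceeds RDS Proxy's 16 KB pinning threshold.
// The function name and the sample bulk INSERT are illustrative assumptions.
const PINNING_THRESHOLD_BYTES = 16 * 1024;

export function exceedsPinningThreshold(sql: string): boolean {
  // RDS Proxy pins on the statement's text size; byte length is the safe
  // measure when the SQL contains multi-byte characters.
  return Buffer.byteLength(sql, 'utf8') > PINNING_THRESHOLD_BYTES;
}

// Example: a bulk INSERT with thousands of value tuples easily crosses 16 KB.
const values = Array.from({ length: 2000 }, (_, i) => `(${i}, 'row-${i}')`).join(',');
const bulkInsert = `INSERT INTO items (id, name) VALUES ${values}`;

console.log(exceedsPinningThreshold('SELECT 1')); // false
console.log(exceedsPinningThreshold(bulkInsert)); // true
```

Logging this flag in a query interceptor would surface pinning candidates like the 100 KB queries above before they reach the proxy.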
Solution
While recent traffic growth had already indicated that Lambda might not be ideal for our workload, tuning the ECS on Fargate setup (task specs and task count) was still underway, so switching immediately was not feasible. To address the issue in the meantime, I decided to redirect only the problematic API’s database connections to a new, separate RDS Proxy. This would provide a short-term fix until we could fully migrate to ECS on Fargate.
Implementation Steps
1. Creating a New RDS Proxy
First, I created a new RDS Proxy specifically for the problematic API and set up a separate database connection configuration. This allowed us to use the new proxy for only the affected repository, while keeping other processes on the existing proxy.
2. Modifying NestJS Configuration
In NestJS, I used TypeOrmModule.forRootAsync to configure two data sources. The existing data source remains the default, while I added a new data source called secondDataSource for the additional RDS Proxy.
// NOTE: Existing DB connection
TypeOrmModule.forRootAsync({
  imports: [ConfigModule],
  dataSourceFactory: async (options: DataSourceOptions) => {
    const AppDataSource = new DataSource(options);
    return await AppDataSource.initialize();
  },
  useFactory: async (configService: ConfigService) => {
    const env = configService.get<string>('ENV') ?? 'local';
    const dbConfig = { ...dbConfigs[env] };
    return dbConfig;
  },
  inject: [ConfigService]
}),
// NOTE: Additional data source for the new RDS Proxy
TypeOrmModule.forRootAsync({
  name: 'secondDataSource', // Token name for the new proxy's data source
  imports: [ConfigModule],
  dataSourceFactory: async (options: DataSourceOptions) => {
    const SecondDataSource = new DataSource(options);
    return await SecondDataSource.initialize();
  },
  useFactory: async (configService: ConfigService) => {
    const env = configService.get<string>('ENV') ?? 'local';
    const dbConfig = { ...secondDbConfigs[env] };
    return dbConfig;
  },
  inject: [ConfigService]
}),
3. Adding DB Configurations
I created separate configurations for the two proxies, storing them as dbConfigs and secondDbConfigs.
export const dbConfigs: DBConfigs = {
  dev: {
    type: 'mysql',
    host: '<endpoint>',
    port: 3306,
    username: '<username>',
    password: '<password>',
    database: '<database>',
    entities,
    synchronize: false
  }
};

export const secondDbConfigs: DBConfigs = {
  dev: {
    type: 'mysql',
    host: '<endpoint>',
    port: 3306,
    username: '<username>',
    password: '<password>',
    database: '<database>',
    entities,
    synchronize: false
  }
};
4. Using the New Proxy for a Specific Repository
To use the new proxy for only the problematic repository, I specified the new data source by injecting getDataSourceToken('secondDataSource') in the repository provider. Other repositories continued using the default proxy by injecting getDataSourceToken().
export const SecondRepositoryProvider: FactoryProvider<ISecondRepository> = {
  provide: SecondRepositoryToken,
  useFactory: factory,
  inject: [getDataSourceToken('secondDataSource')]
};
Result
With these changes, queries that triggered connection pinning were routed to the new proxy, allowing DatabaseConnections in the original RDS Proxy to stay within the set limits. This approach stabilized the Lambda concurrency and allowed us to maintain operations without exceeding Aurora's connection limits.
Atsushi Suzuki | Sciencx (2024-11-02T06:59:23+00:00) Avoiding Connection Pinning in Lambda and RDS Proxy with NestJS and Proxy Splitting. Retrieved from https://www.scien.cx/2024/11/02/avoiding-connection-pinning-in-lambda-and-rds-proxy-with-nestjs-and-proxy-splitting/