This content originally appeared on DEV Community and was authored by Tugay Ersoy
Nowadays, services are deployed frequently, often several times a day, as new features are developed for them. For some services a momentary interruption is not a problem; for others, even a single failed request can hurt the customer experience. In this article, I will discuss what we can do to eliminate such interruptions, both on the Kubernetes side and in a service developed with ASP.NET.
You can access the Turkish version of this article
Content
- Termination, creation, and update strategies of pods on the Kubernetes side
- Examination of the `Host` structure on the .NET side
- Examination of the `IHostedLifecycleService`, `IHostedService`, and `IHostApplicationLifetime` interfaces
- Examination of the Host's shutdown process
- Creating a Kubernetes cluster with `Kind`
- Creating a sample `.NET` project, `Dockerfile`, and `Deployment` manifest
- Deploying the service to the Kubernetes cluster and performing the test
- Delay between Kubernetes Ingress and Kubernetes Control Plane
Update Strategy on Kubernetes
On the Kubernetes side, the update strategy is specified under `.spec.strategy.type` in the Deployment object's manifest. The strategy is either `Recreate` or `RollingUpdate`, with `RollingUpdate` being the default when none is specified.
> This article assumes that the application is deployed as a Deployment object. For StatefulSet and DaemonSet objects, the strategy is specified under `.spec.updateStrategy.type`, and the available strategies are `OnDelete` and `RollingUpdate`.
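For comparison, a minimal sketch of how a StatefulSet would declare its update strategy under the `.spec.updateStrategy.type` field mentioned above (illustrative fragment only):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate   # or OnDelete
```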
Recreate
spec:
  replicas: 10
  strategy:
    type: Recreate
In the `Recreate` strategy, all pods are terminated first, and only then are the pods of the new version brought up. With the manifest above, Kubernetes will kill all 10 pods before creating the new ones, so this strategy almost certainly causes downtime: no new pod exists until every old pod has been terminated.
RollingUpdate
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
In the `RollingUpdate` strategy, pods are replaced with the new version gradually: first a pod with the new image is created, and once it is running, a pod with the old image is killed. This continues until every pod has been replaced.

The `RollingUpdate` strategy is tuned with the `maxSurge` and `maxUnavailable` parameters:
- `maxUnavailable`: Specifies how many pods may be unavailable during the update. The value can be given as a percentage or as an absolute number of pods. It is optional and defaults to `25%`.
- `maxSurge`: Specifies how many pods may exist above the replica count declared in the Deployment manifest. Like `maxUnavailable`, it accepts a percentage or an absolute number. It is optional and defaults to `25%`.

> When percentages are used and the resulting pod count is not an integer, the value is rounded down for `maxUnavailable` and up for `maxSurge`. For example, with 1 replica, `maxSurge` rounds up to 1 and `maxUnavailable` rounds down to 0. In that case a new pod is created first, and only after it reaches the running state is the existing pod terminated.
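Building on the rounding rules above, the two fields can also be pinned explicitly instead of relying on the `25%` defaults. A common zero-downtime sketch (the values here are illustrative):

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take an old pod down before its replacement is ready
      maxSurge: 1         # create at most one extra pod at a time
```

With `maxUnavailable: 0`, full capacity is preserved throughout the rollout, at the cost of a slower update.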
With the `RollingUpdate` strategy, Kubernetes deploys the application with minimal interruption. Below is a sample Deployment manifest, followed by the stages that occur during a rolling update:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: "0.3"
              memory: "75Mi"
            requests:
              cpu: "0.1"
              memory: "50Mi"
  strategy:
    type: RollingUpdate
When the `kubectl set image deployment/nginx-deployment nginx=nginx:1.14.1` command is run to update the image version, the following stages take place:
- Since the computed `maxSurge` value is 1, a pod with the new image is brought up first.
- After the new pod transitions to `Running`, Kubernetes sends a `SIGTERM` signal to an old pod. New requests are no longer routed to that pod, and its open connections are given up to `spec.terminationGracePeriodSeconds` to complete the requests already in progress.
- If terminating the pod takes longer than `spec.terminationGracePeriodSeconds`, Kubernetes sends a `SIGKILL` signal and kills the pod.
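The two-phase termination in the last two steps can be sketched in plain shell. This is an illustrative stand-alone demo, not Kubernetes code: the `run_demo` function and its echo messages are invented for the example, and the 30-second grace period is scaled down to 2 seconds.

```shell
#!/bin/sh
# Sketch of the kubelet's two-phase termination: send SIGTERM,
# wait out the grace period, then SIGKILL whatever survived.
run_demo() {
  # A stand-in "container" that ignores SIGTERM, like a misbehaving app:
  ( trap 'echo "app: got SIGTERM, still working"' TERM
    while :; do sleep 1; done ) &
  APP=$!
  sleep 1                       # give the child time to install its trap
  kill -TERM "$APP"             # phase 1: polite shutdown request
  sleep 2                       # terminationGracePeriodSeconds (scaled down)
  if kill -KILL "$APP" 2>/dev/null; then
    echo "kubelet: grace period expired, sent SIGKILL"
  fi
  wait "$APP" 2>/dev/null || true
  echo "pod terminated"
}
run_demo
```

A well-behaved application would finish its in-flight work and exit on `SIGTERM`, so the `SIGKILL` phase would never be reached.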
For the deployment above, the pod statuses when updating the image version are given in order below:
kubectl get pods -n default -w
NAME READY STATUS RESTARTS AGE
nginx-deployment-59994fb97c-5j4fv 1/1 Running 0 8m8s
nginx-deployment-59994fb97c-g789c 1/1 Running 0 8m9s
nginx-deployment-59994fb97c-nddlf 1/1 Running 0 8m9s
nginx-deployment-5fffc966ff-8crmb 0/1 Pending 0 1s
nginx-deployment-5fffc966ff-8crmb 0/1 Pending 0 1s
nginx-deployment-5fffc966ff-8crmb 0/1 ContainerCreating 0 1s
nginx-deployment-5fffc966ff-8crmb 1/1 Running 0 1s
nginx-deployment-59994fb97c-5j4fv 1/1 Terminating 0 8m16s
nginx-deployment-5fffc966ff-52knq 0/1 Pending 0 0s
nginx-deployment-5fffc966ff-52knq 0/1 Pending 0 0s
nginx-deployment-5fffc966ff-52knq 0/1 ContainerCreating 0 0s
nginx-deployment-59994fb97c-5j4fv 0/1 Terminating 0 8m16s
nginx-deployment-5fffc966ff-52knq 1/1 Running 0 1s
nginx-deployment-59994fb97c-g789c 1/1 Terminating 0 8m18s
nginx-deployment-5fffc966ff-jwmtt 0/1 Pending 0 0s
nginx-deployment-5fffc966ff-jwmtt 0/1 Pending 0 0s
nginx-deployment-5fffc966ff-jwmtt 0/1 ContainerCreating 0 0s
nginx-deployment-59994fb97c-5j4fv 0/1 Terminating 0 8m17s
nginx-deployment-59994fb97c-5j4fv 0/1 Terminating 0 8m17s
nginx-deployment-59994fb97c-5j4fv 0/1 Terminating 0 8m17s
nginx-deployment-59994fb97c-g789c 0/1 Terminating 0 8m18s
nginx-deployment-5fffc966ff-jwmtt 1/1 Running 0 1s
nginx-deployment-59994fb97c-g789c 0/1 Terminating 0 8m19s
nginx-deployment-59994fb97c-g789c 0/1 Terminating 0 8m19s
nginx-deployment-59994fb97c-g789c 0/1 Terminating 0 8m19s
nginx-deployment-59994fb97c-nddlf 1/1 Terminating 0 8m19s
nginx-deployment-59994fb97c-nddlf 0/1 Terminating 0 8m19s
nginx-deployment-59994fb97c-nddlf 0/1 Terminating 0 8m20s
nginx-deployment-59994fb97c-nddlf 0/1 Terminating 0 8m20s
nginx-deployment-59994fb97c-nddlf 0/1 Terminating 0 8m20s
> The `spec.terminationGracePeriodSeconds` value is defined on a per-pod basis. Its default value is 30 seconds.
>
> For sidecar containers, the main container in the pod is terminated first, and the `SIGTERM` signal is then sent to the sidecar containers in the reverse order of their definition. This ensures a sidecar is only terminated once it is no longer needed within the pod.
>
> The nginx application in this Deployment kills its process as soon as it receives `SIGTERM`, dropping open connections. For a graceful shutdown, the `SIGTERM` signal should therefore be caught and a `SIGQUIT` signal sent instead, since nginx performs its graceful shutdown on `SIGQUIT`. This can be done with a small shell wrapper.
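A minimal sketch of such a wrapper script. Note the assumptions: in a real image the child process would be `nginx -g "daemon off;"`, but here `sleep` stands in for nginx so the script can run anywhere, and the background `kill -TERM` simulates Kubernetes terminating the pod.

```shell
#!/bin/sh
# Entrypoint sketch: translate SIGTERM (what Kubernetes sends) into
# SIGQUIT (what nginx interprets as "shut down gracefully").
sleep 30 &                     # stand-in for: nginx -g "daemon off;"
CHILD=$!
trap 'echo "forwarding SIGTERM as SIGQUIT"; kill -QUIT "$CHILD"' TERM

# Simulate Kubernetes terminating the pod one second from now:
( sleep 1; kill -TERM $$ ) &

wait "$CHILD" || true              # interrupted when the TERM trap fires ...
wait "$CHILD" 2>/dev/null || true  # ... then reap the child for real
echo "graceful shutdown complete"
```

In a container image, a script like this would be used as the `ENTRYPOINT` so that PID 1 forwards the translated signal to nginx.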
.Net Host Model
The `Host` model introduced with the .NET Core framework encapsulates the resources and lifecycle functions an application needs within a single `Host` object, and aims to remove much of the boilerplate included in the old default templates. The following functionality is provided by default and can be configured as needed:

- Dependency Injection (DI)
- Logging
- Configuration
- App shutdown process
- `IHostedService` implementation
The `Host` model and the application templates built on it have changed across .NET versions. Three approaches are shown below.
With `.NET Core 2.x`, the `Host` is created and configured via the `CreateDefaultBuilder` method of the `WebHost` class.
public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}
Above is the `Program.cs` file generated when an `ASP.NET Core` project is created from scratch with the `dotnet` CLI on that version.
Between `.NET Core 3.x` and `.NET 5`, a major change was made to the `Host` model so that web projects are also created through the generic `Host`. With this change, worker services, gRPC services, and Windows services can all be developed on the same base code through the `Host` model, which is now built on the `IHostBuilder` interface instead of `IWebHostBuilder`.
public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}
Above is the `Program.cs` file of an `ASP.NET` project created from scratch with the `dotnet` CLI between `.NET Core 3.x` and `.NET 5`.
Note that in both approaches above, the `Startup` class is tightly coupled to the web application, yet the `Host` model built on `IHostBuilder` is also meant to support non-web applications. Such applications may not need a `Startup` class at all. (For example, the `Configure` method sets up the application's middleware pipeline, but a worker service needs no such configuration.) The framework developers solved this with the `ConfigureWebHostDefaults` extension method.
With `.NET 6`, the configuration previously split across two files (`Startup.cs` and `Program.cs`) was combined into a single file, and the `Host` model was positioned on the `WebApplicationBuilder` class. The migration notes refer to this approach as `Minimal Hosting` and position `Minimal API` as the default web template.
namespace Example.Processor.Api;

public class Program
{
    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);

        // Add services to the container.
        builder.Services.AddControllers();
        // Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
        builder.Services.AddEndpointsApiExplorer();
        builder.Services.AddSwaggerGen();

        var app = builder.Build();

        // Configure the HTTP request pipeline.
        if (app.Environment.IsDevelopment())
        {
            app.UseSwagger();
            app.UseSwaggerUI();
        }

        app.UseHttpsRedirection();
        app.UseAuthorization();
        app.MapControllers();

        app.Run();
    }
}
Above is the `Program.cs` file of an `ASP.NET` project created from scratch with the `dotnet` CLI on `.NET 6`. As you can see, all configuration lives in a single file and there is no `Startup` class. In addition, the `WebApplication.CreateBuilder` method is used instead of `Host.CreateDefaultBuilder`, and it returns a `WebApplicationBuilder` instead of an `IHostBuilder`. With `.NET 7`, the .NET team introduced the `Host.CreateApplicationBuilder` method and recommended continuing the `Host` model approach for web and non-web applications as shown below. You can reach David Fowler's comment on this topic.
- Example approach for Web Applications
var builder = WebApplication.CreateBuilder();
builder.Logging.AddConsole();
builder.Services.AddOptions<MyOptions>().BindConfiguration("MyConfig");
builder.Services.AddHostedService<MyWorker>();
var app = builder.Build();
app.MapGet("/", () => "Hello World");
app.Run();
- Example approach for Non-Web Applications
var builder = Host.CreateApplicationBuilder();
builder.Logging.AddConsole();
builder.Services.AddOptions<MyOptions>().BindConfiguration("MyConfig");
builder.Services.AddHostedService<MyWorker>();
var host = builder.Build();
host.Run();
The `WebApplication` class implements three interfaces needed by a web application:

- `IHost` - Responsible for starting and stopping the Host.
- `IApplicationBuilder` - Used to build middleware pipelines.
- `IEndpointRouteBuilder` - Used to define endpoints.
At the same time, the following three services are registered in the DI container automatically when the `HostApplicationBuilder.Build()` method is called:

- `IHostApplicationLifetime` - Injected into any class to handle graceful shutdown and post-startup operations.
- `IHostLifetime` - Controls when the application starts and stops.
- `IHostEnvironment` - Provides information such as the application name, content root path, and environment name.
Examining the IHostApplicationLifetime, IHostedLifecycleService, and IHostedService Interfaces
To understand these interfaces and the lifetime events implemented with them, an example `IHostedService` implementation is provided below.
> `BackgroundService` is an `abstract` class used to create background services. `IHostedService` is the interface implemented by `BackgroundService`; it contains the methods the `Host` uses to manage the service. `Worker Service` is a template for creating background services, created with the `dotnet new worker` command.
>
> It is possible to create background services by implementing only `IHostedService`, but graceful shutdown then has to be implemented manually by listening to the application's lifetime events. With `BackgroundService`, graceful shutdown is easier: check the `CancellationToken` passed to the overridden `ExecuteAsync` method while processing.
public class Worker : IHostedLifecycleService
{
    private readonly ILogger<Worker> _logger;

    public Worker(ILogger<Worker> logger, IHostApplicationLifetime applicationLifetime)
    {
        applicationLifetime.ApplicationStarted.Register(OnStarted);
        applicationLifetime.ApplicationStopping.Register(OnStopping);
        applicationLifetime.ApplicationStopped.Register(OnStopped);
        _logger = logger;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        _logger.LogInformation("IHostedService StartAsync has been called");
        return Task.CompletedTask;
    }

    public Task StartedAsync(CancellationToken cancellationToken)
    {
        _logger.LogInformation("IHostedLifecycleService StartedAsync has been called");
        return Task.CompletedTask;
    }

    public Task StartingAsync(CancellationToken cancellationToken)
    {
        _logger.LogInformation("IHostedLifecycleService StartingAsync has been called");
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        _logger.LogInformation("IHostedService StopAsync has been called");
        return Task.CompletedTask;
    }

    public Task StoppedAsync(CancellationToken cancellationToken)
    {
        _logger.LogInformation("IHostedLifecycleService StoppedAsync has been called");
        return Task.CompletedTask;
    }

    public Task StoppingAsync(CancellationToken cancellationToken)
    {
        _logger.LogInformation("IHostedLifecycleService StoppingAsync has been called");
        return Task.CompletedTask;
    }

    private void OnStarted()
    {
        _logger.LogInformation("IHostApplicationLifetime OnStarted has been called");
    }

    private void OnStopped()
    {
        _logger.LogInformation("IHostApplicationLifetime OnStopped has been called");
    }

    private void OnStopping()
    {
        _logger.LogInformation("IHostApplicationLifetime OnStopping has been called");
    }
}
I mentioned that I implement `IHostedService` because I want to do background processing. The reason I use `IHostedLifecycleService` here is that this interface inherits from `IHostedService`. It was introduced with `.NET 8` and makes it easier to hook into the application's lifecycle. It adds four new methods: `StartingAsync`, `StartedAsync`, `StoppingAsync`, and `StoppedAsync`. I also mentioned that `IHostApplicationLifetime` is registered automatically when the host is built. Its three properties are in fact `CancellationToken`s that are triggered according to the Host's lifetime, and I register callbacks on them in the constructor above.
I register the `Hosted Service` created above in the DI container as follows:
using Example.Worker.Service;
var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddHostedService<Worker>();
var host = builder.Build();
host.Run();
When I run the application and then terminate it with `CTRL+C`, I get the following console output:
info: Example.Worker.Service.Worker[0]
IHostedLifecycleService StartingAsync has been called
info: Example.Worker.Service.Worker[0]
IHostedService StartAsync has been called
info: Example.Worker.Service.Worker[0]
IHostedLifecycleService StartedAsync has been called
info: Example.Worker.Service.Worker[0]
IHostApplicationLifetime OnStarted has been called
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: C:\Codes\Example.Worker.Service
info: Example.Worker.Service.Worker[0]
IHostApplicationLifetime OnStopping has been called
info: Microsoft.Hosting.Lifetime[0]
Application is shutting down...
info: Example.Worker.Service.Worker[0]
IHostedLifecycleService StoppingAsync has been called
info: Example.Worker.Service.Worker[0]
IHostedService StopAsync has been called
info: Example.Worker.Service.Worker[0]
IHostedLifecycleService StoppedAsync has been called
info: Example.Worker.Service.Worker[0]
IHostApplicationLifetime OnStopped has been called
C:\Codes\Example.Worker.Service\bin\Debug\net8.0\Example.Worker.Service.exe (process 35456) exited with code 0.
When we check the output, it appears that the order is as follows:
- `IHostedLifecycleService.StartingAsync` is called. This method runs before the application starts.
- `IHostedService.StartAsync` is called. This method runs when the Host is ready to start the service.
- `IHostedLifecycleService.StartedAsync` is called immediately after `IHostedService.StartAsync`, that is, once startup has completed.
- `IHostApplicationLifetime.ApplicationStarted` indicates that the Host has fully started.
After the application is stopped, it is observed that the following sequence takes place:
- `IHostApplicationLifetime.ApplicationStopping` is triggered when the application begins its graceful shutdown.
- `IHostedLifecycleService.StoppingAsync` is called just before the application starts shutting down, right before `IHostedService.StopAsync`.
- `IHostedService.StopAsync` performs the graceful shutdown of the service.
- `IHostedLifecycleService.StoppedAsync` is called when the graceful shutdown has completed.
- `IHostApplicationLifetime.ApplicationStopped` indicates that the application's graceful shutdown has finished.
Another important point here is that the application was terminated with the `CTRL+C` combination. The Host registers the `IHostLifetime` interface as `ConsoleLifetime` by default, and this applies to both web applications and background services. I mentioned that `IHostLifetime` controls when the application starts and stops. With `ConsoleLifetime`, pressing `CTRL+C` sends a `SIGINT` signal to the application; for the reasons explained in the first section, Kubernetes instead sends a `SIGTERM` signal to the container to terminate the pod. Either signal results in a graceful shutdown.
> Before `.NET 6`, POSIX signals were not supported and handled according to signal type. Since `.NET 6`, `ConsoleLifetime` performs a graceful shutdown by listening for the `SIGINT`, `SIGQUIT`, and `SIGTERM` signals.
Host's Shutdown Process
I mentioned that, by default, in `.NET 6` and later the generic host implements the `IHostLifetime` interface as `ConsoleLifetime`. The Host is gracefully terminated when `ConsoleLifetime` receives one of the following signals:
- `SIGINT` or `CTRL+C`
- `SIGQUIT` or `CTRL+BREAK` (Windows)
- `SIGTERM` (the signal Kubernetes sends to the container to terminate the pod, also sent by `docker stop`)
> Before `.NET 6`, a graceful shutdown could not be performed when the `SIGTERM` signal arrived. As a workaround, `ConsoleLifetime` listened to the `System.AppDomain.ProcessExit` event, blocked the `ProcessExit` thread, and waited for the host to stop.
The graceful shutdown proceeds as follows, in order:
- A `SIGTERM` signal arrives from Kubernetes or the user. As a result, the `StopApplication()` method on `IHostApplicationLifetime` is triggered and the `ApplicationStopping` event fires. The `IHost.WaitForShutdownAsync` method has been listening for this event, so when it fires, the `Main` execution is unblocked.
- The `IHost.StopAsync()` method is triggered. It calls `IHostedService.StopAsync()` on each hosted service, ensuring they are stopped, and then fires the events indicating that the host has stopped.
- Finally, `IHost.WaitForShutdownAsync` completes, any remaining code in the application executes, and the graceful shutdown is done.
It is possible to set the `ShutdownTimeout` by configuring the Host. This value is the timeout applied to the `IHost.StopAsync()` method; its default is 30 seconds.
Setting Up a Kubernetes Cluster with Kind
With `Kind`, an open-source project, it is possible to quickly set up Kubernetes clusters locally. The Kubernetes cluster that the sample application is deployed to in this article was created with `Kind`.
First, we install the `Kind` CLI locally (on Windows) with the commands below:
curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.23.0/kind-windows-amd64
Move-Item .\kind-windows-amd64.exe c:\some-dir-in-your-PATH\kind.exe
> Don't forget to add the chosen directory to the `Path` environment variable, so you don't have to change into the CLI's directory every time you want to run it.
Then, to set up a cluster with 3 worker nodes, the following YAML file is created:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 8081
        protocol: TCP
      - containerPort: 443
        hostPort: 8443
        protocol: TCP
  - role: worker
  - role: worker
  - role: worker
In the YAML file above, there is a definition for each node role, along with port mappings that forward requests sent to local ports 8081 and 8443 to the ingress controller we will install on the cluster.
kind create cluster --config .\kind-cluster.yaml
I set up the cluster with the command above:
kind create cluster --config .\kind-config\kind-cluster.yaml
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.30.0) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
Then I install the ingress controller with the command I provided below:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
> This manifest file is prepared specifically for `Kind` and contains patches and settings for the port-forwarding setup above.
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
After these operations are completed, the cluster is now ready to use.
Creating the Web API
We create a Web API project with `.NET 8` and modify the `Program.cs` file as shown below.
public class Program
{
    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);

        // Add services to the container.
        builder.Services.AddControllers();

        var app = builder.Build();

        // Configure the HTTP request pipeline.
        if (app.Environment.IsDevelopment())
        {
        }

        app.UseAuthorization();
        app.MapControllers();

        app.Run();
    }
}
> Unnecessary services have been removed.
I rename the controller and action as shown below. In the previous section I mentioned that the default `ShutdownTimeout` is 30 seconds, so I make the action wait 35 seconds before returning a response, in order to provoke an error.
[ApiController]
[Route("[controller]")]
public class PingPongController : ControllerBase
{
    private readonly ILogger<PingPongController> _logger;

    public PingPongController(ILogger<PingPongController> logger)
    {
        _logger = logger;
    }

    [HttpGet(Name = "Ping")]
    public async Task<IActionResult> Ping()
    {
        await Task.Delay(TimeSpan.FromSeconds(35));
        return Ok("Pong");
    }
}
I create the deployment manifest for the application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pingapi-deployment
  labels:
    app: pingapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pingapi
  template:
    metadata:
      labels:
        app: pingapi
    spec:
      containers:
        - name: pingapi
          image: pingapi:0.1
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: "0.5"
              memory: "150Mi"
            requests:
              cpu: "0.1"
              memory: "150Mi"
> In the images released with `.NET 8`, the default port the application listens on was changed from `80` to `8080`. For this reason, the port is set to `8080` in the manifest. See the related doc.
I create the `Dockerfile` as shown below:
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
WORKDIR /app
COPY ./Publish .
ENTRYPOINT ["dotnet", "Example.Ping.Api.dll"]
Deploying the Service and Performing the Test
First, I publish the application into the `Publish` directory:
dotnet publish -o ./Publish
Then I build the image:
docker build -t pingapi:0.1 .
Before applying the deployment manifest on the cluster, I load the tagged image onto each node with the command below. Otherwise, I would have to push the image to a registry and have the cluster pull it from there; Kind provides a convenient shortcut here.
kind load docker-image pingapi:0.1
kind load docker-image pingapi:0.1
Image: "pingapi:0.1" with ID "sha256:2e5cfec8e475ed2d2ccfd0ae9753a7f5feda0e01de0081718ab678203d25edcf" not yet present on node "kind-worker3", loading...
Image: "pingapi:0.1" with ID "sha256:2e5cfec8e475ed2d2ccfd0ae9753a7f5feda0e01de0081718ab678203d25edcf" not yet present on node "kind-worker", loading...
Image: "pingapi:0.1" with ID "sha256:2e5cfec8e475ed2d2ccfd0ae9753a7f5feda0e01de0081718ab678203d25edcf" not yet present on node "kind-control-plane", loading...
Image: "pingapi:0.1" with ID "sha256:2e5cfec8e475ed2d2ccfd0ae9753a7f5feda0e01de0081718ab678203d25edcf" not yet present on node "kind-worker2", loading...
I apply the deployment with the command below:
kubectl apply -f .\Kubernetes\deployment.yaml
I forward the container's `8080` port with the command below:
kubectl port-forward deployment/pingapi-deployment -n default 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Now we can start the test. First, I send a request with the `httpstat http://localhost:8080/pingpong/ping` command, then I run `kubectl delete pod/pingapi-deployment-5c78cbdfc-bfd9b` to delete the pod. As expected, because `terminationGracePeriodSeconds` is left at its 30-second default on the Kubernetes side (we didn't set it in the manifest) and the `Host`'s `ShutdownTimeout` also defaults to 30 seconds on the .NET side, the connection carrying our in-flight request is terminated, as the error shows.
> A similar scenario can be tested by deploying a new image version. The goal here is simply to send a `SIGTERM` signal to the pod, so the test was carried out by deleting the pod.
To overcome this situation, I first configure the `Host` on the application side, then update the `spec.terminationGracePeriodSeconds` value in the deployment manifest.
The new version of the `Program.cs` file is created as shown below:
namespace Example.Ping.Api;

public class Program
{
    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
        builder.Host.ConfigureHostOptions(opts => opts.ShutdownTimeout = TimeSpan.FromSeconds(45));

        // Add services to the container.
        builder.Services.AddControllers();
        // Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
        builder.Services.AddEndpointsApiExplorer();

        var app = builder.Build();

        // Configure the HTTP request pipeline.
        if (app.Environment.IsDevelopment())
        {
        }

        app.UseAuthorization();
        app.MapControllers();

        app.Run();
    }
}
The new version of the `deployment.yaml` file is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pingapi-deployment
  labels:
    app: pingapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pingapi
  template:
    metadata:
      labels:
        app: pingapi
    spec:
      containers:
        - name: pingapi
          image: pingapi:0.2
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: "0.5"
              memory: "150Mi"
            requests:
              cpu: "0.1"
              memory: "150Mi"
      terminationGracePeriodSeconds: 50
> We update the image in the deployment manifest to the newly built `pingapi:0.2`.
After making this update, we run the test again and observe that no errors occur.
Delay Between Kubernetes Ingress and Control Plane
Despite the adjustments made above, a few requests may still fail, especially during the `RollingUpdate` phase of heavily used services. This is due to the delay between the ingress and the control plane.
The reason is that the ingress and the Kubernetes control plane are separate entities that operate independently. When Kubernetes wants to terminate a pod, the control plane removes it from the Service's endpoints; the ingress then notices this and stops routing requests to the terminating pod. Because the ingress only refreshes its view of the Services at certain intervals, there is a delay between the two operations, which causes a small number of requests to be forwarded to pods that are already in `Terminating` status.
On the .NET side, once `IHost.StopAsync()` is called, the application no longer accepts new requests on already open connections and refuses new connections entirely. The delay described above therefore means that new requests can still arrive after `IHost.StopAsync()` has begun, and those requests fail from the caller's perspective.
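For completeness, the same delay can also be achieved purely on the Kubernetes side with a `preStop` hook that sleeps before the container receives `SIGTERM`, giving the ingress time to sync its endpoint list. This is only a hedged sketch, not part of the approach used in this article, and it assumes a `sleep` binary exists inside the image (not a given for slim base images):

```yaml
# Hypothetical alternative: delay SIGTERM delivery on the Kubernetes side.
containers:
  - name: pingapi
    image: pingapi:0.2
    lifecycle:
      preStop:
        exec:
          command: ["sleep", "5"]
```

Note that `terminationGracePeriodSeconds` must be large enough to cover the sleep plus the application's own graceful shutdown, since the grace period clock starts before the `preStop` hook runs.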
As a solution to this situation, I will use the method below, which is recommended by the dotnet team. ref
In previous sections, it was mentioned that `IHostLifetime` controls when the application starts and stops. Therefore, we first implement a new `IHostLifetime`:
using System.Runtime.InteropServices;

public class DelayedShutdownHostLifetime : IHostLifetime, IDisposable
{
    private readonly IHostApplicationLifetime _applicationLifetime;
    private readonly TimeSpan _delay;
    private IEnumerable<IDisposable>? _disposables;

    public DelayedShutdownHostLifetime(IHostApplicationLifetime applicationLifetime, TimeSpan delay)
    {
        _applicationLifetime = applicationLifetime;
        _delay = delay;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        return Task.CompletedTask;
    }

    public Task WaitForStartAsync(CancellationToken cancellationToken)
    {
        // Register handlers for the shutdown signals so we control
        // when StopApplication() is actually invoked.
        _disposables = new IDisposable[]
        {
            PosixSignalRegistration.Create(PosixSignal.SIGINT, HandleSignal),
            PosixSignalRegistration.Create(PosixSignal.SIGQUIT, HandleSignal),
            PosixSignalRegistration.Create(PosixSignal.SIGTERM, HandleSignal)
        };
        return Task.CompletedTask;
    }

    protected void HandleSignal(PosixSignalContext ctx)
    {
        // Cancel the default handling of the signal, then trigger the
        // graceful shutdown only after the configured delay has elapsed.
        ctx.Cancel = true;
        Task.Delay(_delay).ContinueWith(t => _applicationLifetime.StopApplication());
    }

    public void Dispose()
    {
        foreach (var disposable in _disposables ?? Enumerable.Empty<IDisposable>())
        {
            disposable.Dispose();
        }
    }
}
Looking at the implementation: when the application first starts up, handlers are registered for certain `POSIX` signals. When one of these signals reaches the application, a delay is introduced with `Task.Delay` before `IHostApplicationLifetime.StopApplication()` is invoked. Thanks to this delay, the graceful shutdown does not begin the moment the signal is received, and new requests arriving at the pod during the delay can still be handled.
As the final step, I register the newly created `IHostLifetime` implementation in `Program.cs`:
public class Program
{
    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
        builder.Host.ConfigureHostOptions(opts => opts.ShutdownTimeout = TimeSpan.FromSeconds(45));

        // Add services to the container.
        builder.Services.AddControllers();
        // Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
        builder.Services.AddEndpointsApiExplorer();

        // Override the default lifetime with the delayed implementation.
        builder.Services.AddSingleton<IHostLifetime>(sp =>
            new DelayedShutdownHostLifetime(sp.GetRequiredService<IHostApplicationLifetime>(), TimeSpan.FromSeconds(5)));

        var app = builder.Build();

        // Configure the HTTP request pipeline.
        if (app.Environment.IsDevelopment())
        {
        }

        app.UseAuthorization();
        app.MapControllers();
        app.Run();
    }
}
It was mentioned earlier that `IHostApplicationLifetime` and `IHostLifetime` are registered as services within the `HostApplicationBuilder.Build()` method. Registering `IHostLifetime` again here overrides the default `ConsoleLifetime`, because the container resolves the last registration for a service type, ensuring the specified `DelayedShutdownHostLifetime` implementation is used.
With this final change, a zero-downtime deployment process has been achieved on the Kubernetes side. Thank you for reading :)
You can check the application code.
References
https://learn.microsoft.com/en-us/dotnet/core/extensions/workers
https://learn.microsoft.com/en-us/dotnet/core/extensions/generic-host?tabs=appbuilder#host-shutdown
https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.hosting.ihostlifetime?view=net-8.0
https://blog.sebastian-daschner.com/entries/zero-downtime-updates-kubernetes
Tugay Ersoy | Sciencx (2024-08-04T13:59:27+00:00) Zero-Downtime Deployment for ASP.NET Applications in Kubernetes. Retrieved from https://www.scien.cx/2024/08/04/zero-downtime-deployment-for-asp-net-applications-in-kubernetes/