SIGTERM signal to each instance being replaced. If your application doesn’t handle this signal, the process is force-killed after the termination grace period — any in-flight work (running workflows, processing queue messages, active requests) is lost.
This guide walks through configuring graceful shutdown so your workers finish their current work before exiting. While the examples use Temporal workers, the same pattern applies to any long-running process — RabbitMQ consumers, background job processors, or custom queue workers.
How it works
During a rolling deploy, Ryvn starts the new instance first. Once it’s ready, Ryvn sends SIGTERM to the old instance. Your worker catches that signal, stops polling for new tasks, and drains any in-flight work before exiting cleanly. Ryvn waits up to terminationGracePeriodSeconds before force-killing the instance.
Step 1: Handle SIGTERM in your worker
Your application needs to listen for SIGTERM and initiate a clean shutdown. The key behavior is:
- Stop accepting new work — stop polling queues or accepting new requests
- Finish in-flight work — let running tasks, workflows, or activities complete
- Exit cleanly — exit the process once all work is drained
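The three behaviors above can be sketched in plain Node.js/TypeScript. This is a minimal illustration with an in-memory task set; the names (submit, handleTask, events) are hypothetical, not a real library API:

```typescript
// Sketch of the drain pattern: stop accepting new work on SIGTERM,
// finish in-flight tasks, then exit cleanly. All names are illustrative.
const events: string[] = [];
const inFlight = new Set<Promise<void>>();
let shuttingDown = false;

async function handleTask(id: number): Promise<void> {
  await new Promise<void>((r) => setTimeout(r, 50)); // simulate work
  events.push(`task ${id} done`);
}

function submit(id: number): void {
  if (shuttingDown) return; // 1. stop accepting new work
  const p = handleTask(id);
  inFlight.add(p);
  p.finally(() => inFlight.delete(p));
}

process.on("SIGTERM", async () => {
  shuttingDown = true;           // 1. stop polling for new tasks
  await Promise.all(inFlight);   // 2. drain in-flight work
  events.push("drained");
  process.exitCode = 0;          // 3. exit cleanly once the event loop empties
});

submit(1);
submit(2);
process.kill(process.pid, "SIGTERM"); // simulate Ryvn sending SIGTERM
setTimeout(() => submit(3), 20);      // arrives after shutdown began: ignored
```

In a real worker you would typically call process.exit(0) at the end of the handler; here the process exits naturally once the event loop is empty.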
Temporal’s Worker class has a built-in shutdown() method that stops the worker from picking up new tasks and allows in-flight workflows and activities to complete. The worker.run() promise resolves once the worker has fully drained.
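A sketch of what worker.ts might look like, assuming @temporalio/worker; the workflows path and task queue name are placeholders to adapt to your project:

```typescript
import { Worker } from '@temporalio/worker';

async function main() {
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'), // hypothetical project layout
    taskQueue: 'my-task-queue',                    // hypothetical queue name
  });

  // shutdown() is void: it signals the worker to stop polling for new tasks.
  process.on('SIGTERM', () => worker.shutdown());

  // run() resolves once shutdown completes and in-flight work has drained.
  await worker.run();
  process.exit(0); // exit cleanly after the drain
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```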
worker.shutdown() is a void method that signals the worker to stop. It does not return a promise. The worker.run() promise resolves once shutdown is complete and all in-flight work has drained.

Step 2: Ensure the process receives SIGTERM
A common pitfall is wrapping your application in a shell script or bash -c "..." in your Dockerfile. When this happens, bash is PID 1 and your application is a child process — SIGTERM is sent to bash, which does not forward it to child processes by default.
There are two ways to fix this:
- Use exec form (recommended)
- Use tini
Use the exec form of CMD so your application is PID 1 and receives signals directly. If you need to run setup commands before starting your app, use an entrypoint script (e.g. docker-entrypoint.sh) that ends with exec so your application becomes PID 1.
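For example — the base image, file paths, and setup command below are placeholders, not prescriptions:

```dockerfile
FROM node:20-slim
WORKDIR /app
COPY . .
# Exec form: node is PID 1 and receives SIGTERM directly.
CMD ["node", "dist/worker.js"]
# Alternative when setup commands are needed: an entrypoint script that
# ends with `exec`, so the app replaces the shell as PID 1.
# ENTRYPOINT ["./docker-entrypoint.sh"]
```

```sh
#!/bin/sh
# docker-entrypoint.sh: run setup, then exec so node becomes PID 1.
./run-migrations.sh   # placeholder setup step
exec node dist/worker.js
```

If you prefer tini, install it in the image and set it as the entrypoint (e.g. ENTRYPOINT ["/usr/bin/tini", "--"]) so it forwards signals to your application.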
Step 3: Set terminationGracePeriodSeconds
By default, Ryvn gives your instance 30 seconds to shut down after receiving SIGTERM. If your workers run tasks that take longer than 30 seconds, you need to increase this value to match your longest expected task duration.
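For instance, if your longest task can run up to 10 minutes, the value might look like this in your installation values (the key name comes from this guide; the surrounding file structure is illustrative and may differ in Ryvn’s actual schema):

```yaml
# installation values (structure illustrative)
terminationGracePeriodSeconds: 600  # longest task runs up to 10 minutes
```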
In the dashboard, navigate to Environments → select your environment → select the installation → Settings → Values, then set terminationGracePeriodSeconds. With GitOps, set terminationGracePeriodSeconds in your installation configuration.

Autoscaling considerations
If you’re using autoscaling with Temporal or RabbitMQ triggers, the same graceful shutdown pattern applies during scale-down events. When autoscaling reduces the replica count, Ryvn terminates excess instances with SIGTERM — your workers need to drain their in-flight work before exiting.
The SIGTERM handler ensures that scale-down events don’t kill active work. Combined with an appropriate terminationGracePeriodSeconds, workers will finish their current tasks even as the installation scales down.
Checklist
Before deploying, verify the following:

- Signal handler: your worker code listens for SIGTERM and initiates a clean shutdown (stops accepting new work and drains in-flight tasks).
- Process receives signals: your Dockerfile uses exec form (CMD ["node", "..."]) or tini so that your application process is PID 1 and receives SIGTERM directly.
- Grace period is sufficient: terminationGracePeriodSeconds in your Ryvn installation config is set to at least the duration of your longest-running task.

Common issues
Worker is killed immediately on deploy

Your process likely isn’t receiving SIGTERM. Check your Dockerfile: if you’re using bash -c, the signal goes to bash, not your app. Switch to exec form or use tini. See Step 2.

Worker shuts down but tasks are still lost

Your terminationGracePeriodSeconds may be too short. Ryvn force-kills the instance after this period. Increase it to exceed your longest task duration. See Step 3.

Worker hangs and never exits

Your shutdown handler may not be calling process.exit() after draining. For Temporal workers, make sure you call process.exit(0) after worker.run() resolves. For custom workers, ensure your drain-check logic eventually exits.

Multiple workers in the same instance

If your instance runs multiple worker processes, each one needs its own SIGTERM handler. Consider using tini as an init process so signals are forwarded to all children, or structure your code to shut down all workers from a single signal handler.