Jobs are scheduled or on-demand tasks that run to completion, making them ideal for maintenance, backups, and periodic processing. Unlike servers, jobs execute on a schedule or manual trigger, complete their task, and terminate. They integrate with maintenance windows for coordinated operations.
Jobs run as Kubernetes CronJobs under the hood, providing reliable scheduling and execution with built-in retry mechanisms.

Installation Configuration

When you deploy a job service to an environment, it creates an installation. Each installation can run on a cron schedule, be triggered manually via Run Now, or both.

CronJob Configuration

Configure scheduling and concurrency behavior:
config: |
  schedule: "0 * * * *"        # Run every hour
  concurrencyPolicy: Forbid    # Prevent overlapping runs

Schedule (Optional)

The schedule field uses standard cron expression format to define when the job runs. When omitted, the job is deployed as a manual-only job: it never runs automatically and can only be triggered via the Run Now button in the UI or the trigger-job CLI command.
# Every 5 minutes
schedule: "*/5 * * * *"

# Every hour at minute 0
schedule: "0 * * * *"

# Daily at midnight
schedule: "0 0 * * *"

# Every Monday at 2 AM
schedule: "0 2 * * 1"

# First day of each month at midnight
schedule: "0 0 1 * *"

# Every weekday at 9 AM
schedule: "0 9 * * 1-5"
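As a sanity check before deploying, the five-field format can be validated with a short script. This is a minimal sketch, not an official parser: it handles `*`, `*/step`, ranges, lists, and plain numbers, but not month or weekday names like JAN or MON.

```python
# Minimal validator for standard 5-field cron expressions (a sketch only).
FIELD_RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 7)]  # minute, hour, dom, month, dow

def is_valid_cron(expr: str) -> bool:
    fields = expr.split()
    if len(fields) != 5:
        return False
    for field, (lo, hi) in zip(fields, FIELD_RANGES):
        for part in field.split(","):
            body, _, step = part.partition("/")
            if step and not (step.isdigit() and int(step) > 0):
                return False
            if body == "*":
                continue
            start, _, end = body.partition("-")
            bounds = [start, end] if end else [start]
            if not all(b.isdigit() and lo <= int(b) <= hi for b in bounds):
                return False
    return True

print(is_valid_cron("0 9 * * 1-5"))   # True: weekdays at 9 AM
print(is_valid_cron("0 25 * * *"))    # False: hour 25 is out of range
```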

Concurrency Policy

The concurrencyPolicy field controls how the job handles overlapping runs. Defaults to Forbid.
  • Allow: Allows concurrent job runs. Multiple instances can run simultaneously.
  • Forbid: Skips the new run if the previous run is still in progress (the default).
  • Replace: Cancels the currently running job and starts a new one.
Choose Forbid for most use cases to prevent resource contention and data consistency issues. Only use Allow when your job logic explicitly handles concurrent execution safely.
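The effect of each policy on an overlapping run can be illustrated with a small decision function. This is a sketch with hypothetical names, not platform or Kubernetes controller code:

```python
# Sketch of how each concurrencyPolicy value treats a newly scheduled run
# when previous runs are still active. Function and variable names are
# illustrative, not part of any real API.

def on_schedule_fire(policy: str, active_runs: list, new_run: str) -> list:
    if not active_runs:
        return [new_run]                   # nothing running: always start
    if policy == "Allow":
        return active_runs + [new_run]     # run alongside the existing jobs
    if policy == "Forbid":
        return active_runs                 # skip this run entirely
    if policy == "Replace":
        return [new_run]                   # cancel old runs, start fresh
    raise ValueError(f"unknown policy: {policy}")

print(on_schedule_fire("Forbid", ["run-1"], "run-2"))   # ['run-1']
print(on_schedule_fire("Replace", ["run-1"], "run-2"))  # ['run-2']
```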

Advanced CronJob Settings (Optional)

Additional settings to fine-tune your job scheduling behavior:
config: |
  schedule: "0 * * * *"
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 200  # Optional: deadline for starting missed jobs
  successfulJobsHistoryLimit: 3 # Optional: number of successful jobs to keep (default: 3)
  failedJobsHistoryLimit: 1     # Optional: number of failed jobs to keep (default: 1)
  suspend: false                 # Optional: temporarily suspend job scheduling
Starting Deadline Seconds
  • Sets the deadline in seconds for starting the job if it misses its scheduled time
  • Useful for jobs that shouldn’t run if they’re too late (e.g., time-sensitive reports)
  • If not set, jobs will run regardless of how late they are
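The skip decision reduces to a single comparison between lateness and the deadline. A minimal sketch (the real logic lives in the Kubernetes controller):

```python
# Whether a missed run should still start, given startingDeadlineSeconds.
# Illustrative only; timestamps are plain epoch seconds.

def should_start(scheduled_ts: float, now_ts: float, deadline_s=None) -> bool:
    if deadline_s is None:
        return True                            # no deadline: run however late
    return (now_ts - scheduled_ts) <= deadline_s

print(should_start(1000.0, 1100.0, 200))   # True: 100s late, within deadline
print(should_start(1000.0, 1300.0, 200))   # False: 300s late, skipped
```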
History Limits
  • successfulJobsHistoryLimit: Number of completed successful jobs to retain (default: 3)
  • failedJobsHistoryLimit: Number of completed failed jobs to retain (default: 1)
  • Helps manage cluster resources by automatically cleaning up old job pods
  • Set to 0 to disable history retention
Suspend
  • Set to true to temporarily pause job scheduling without deleting the CronJob
  • Useful for maintenance windows or debugging
  • Jobs won’t run while suspended, but the schedule remains configured

Resource Management (Required)

Define compute resources to ensure your jobs have sufficient capacity:
config: |
  schedule: "0 * * * *"
  concurrencyPolicy: Forbid
  resources:
    requests:
      cpu: "100m"       # Minimum CPU (100 millicores)
      memory: "128Mi"   # Minimum memory
    limits:
      cpu: "1"          # Maximum CPU (1 core)
      memory: "1Gi"     # Maximum memory
Set resource requests based on your job’s typical usage and limits based on maximum expected usage. This ensures your job gets scheduled reliably while preventing resource exhaustion.
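To reason about these values numerically, the quantities can be converted to base units. A sketch covering only the suffixes used in this document, not the full Kubernetes quantity grammar:

```python
# Convert the resource quantities above into base units:
# CPU -> cores (float), memory -> bytes (int). Supports only the
# suffixes shown in this document (m, Ki, Mi, Gi).

def parse_cpu(q: str) -> float:
    if q.endswith("m"):
        return int(q[:-1]) / 1000          # millicores
    return float(q)

MEM_SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def parse_memory(q: str) -> int:
    for suffix, factor in MEM_SUFFIXES.items():
        if q.endswith(suffix):
            return int(q[:-len(suffix)]) * factor
    return int(q)                          # plain bytes

print(parse_cpu("100m"))        # 0.1 cores
print(parse_memory("128Mi"))    # 134217728 bytes
```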

Environment Variables

Jobs support the same environment variable configuration as servers. Define environment-specific configuration and secrets:
installations:
  - service: data-processor
    config: |
      schedule: "0 2 * * *"
      concurrencyPolicy: Forbid
      resources:
        requests:
          cpu: "500m"
          memory: "512Mi"
        limits:
          cpu: "2"
          memory: "2Gi"
    env:
      - key: ENVIRONMENT
        value: "production"
      - key: LOG_LEVEL
        value: "info"
      - key: DATABASE_URL
        isSecret: true  # Set value in platform UI
      - key: AWS_ACCESS_KEY_ID
        isSecret: true
Use variable groups to share secrets across installations instead of referencing them individually.
See our guide on environment variables for more details on secret management.
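Inside the job's code, these keys arrive as ordinary environment variables, secret or not. A minimal sketch using the variable names from the example above (the config shape is illustrative):

```python
import os

# Secret and plain values alike are exposed to the job as environment
# variables at runtime; the job code does not distinguish them.

def load_config() -> dict:
    return {
        "environment": os.environ.get("ENVIRONMENT", "development"),
        "log_level": os.environ.get("LOG_LEVEL", "info"),
        "database_url": os.environ["DATABASE_URL"],  # required: fail fast if missing
    }

os.environ.setdefault("DATABASE_URL", "postgres://example")  # placeholder for local runs
print(load_config()["environment"])
```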

Complete Configuration Example

Here’s a comprehensive example of a job service with its installations:
kind: Service
metadata:
  name: data-cleanup
spec:
  type: job
  repo: myorg/data-cleanup
  build:
    context: .
    dockerfile: Dockerfile
---
kind: ServiceInstallation
metadata:
  name: data-cleanup-production
spec:
  service: data-cleanup
  environment: production
  config: |
    schedule: "0 3 * * *"  # Run daily at 3 AM
    concurrencyPolicy: Forbid
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "1"
        memory: "1Gi"
  env:
    - key: ENVIRONMENT
      value: "production"
    - key: RETENTION_DAYS
      value: "30"
    - key: DATABASE_URL
      isSecret: true
    - key: S3_BUCKET
      value: "prod-backups"
  variableGroups:
    - name: aws-prod-creds
---
kind: ServiceInstallation
metadata:
  name: data-cleanup-staging
spec:
  service: data-cleanup
  environment: staging
  config: |
    schedule: "0 */6 * * *"  # Run every 6 hours
    concurrencyPolicy: Replace
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
  env:
    - key: ENVIRONMENT
      value: "staging"
    - key: RETENTION_DAYS
      value: "7"
    - key: DATABASE_URL
      isSecret: true

Build Configuration

Jobs support the same build configurations as servers:

Building from Source

kind: Service
metadata:
  name: batch-processor
spec:
  type: job
  repo: myorg/batch-processor
  build:
    context: .
    dockerfile: Dockerfile
    # Or use Nixpack
    buildpack: nixpack
    command: "npm run build"

Using Pre-built Images

kind: Service
metadata:
  name: database-backup
spec:
  type: job
  registry: docker.io
  imageName: myorg/db-backup

Advanced Configuration Example

Here’s an example using the advanced CronJob settings for a production-critical job that needs careful history management and deadline control:
kind: Service
metadata:
  name: critical-data-sync
spec:
  type: job
  repo: myorg/critical-data-sync
  build:
    context: .
    dockerfile: Dockerfile
---
kind: ServiceInstallation
metadata:
  name: critical-data-sync-production
spec:
  service: critical-data-sync
  environment: production
  config: |
    schedule: "*/15 * * * *"      # Every 15 minutes
    concurrencyPolicy: Forbid     # Never allow overlapping runs
    startingDeadlineSeconds: 120  # Skip if 2+ minutes late
    successfulJobsHistoryLimit: 10 # Keep more history for audit
    failedJobsHistoryLimit: 5     # Keep failed jobs for debugging
    suspend: false                 # Active scheduling
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "4Gi"
  env:
    - key: SYNC_TIMEOUT
      value: "600"  # 10 minute timeout
    - key: ALERT_ON_FAILURE
      value: "true"
This configuration ensures:
  • Jobs won’t pile up if one runs long (Forbid policy)
  • Stale jobs are skipped if system was down (startingDeadlineSeconds)
  • Sufficient history retained for troubleshooting and audit
  • Can be temporarily suspended without deletion if needed

Common Use Cases

Database Maintenance

schedule: "0 2 * * 0"  # Weekly on Sunday at 2 AM
concurrencyPolicy: Forbid

Data Processing

schedule: "*/30 * * * *"  # Every 30 minutes
concurrencyPolicy: Replace  # Cancel old job if still running

Report Generation

schedule: "0 8 * * 1-5"  # Weekdays at 8 AM
concurrencyPolicy: Forbid

Cleanup Tasks

schedule: "0 0 * * *"  # Daily at midnight
concurrencyPolicy: Allow  # Allow multiple cleanup jobs

Monitoring and Debugging

Viewing Job Executions

Monitor your job executions through the Ryvn platform:
  1. Navigate to your job service
  2. Click on the installation
  3. View the CronJob resource in the Explorer
  4. Check individual job run status and logs

Logs

Access logs from job executions through the Logs tab. Each job run creates a new pod with its own log stream.

Metrics

Monitor job performance and success rates through the Metrics tab:
  • Job execution duration
  • Success/failure rates
  • Resource utilization during runs

Managing Job Lifecycle

Manual-Only Jobs

To create a job that only runs when manually triggered, simply omit the schedule field:
config: |
  concurrencyPolicy: Forbid
  resources:
    requests:
      cpu: "250m"
      memory: "256Mi"
Manual-only jobs are deployed as suspended CronJobs and will never run automatically. They can only be executed via the Run Now button in the platform UI or the trigger-job CLI command. This is useful for:
  • Database migrations: Run once when needed, not on a schedule
  • Data backfills: Trigger manually after deploying a new feature
  • Incident response: Run a repair job only when needed
  • Ad-hoc processing: Tasks that don’t have a predictable schedule

Running Jobs On-Demand

Any job — whether scheduled or manual-only — can be triggered immediately using Run Now:
  • Platform UI: Open the job installation and click Run Now in the Actions dropdown
  • CLI: ryvn trigger-job --environment <env> --installation <name>
This creates a one-off Job from the CronJob’s template. The new run appears alongside any scheduled runs in the resource explorer and follows the same concurrency policy, retry, and history settings.

Suspending Jobs

You can temporarily suspend a scheduled job without deleting it by setting suspend: true:
config: |
  schedule: "0 * * * *"
  concurrencyPolicy: Forbid
  suspend: true  # Job won't run until set back to false
This is useful for:
  • Planned maintenance: Suspend jobs during database upgrades
  • Debugging: Pause execution while investigating issues
  • Cost control: Temporarily disable non-critical jobs
  • Testing: Prevent jobs from running in staging environments
To resume a suspended job, set suspend: false in the configuration and redeploy.
To make a scheduled job temporarily manual-only, you can set suspend: true and use Run Now when needed. Alternatively, remove the schedule field entirely to make it permanently manual-only.

Best Practices

  1. Set appropriate timeouts: Ensure your job completes within a reasonable time to avoid blocking subsequent runs
  2. Handle failures gracefully: Implement proper error handling and cleanup in your job code
  3. Use Forbid concurrency policy: Unless you specifically need concurrent runs
  4. Monitor resource usage: Adjust resource limits based on actual usage patterns
  5. Test schedule expressions: Verify your cron expressions before deploying to production
  6. Implement idempotency: Ensure jobs can be safely re-run without side effects
  7. Use maintenance windows: Coordinate job schedules with maintenance windows
  8. Configure history limits: Balance between debugging needs and resource usage
  9. Set startingDeadlineSeconds: For time-sensitive jobs that shouldn’t run if too late
  10. Use suspend instead of delete: When temporarily disabling jobs to preserve configuration
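Idempotency (practice 6) often falls out naturally from how the job is written. A sketch with illustrative data structures: deleting records older than a cutoff is safe to re-run, because a second pass finds nothing left to remove.

```python
# Sketch of idempotent job logic: pruning records older than a cutoff.
# Re-running the same job is a no-op, so a retried or manually re-triggered
# run causes no harm. The dict-of-timestamps store is illustrative only.

def cleanup(records: dict, cutoff_ts: int) -> dict:
    """Keep only records at or newer than the cutoff; re-running changes nothing."""
    return {key: ts for key, ts in records.items() if ts >= cutoff_ts}

data = {"a": 100, "b": 500, "c": 900}
once = cleanup(data, 400)
twice = cleanup(once, 400)
print(once == twice)   # True: a second run is a no-op
```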

Limitations

  • Jobs cannot expose network endpoints (use servers for HTTP services)
  • Maximum job execution time is limited by cluster policies (typically 6 hours)
  • Cron schedules have minute-level precision (no second-level scheduling)
  • Jobs run in UTC timezone by default
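Because schedules are interpreted in UTC, a local target time must be converted before writing the cron expression. A sketch using the standard library; note that daylight saving time shifts the offset twice a year, so a fixed cron hour cannot track a local wall-clock time year-round:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Find the UTC hour corresponding to a local hour on a given date.
# The result differs between standard time and daylight saving time,
# so UTC-based schedules drift relative to local time across DST changes.

def utc_hour(local_hour: int, tz: str, year: int, month: int, day: int) -> int:
    local = datetime(year, month, day, local_hour, tzinfo=ZoneInfo(tz))
    return local.astimezone(ZoneInfo("UTC")).hour

print(utc_hour(9, "America/New_York", 2024, 1, 15))  # 14 (EST, UTC-5)
print(utc_hour(9, "America/New_York", 2024, 7, 15))  # 13 (EDT, UTC-4)
```

So "daily at 9 AM Eastern" would be schedule: "0 14 * * *" in winter but "0 13 * * *" in summer.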