Workflow Scheduling

Type: Topic
Status: Published
Created: Feb 24, 2026 by Dosu Bot
Updated: Apr 27, 2026 by Dosu Bot

Workflow Scheduling#

Workflow Scheduling is a core system feature of DBOS Transact Python that enables periodic, automated execution of workflows according to cron-based schedules. The system provides database-backed schedule persistence, ensuring schedules survive application restarts and remain consistent across distributed deployments. This architecture distinguishes DBOS scheduling from traditional cron implementations by treating schedules as durable, first-class database entities that can be dynamically created, modified, and deleted at runtime.

DBOS supports two distinct scheduling approaches: a decorator-based static scheduling mechanism using @DBOS.scheduled and a dynamic runtime scheduling API centered on DBOS.create_schedule. The dynamic approach represents the recommended and more flexible method, as it provides comprehensive CRUD operations for schedule management, including pause/resume capabilities, backfill support for missed executions, and programmatic schedule control that can respond to application state changes.

The scheduling system leverages croniter for parsing cron expressions with optional seconds-precision support, enabling schedules as granular as once per second. To ensure reliable operation in distributed environments, the system implements multiple coordination mechanisms: randomized jitter up to 10% of sleep time (capped at 10 seconds) prevents thundering herd scenarios, while deterministic idempotency keys prevent duplicate execution even when multiple application instances attempt to schedule the same workflow simultaneously.

Architecture and Implementation#

Database Schema#

The workflow scheduling system stores schedule definitions in the system database's workflow_schedules table, establishing schedules as persistent, queryable database records. This table structure supports both module-level workflow functions and static class methods, enabling flexible schedule organization while maintaining referential integrity with the workflow registry.

The table schema comprises eleven columns:

  • schedule_id (TEXT, primary key): Auto-generated UUID serving as the immutable identifier for the schedule
  • schedule_name (TEXT, unique, not null): User-defined unique name for human-readable schedule identification and management
  • workflow_name (TEXT, not null): Fully-qualified name of the workflow function to invoke, matching the function's registration name in DBOS
  • workflow_class_name (TEXT, nullable): Fully-qualified class name for static class methods; NULL for module-level functions
  • schedule (TEXT, not null): Cron expression defining execution timing, supporting both standard 5-field and extended 6-field (with seconds) formats
  • status (TEXT, not null, default "ACTIVE"): Current operational status, either "ACTIVE" for firing schedules or "PAUSED" for temporarily disabled schedules
  • context (TEXT, not null): Serialized JSON representation of the context object passed to each workflow invocation, enabling parameterized scheduling
  • last_fired_at (TEXT, nullable): ISO 8601 timestamp recording when the schedule last executed successfully, used for automatic backfill calculations
  • automatic_backfill (BOOLEAN, not null, default false): When true, the scheduler automatically backfills missed executions on startup since the last_fired_at time
  • cron_timezone (TEXT, nullable): IANA timezone name (e.g., "America/New_York") for evaluating cron expressions; NULL indicates UTC
  • queue_name (TEXT, nullable): Name of a declared queue to enqueue scheduled workflows to; NULL indicates the internal queue (_dbos_sys_internal)
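
The schema above can be sketched as DDL. This is illustrative only: the real table is created by DBOS migrations in the system database (typically Postgres); SQLite is used here just to keep the sketch self-contained and runnable.

```python
import sqlite3

# Hypothetical DDL mirroring the eleven columns described above.
DDL = """
CREATE TABLE workflow_schedules (
    schedule_id         TEXT PRIMARY KEY,
    schedule_name       TEXT NOT NULL UNIQUE,
    workflow_name       TEXT NOT NULL,
    workflow_class_name TEXT,
    schedule            TEXT NOT NULL,
    status              TEXT NOT NULL DEFAULT 'ACTIVE',
    context             TEXT NOT NULL,
    last_fired_at       TEXT,
    automatic_backfill  BOOLEAN NOT NULL DEFAULT 0,
    cron_timezone       TEXT,
    queue_name          TEXT
)
"""

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
conn.execute(
    "INSERT INTO workflow_schedules "
    "(schedule_id, schedule_name, workflow_name, schedule, context) "
    "VALUES (?, ?, ?, ?, ?)",
    ("a1b2", "hourly-task", "my_app.periodic_task", "0 * * * *", "{}"),
)
row = conn.execute(
    "SELECT status, automatic_backfill FROM workflow_schedules WHERE schedule_name = ?",
    ("hourly-task",),
).fetchone()
# status and automatic_backfill fall back to their defaults: ("ACTIVE", 0)
```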

The database-backed design ensures schedules persist across application restarts, eliminating the need for schedule reregistration after deployment. Additionally, changes to schedules are immediately reflected across all workers in distributed deployments, as the polling mechanism ensures each worker synchronizes with the database state within the configured polling interval.

Scheduler Loop and Execution Model#

The scheduling system employs a hybrid architecture combining a centralized polling loop with distributed per-schedule execution threads. The dynamic scheduler loop polls the database for schedule updates at regular intervals, defaulting to 30 seconds but configurable via the scheduler_polling_interval_sec runtime configuration parameter. This design enables responsive schedule updates while minimizing database load.

Within the scheduler loop, each active schedule spawns a dedicated _ScheduleThread instance that manages the schedule's execution lifecycle independently. This thread-per-schedule model provides isolation between schedules, preventing delays or failures in one schedule from affecting others. Each thread executes the following sequence:

  1. Cron Parsing: The thread parses the cron expression using croniter with seconds support enabled, creating an iterator that calculates future execution times
  2. Time Calculation: The iterator computes the next scheduled execution time based on the cron expression and the current time
  3. Jittered Sleep: The thread sleeps until the scheduled time, applying randomized jitter to prevent thundering herd conditions in distributed deployments
  4. Workflow Enqueueing: Upon waking, the thread enqueues the workflow execution via the internal queue with a deterministic workflow ID

The workflow ID follows the format sched-{schedule_name}-{ISO8601_scheduled_time}, where the schedule name and ISO 8601-formatted scheduled time uniquely identify each execution. This deterministic ID generation ensures exactly-once execution semantics, as the DBOS workflow execution system prevents duplicate workflow IDs through database constraints.
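
One iteration of the sequence above can be sketched roughly as follows. The names (`run_schedule_once`, `next_hourly_fire`) are hypothetical, and a fixed hourly schedule stands in for croniter, which parses arbitrary cron expressions in the real scheduler.

```python
import random
from datetime import datetime, timedelta, timezone

def next_hourly_fire(now: datetime) -> datetime:
    """Stand-in for croniter: next top-of-hour firing of a '0 * * * *' schedule."""
    return now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)

def run_schedule_once(schedule_name: str, enqueue, now: datetime) -> str:
    """One iteration of a per-schedule thread: compute, jitter, enqueue."""
    fire_at = next_hourly_fire(now)                       # steps 1-2: next execution time
    sleep_s = (fire_at - now).total_seconds()
    sleep_s += random.uniform(0, min(0.1 * sleep_s, 10))  # step 3: jitter (sleep omitted)
    workflow_id = f"sched-{schedule_name}-{fire_at.isoformat()}"  # step 4: deterministic ID
    enqueue(workflow_id)
    return workflow_id

enqueued = []
wid = run_schedule_once(
    "hourly-task", enqueued.append, now=datetime(2025, 1, 1, 12, 30, tzinfo=timezone.utc)
)
# wid == "sched-hourly-task-2025-01-01T13:00:00+00:00"
```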

Version Routing: Scheduled workflows are always enqueued to the latest application version, not the version of the executor that created the schedule. This ensures that scheduled executions benefit from the most recent application code, even when older executors are still running. The scheduler queries the system database for the latest version at enqueue time and targets that version for workflow execution.

Distributed Coordination#

Reliable schedule execution across multiple application instances requires careful coordination to prevent duplicate work while maintaining high availability. The scheduling system addresses distributed coordination through three complementary mechanisms:

  1. Randomized Jitter: To prevent all workers from simultaneously attempting to enqueue a scheduled workflow (the thundering herd problem), the system applies a random delay of up to 10% of the calculated sleep time, capped at 10 seconds. This jitter spreads out enqueue attempts across time, reducing contention on the workflow initialization transaction.

  2. Deterministic Idempotency Keys: Each scheduled execution uses a workflow ID derived from the schedule name and scheduled time, producing identical IDs across all workers for the same scheduled event. The DBOS workflow system's uniqueness constraint on workflow IDs ensures that only one worker successfully creates each workflow execution, naturally serializing concurrent attempts.

  3. Pre-Enqueue Status Verification: Before enqueueing a workflow, each scheduler thread verifies the workflow hasn't already been created by checking for an existing workflow with the computed ID. This optimization reduces unnecessary enqueue attempts and associated database transactions for executions already scheduled by another worker.

Together, these mechanisms ensure that scheduled workflows execute exactly once even in the presence of multiple concurrent schedulers, while tolerating worker failures through redundancy.
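
A toy model can show how the deterministic ID plus the uniqueness constraint serializes concurrent workers. The `WorkflowRegistry` class here is hypothetical, standing in for the database's unique constraint on workflow IDs.

```python
from datetime import datetime, timezone

class WorkflowRegistry:
    """Toy stand-in for the workflow table's uniqueness constraint on IDs."""
    def __init__(self):
        self._ids = set()

    def try_enqueue(self, workflow_id: str) -> bool:
        if workflow_id in self._ids:   # pre-enqueue status verification
            return False
        self._ids.add(workflow_id)     # the unique-constraint insert wins
        return True

registry = WorkflowRegistry()
fire_at = datetime(2025, 1, 1, 13, 0, tzinfo=timezone.utc)
wid = f"sched-hourly-task-{fire_at.isoformat()}"

# Three workers compute the same deterministic ID; only the first insert succeeds.
results = [registry.try_enqueue(wid) for _ in range(3)]
# results == [True, False, False]
```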

Dynamic Runtime Scheduling#

Dynamic runtime scheduling represents the primary interface for creating and managing workflow schedules in DBOS. Unlike decorator-based scheduling, which defines schedules statically in code, the dynamic API enables applications to create, modify, and delete schedules programmatically in response to runtime conditions, user requests, or external events. This approach is particularly valuable for multi-tenant applications that require per-customer scheduling or systems where schedule parameters depend on configuration data.

Creating Schedules#

Schedules are created using the DBOS.create_schedule() method, which accepts seven parameters:

  • schedule_name (str): A unique identifier for the schedule. This name must be unique across all schedules in the system and is used for subsequent management operations
  • workflow_fn (ScheduledWorkflow): A reference to the workflow function to invoke. The function must be decorated with @DBOS.workflow() and registered with DBOS before schedule creation
  • schedule (str): A cron expression defining the execution timing, supporting both standard 5-field format (minute, hour, day of month, month, day of week) and extended 6-field format (adding seconds as the first field)
  • context (Any, optional): A JSON-serializable object passed as the second argument to each workflow invocation, enabling parameterized scheduling. Defaults to None
  • automatic_backfill (bool, optional): When True, the scheduler automatically backfills missed executions on startup, executing any schedules that should have fired since last_fired_at. Defaults to False
  • cron_timezone (Optional[str], optional): IANA timezone name (e.g., "America/New_York") for evaluating the cron expression. When None, uses UTC. This allows schedules to respect local time zones (e.g., "9 AM New York time") rather than UTC. Defaults to None
  • queue_name (Optional[str], optional): Name of a declared queue to enqueue scheduled workflows to. If None, uses the internal queue (_dbos_sys_internal). The queue must be declared before being used in a schedule, otherwise a DBOSException is raised. Defaults to None

Scheduled workflows must conform to the signature def workflow_fn(scheduled_time: datetime, context: Any), where the first argument receives the scheduled execution time and the second receives the custom context object. This signature differs from decorator-based schedules, providing greater flexibility for passing application-specific data to each execution.

from datetime import datetime
from typing import Any
from dbos import DBOS

@DBOS.workflow()
def periodic_task(scheduled_time: datetime, context: Any):
    customer_id = context.get("customer_id") if context else None
    DBOS.logger.info(f"Running task scheduled for {scheduled_time}, customer {customer_id}")
    # Perform scheduled work...

DBOS.create_schedule(
    schedule_name="hourly-task",
    workflow_fn=periodic_task,
    schedule="0 * * * *", # Every hour on the hour
    context={"customer_id": 123, "priority": "high"},
    automatic_backfill=True, # Backfill missed executions on startup
    cron_timezone="America/New_York" # Execute at the top of each hour in New York time
)

Before persisting the schedule, the system performs validation: it verifies the cron expression is parseable and ensures the workflow function is registered in the DBOS registry. If validation fails, the method raises a DBOSException describing the error. When called from within a workflow, create_schedule records the operation as a step, making schedule creation part of the workflow's durable execution history.

Schedule Management Operations#

The dynamic scheduling API provides a complete set of CRUD operations for schedule lifecycle management, enabling applications to query, modify, and remove schedules at runtime:

  • DBOS.list_schedules(): Returns a list of all registered schedules with optional filtering by status (e.g., "ACTIVE", "PAUSED"), workflow name, or schedule name prefix. Each returned schedule is a dictionary containing all schedule metadata including the deserialized context object and the queue name.

  • DBOS.get_schedule(name): Retrieves a specific schedule by name, returning None if no schedule with that name exists. The returned schedule dictionary includes the queue name (which will be None if not specified). This method is useful for checking schedule existence and configuration before performing operations.

  • DBOS.pause_schedule(name): Pauses an active schedule without deleting its configuration. Paused schedules do not fire, but their definitions remain in the database for later resumption. This operation updates the schedule's status field to "PAUSED", causing the scheduler loop to stop the associated execution thread during its next polling cycle.

  • DBOS.resume_schedule(name): Resumes a paused schedule, changing its status back to "ACTIVE" and causing the scheduler to restart execution. The schedule begins firing again from the next computed cron time, not retroactively for missed executions during the pause period.

  • DBOS.delete_schedule(name): Permanently removes a schedule from the database. This operation is irreversible and stops all future executions. The method is idempotent: deleting a non-existent schedule is a no-op.

All schedule management operations can be called from within workflows, where they are recorded as durable steps. Each operation also has an async variant (e.g., create_schedule_async, list_schedules_async) for use in async workflows, providing consistent scheduling capabilities across both synchronous and asynchronous execution contexts.
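
The status-flip semantics of pause and resume can be sketched with a toy in-memory model. The function names only mirror the API; the real operations update the status column of the workflow_schedules table.

```python
# Toy in-memory model of pause/resume semantics (hypothetical, not the DBOS API).
schedules = {"hourly-task": {"status": "ACTIVE", "schedule": "0 * * * *"}}

def pause_schedule(name):
    schedules[name]["status"] = "PAUSED"   # scheduler stops the thread on its next poll

def resume_schedule(name):
    schedules[name]["status"] = "ACTIVE"   # fires again from the next cron time

def list_schedules(status=None):
    return [n for n, s in schedules.items() if status is None or s["status"] == status]

pause_schedule("hourly-task")
assert list_schedules(status="PAUSED") == ["hourly-task"]
resume_schedule("hourly-task")
```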

Atomic Schedule Updates#

For applications that maintain a fixed set of schedules defined in code but require them to persist in the database, DBOS.apply_schedules() provides declarative schedule management. This method atomically creates or replaces multiple schedules in a single operation, making it ideal for maintaining static schedule definitions in code while storing them durably in the database.

The method accepts a list of schedule specifications, each containing schedule_name, workflow_fn, schedule (cron expression), context, and optionally automatic_backfill, cron_timezone, and queue_name:

DBOS.apply_schedules([
    {
        "schedule_name": "customer-sync",
        "workflow_fn": sync_customer_data,
        "schedule": "*/10 * * * *", # Every 10 minutes
        "context": {"region": "us-east-1"},
        "automatic_backfill": True,
        "cron_timezone": "America/New_York"
    },
    {
        "schedule_name": "daily-report",
        "workflow_fn": generate_report,
        "schedule": "0 8 * * *", # Daily at 8 AM
        "context": None,
        "cron_timezone": "America/Los_Angeles"
    },
])

The operation is atomic and validates all schedules before making any changes: it checks each cron expression for validity and verifies each workflow function is registered. If any validation fails, the entire operation is aborted. Within a database transaction, the method deletes any existing schedule with the same name and creates the new schedule definition, ensuring the final database state exactly matches the provided specification.

This declarative approach is particularly useful during application initialization, allowing schedules to be defined alongside other application configuration while maintaining the benefits of database persistence. However, apply_schedules cannot be called from within a workflow and is intended for use in application startup code.

Backfilling and Manual Triggering#

Two specialized operations enable non-standard schedule execution patterns: backfilling to handle missed executions and manual triggering for ad-hoc runs.

Automatic Backfill#

When the automatic_backfill parameter is set to True during schedule creation, the scheduler automatically recovers missed executions when it starts up. This feature is particularly useful for ensuring consistent task execution after planned or unplanned downtime.

On startup, for each schedule with automatic backfill enabled:

  1. The scheduler checks if last_fired_at is set (indicating at least one prior execution)
  2. If last_fired_at exists and is in the past, the scheduler automatically invokes backfill_schedule() with the time range from last_fired_at to the current time
  3. All scheduled executions that should have occurred during the downtime are enqueued
  4. The deterministic workflow ID format ensures any executions that actually ran are skipped

After successful execution, the scheduler updates last_fired_at to track when each schedule fires, enabling future backfills to calculate the correct time range.

# Enable automatic backfill for critical schedules
DBOS.create_schedule(
    schedule_name="critical-hourly-sync",
    workflow_fn=sync_data,
    schedule="0 * * * *", # Every hour
    context=None,
    automatic_backfill=True # Catch up on missed executions after downtime
)
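
The backfill window computation can be sketched as follows, again using a fixed hourly schedule in place of croniter; all names here are hypothetical.

```python
from datetime import datetime, timedelta, timezone

def missed_hourly_fires(last_fired_at: datetime, now: datetime):
    """Fire times strictly after last_fired_at, up to now, for '0 * * * *'.
    A stand-in for iterating the schedule's cron expression with croniter."""
    t = last_fired_at.replace(minute=0, second=0, microsecond=0)
    missed = []
    while True:
        t += timedelta(hours=1)
        if t > now:
            break
        missed.append(f"sched-hourly-task-{t.isoformat()}")
    return missed

last = datetime(2025, 1, 1, 10, 0, tzinfo=timezone.utc)
now = datetime(2025, 1, 1, 13, 30, tzinfo=timezone.utc)
ids = missed_hourly_fires(last, now)
# Three missed firings (11:00, 12:00, 13:00); any that actually ran are
# skipped at enqueue time by the workflow ID uniqueness constraint.
```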

Manual Backfilling#

DBOS.backfill_schedule() enqueues all workflow executions that would have occurred between specified start and end times according to the schedule's cron expression. This capability is essential for recovering from application downtime, schedule creation delays, or deliberate schedule pauses. The method accepts three parameters: the schedule name and datetime objects for the start (exclusive) and end (exclusive) of the backfill window. Both timezone-aware and timezone-naive datetime objects are supported; timezone-naive datetimes are treated as UTC.

from datetime import datetime, timezone

# Backfill all missed hourly executions from January 1-2, 2025
DBOS.backfill_schedule(
    "hourly-task",
    start=datetime(2025, 1, 1, tzinfo=timezone.utc),
    end=datetime(2025, 1, 2, tzinfo=timezone.utc),
)

Critically, backfilling uses the same deterministic workflow ID format as the live scheduler: sched-{schedule_name}-{scheduled_time}. This means already-executed times are automatically skipped due to the workflow ID uniqueness constraint, making backfill idempotent and safe to retry. If some executions in the window already occurred while the application was running, backfill enqueues only the truly missed executions.

The method returns a list of WorkflowHandle objects for the enqueued executions, allowing callers to await their completion or check their status. Backfill cannot be called from within a workflow to avoid complicating workflow recovery semantics.

Manual Triggering#

DBOS.trigger_schedule() immediately enqueues a single execution of a scheduled workflow at the current time, bypassing the cron schedule. This operation is useful for testing schedules, responding to urgent events, or providing user-initiated execution of scheduled tasks.

# Trigger the schedule immediately
handle = DBOS.trigger_schedule("hourly-task")
result = handle.get_result() # Wait for completion

Unlike backfill, triggered executions use a distinct workflow ID format: sched-{schedule_name}-trigger-{current_time}. The -trigger- segment distinguishes manual triggers from cron-scheduled executions in workflow history and prevents conflicts with scheduled execution IDs. The method returns a single WorkflowHandle for the triggered execution.

Like backfill, triggering cannot be called from within a workflow and is intended for use in request handlers, management scripts, or other external contexts.

Static Decorator-Based Scheduling#

The decorator-based scheduling approach using @DBOS.scheduled represents the original scheduling mechanism in DBOS and is now deprecated in favor of dynamic scheduling. However, it remains supported for backward compatibility and continues to function alongside the dynamic scheduling system.

Decorator-based schedules differ from dynamic schedules in several key aspects. The schedule is defined at compile time through a decorator applied to the workflow function:

from datetime import datetime
from dbos import DBOS

@DBOS.scheduled('* * * * *') # Run once every minute
@DBOS.workflow()
def example_scheduled_workflow(scheduled_time: datetime, actual_time: datetime):
    DBOS.logger.info(f"Scheduled for {scheduled_time}, actually started at {actual_time}")

The function signature for decorator-based schedules is (scheduled_time: datetime, actual_time: datetime), contrasting with the (scheduled_time: datetime, context: Any) signature used by dynamic schedules. The first parameter receives the time the execution was scheduled for, while the second receives the actual start time, allowing workflows to detect and handle delays between scheduled and actual execution times.

Under the hood, the decorator registers a background poller thread during DBOS initialization. This thread runs independently of the database-backed dynamic scheduler, executing its own cron loop and enqueueing workflows via the internal queue. The workflow ID format for decorator schedules uses the function name rather than a schedule name: sched-{workflow_name}-{scheduled_time}.

Decorator schedules have significant limitations compared to dynamic schedules:

  • No Persistence: Schedules exist only in code and do not persist in the database. Removing or commenting out the decorator stops the schedule.
  • No Runtime Management: Decorator schedules cannot be paused, resumed, or deleted through the schedule management API.
  • No Context Passing: The fixed function signature provides no mechanism for passing custom context data to each execution.
  • Limited Observability: Decorator schedules do not appear in list_schedules() results and cannot be queried through the API.

For new applications, dynamic scheduling should be preferred due to its superior flexibility, manageability, and integration with the database-backed system architecture.

Cron Schedule Syntax#

DBOS workflow schedules use cron expressions to define execution timing, leveraging croniter for parsing with support for both standard and extended formats. The system accepts cron expressions with 5 or 6 fields, where 6-field expressions include seconds as the first field, enabling sub-minute scheduling precision.

The cron expression format is:

┌────────────── second (optional, 0-59)
│ ┌──────────── minute (0-59)
│ │ ┌────────── hour (0-23)
│ │ │ ┌──────── day of month (1-31)
│ │ │ │ ┌────── month (1-12)
│ │ │ │ │ ┌──── day of week (0-6, Sunday=0)
│ │ │ │ │ │
* * * * * *

Each field accepts:

  • Wildcard (*): Matches any value
  • Specific values: Single numbers (e.g., 5)
  • Ranges: Two numbers separated by a hyphen (e.g., 1-5)
  • Lists: Comma-separated values (e.g., 1,3,5)
  • Steps: Ranges or wildcards with /n for every nth value (e.g., */15 for every 15 units)
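
A simplified sketch of how a single cron field expands to its matching values (croniter implements the full grammar, including month and weekday names, so this is illustrative only):

```python
def expand_field(field: str, lo: int, hi: int) -> set:
    """Expand one cron field ('*', '5', '1-5', '1,3,5', '*/15') to its values."""
    values = set()
    for part in field.split(","):
        step = 1
        if "/" in part:
            part, step_s = part.split("/")
            step = int(step_s)
        if part == "*":
            start, end = lo, hi
        elif "-" in part:
            start, end = map(int, part.split("-"))
        else:
            start = end = int(part)
        values.update(range(start, end + 1, step))
    return values

# '*/15' in the minute field matches minutes 0, 15, 30, 45
assert expand_field("*/15", 0, 59) == {0, 15, 30, 45}
assert expand_field("1,3,5", 0, 6) == {1, 3, 5}
```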

Common scheduling patterns:

| Expression | Description | Frequency |
| --- | --- | --- |
| `* * * * *` | Every minute | 1,440 times/day |
| `*/5 * * * *` | Every 5 minutes | 288 times/day |
| `0 * * * *` | Hourly at :00 | 24 times/day |
| `0 0 * * *` | Daily at midnight | Once/day |
| `0 9-17 * * 1-5` | 9 AM-5 PM on weekdays | 9 times/day, Mon-Fri |
| `30 */2 * * *` | Every 2 hours at :30 past | 12 times/day |
| `0 0 1 * *` | Monthly on the 1st at midnight | Once/month |

Seconds-precision scheduling enables high-frequency tasks:

# Execute every 30 seconds
DBOS.create_schedule(
    schedule_name="high-frequency-monitor",
    workflow_fn=monitor_system,
    schedule="*/30 * * * * *", # Note the 6 fields
    context=None
)

For comprehensive cron syntax documentation, see crontab.guru or the GitLab cron syntax guide. Note that DBOS's seconds support is an extension beyond standard Unix cron.

Use Cases and Examples#

Workflow scheduling in DBOS enables a variety of application patterns, from simple periodic maintenance to sophisticated multi-tenant scheduling systems.

Single-Schedule Application#

A basic application might use scheduling for periodic maintenance tasks:

from datetime import datetime
from typing import Any
from dbos import DBOS

@DBOS.workflow()
def cleanup_old_data(scheduled_time: datetime, context: Any):
    """Remove records older than 30 days."""
    DBOS.logger.info(f"Starting cleanup at {scheduled_time}")
    # Perform cleanup operations...

# Create the schedule during application initialization
DBOS.create_schedule(
    schedule_name="daily-cleanup",
    workflow_fn=cleanup_old_data,
    schedule="0 2 * * *", # Daily at 2 AM
    context=None
)

Multi-Tenant Scheduling#

Applications serving multiple customers can create per-customer schedules dynamically:

from datetime import datetime
from dbos import DBOS

@DBOS.workflow()
def sync_customer_data(scheduled_time: datetime, customer_id: str):
    """Synchronize data for a specific customer."""
    DBOS.logger.info(f"Syncing data for customer {customer_id}")
    # Perform customer-specific synchronization...

def register_customer(customer_id: str, sync_frequency: str):
    """Create a schedule when a customer registers."""
    DBOS.create_schedule(
        schedule_name=f"customer-{customer_id}-sync",
        workflow_fn=sync_customer_data,
        schedule=sync_frequency, # e.g., "0 * * * *" for hourly
        context=customer_id
    )
    DBOS.logger.info(f"Created schedule for customer {customer_id}")

def unregister_customer(customer_id: str):
    """Remove the schedule when a customer unregisters."""
    DBOS.delete_schedule(f"customer-{customer_id}-sync")

Declarative Schedule Configuration#

Applications with a fixed set of schedules can define them declaratively:

from dbos import DBOS

# Define all schedules in one place
ALL_SCHEDULES = [
    {
        "schedule_name": "hourly-metrics",
        "workflow_fn": collect_metrics,
        "schedule": "0 * * * *",
        "context": {"type": "metrics"}
    },
    {
        "schedule_name": "daily-reports",
        "workflow_fn": generate_reports,
        "schedule": "0 8 * * *",
        "context": {"type": "reports"}
    },
    {
        "schedule_name": "weekly-maintenance",
        "workflow_fn": perform_maintenance,
        "schedule": "0 3 * * 0", # Sundays at 3 AM
        "context": {"type": "maintenance"}
    },
]

# Apply all schedules at startup
DBOS.launch()
DBOS.apply_schedules(ALL_SCHEDULES)

Relevant Code Files#

| File Path | Description |
| --- | --- |
| dbos/_scheduler.py | Core scheduler implementation including _ScheduleThread, dynamic_scheduler_loop, backfill_schedule, and trigger_schedule |
| dbos/_scheduler_decorator.py | Decorator-based static scheduling implementation with the @DBOS.scheduled decorator |
| dbos/_dbos.py | Public API methods for schedule management (lines 1743-2071) |
| dbos/_sys_db.py | Database operations for schedules (lines 3062-3220) including create_schedule, list_schedules, pause_schedule, resume_schedule, and delete_schedule |
| dbos/_schemas/system_database.py | Database schema definition for the workflow_schedules table (lines 200-210) |
| tests/test_scheduler.py | Comprehensive test suite demonstrating CRUD operations, backfilling, triggering, pause/resume, and async support |

Limitations and Constraints#

While the workflow scheduling system is comprehensive, developers should be aware of several architectural constraints and design decisions:

Instance Method Restriction#

Instance methods on DBOSConfiguredInstance classes cannot be scheduled. Only module-level functions and static class methods are supported for scheduling. This restriction exists because scheduled workflows execute without an explicit invocation context, and the system cannot determine which configured instance to use. For class-based workflows that need scheduling, define the workflow as a @staticmethod or extract it to a module-level function.

from datetime import datetime
from typing import Any
from dbos import DBOS, DBOSConfiguredInstance

# ❌ This will fail - instance method cannot be scheduled
class MyService(DBOSConfiguredInstance):
    @DBOS.workflow()
    def periodic_task(self, scheduled_time: datetime, context: Any):
        pass

# ✅ This works - static method can be scheduled
class MyService:
    @staticmethod
    @DBOS.workflow()
    def periodic_task(scheduled_time: datetime, context: Any):
        pass

Function Signature Requirements#

Scheduled workflows must use specific function signatures depending on the scheduling method. Dynamic schedules require (datetime, Any) while decorator schedules require (datetime, datetime). Violating these signatures will cause runtime errors when the scheduler attempts to invoke the workflow. The type annotations are not strictly enforced but should match the expected signatures for clarity.

Workflow Registration Requirement#

Scheduled workflows must be decorated with @DBOS.workflow() and registered with DBOS before creating a schedule. Attempting to schedule an unregistered function will raise a DBOSException. Ensure workflow decorators are applied and modules containing workflows are imported before calling schedule creation methods.

Invocation Context Restrictions#

The methods backfill_schedule, trigger_schedule, and apply_schedules cannot be called from within a workflow. These operations are designed for external control and are incompatible with workflow recovery semantics. Attempting to call them from a workflow will raise a DBOSException. Other schedule management operations (create_schedule, pause_schedule, etc.) can be called from workflows and are recorded as durable steps.

Concurrency and Overlap#

The scheduling system does not prevent overlapping executions. If a scheduled workflow execution takes longer than the interval between scheduled times, multiple instances of the workflow will run concurrently. Applications that require serial execution should implement coordination mechanisms within the workflow (e.g., database locks, semaphores) or adjust the schedule frequency to ensure adequate time for completion.

Timezone Handling#

By default, cron expressions are evaluated in UTC, and the scheduled_time parameter passed to workflows is timezone-aware with UTC timezone. However, schedules can specify a custom timezone using the cron_timezone parameter, which accepts IANA timezone names (e.g., "America/New_York", "Europe/London", "Asia/Tokyo").

When a timezone is specified, the cron expression is evaluated in that timezone, allowing schedules to respect local time zones and automatically adjust for daylight saving time changes:

# Schedule at 9 AM New York time, every weekday
DBOS.create_schedule(
    schedule_name="business-hours-task",
    workflow_fn=daily_report,
    schedule="0 9 * * 1-5", # 9 AM Monday-Friday
    context=None,
    cron_timezone="America/New_York" # Automatically adjusts for DST
)

The scheduled_time parameter passed to the workflow function is always in UTC, regardless of the cron_timezone setting. Applications that need to perform timezone-aware operations should convert this timestamp to the appropriate timezone within the workflow.
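
For example, the UTC scheduled_time can be converted to the schedule's local timezone with the standard-library zoneinfo module. This assumes a schedule created with cron_timezone="America/New_York"; the sample timestamp is a hypothetical winter (EST) firing.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The workflow always receives scheduled_time in UTC; convert it inside the
# workflow whenever local-time logic is needed.
scheduled_time = datetime(2025, 1, 15, 14, 0, tzinfo=timezone.utc)
local = scheduled_time.astimezone(ZoneInfo("America/New_York"))
# local.hour == 9: the "9 AM New York" firing, recovered from its UTC timestamp
```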

For backfill operations, timezone-aware datetime objects can be passed to backfill_schedule(), and the method will correctly convert them to the schedule's timezone for cron evaluation:

from datetime import datetime, timezone

# Backfill with timezone-aware times
DBOS.backfill_schedule(
    "business-hours-task",
    start=datetime(2025, 1, 1, tzinfo=timezone.utc),
    end=datetime(2025, 1, 2, tzinfo=timezone.utc),
)

See Also#

  • Workflows: Scheduled workflows are standard DBOS workflows invoked automatically by the scheduler. Understanding workflow semantics, recovery, and durability is essential for effective schedule design.

  • Workflow Queues: Scheduled executions can be directed to custom declared queues using the queue_name parameter for concurrency management. By default, scheduled workflows use the internal queue (_dbos_sys_internal). Understanding queue behavior helps explain scheduling latency and execution ordering.

  • Application Versioning: Scheduled workflows always execute on the latest application version. The system provides APIs for managing application versions:

    • DBOS.list_application_versions() returns all versions, newest first
    • DBOS.get_latest_application_version() returns the latest version
    • DBOS.set_latest_application_version(version_name) updates a version's timestamp to make it latest

    The VersionInfo type is available in the public API for working with version metadata, containing version_id, version_name, version_timestamp, and created_at fields.

  • System Database: Schedule persistence and management relies on the DBOS system database. Database connectivity issues affect schedule execution and management operations.

  • Workflow IDs and Idempotency: The deterministic workflow ID format used by scheduling is a key mechanism for exactly-once execution semantics and deduplication.

  • DBOSConfiguredInstance: Class-based workflow organization has specific scheduling restrictions, requiring static methods rather than instance methods for scheduled workflows.

  • DBOS Client: External applications can manage schedules using the DBOS Client, which provides schedule management APIs that accept workflow names as strings rather than function references.
