Building Multi-Tenant Durable Execution with Dynamic Workflows


Introduction

When we launched Workers eight years ago, it was a direct-to-developers platform. Over the years, we expanded the ecosystem so that platforms could let their own customers ship code through multi-tenant applications. Today, we introduce Dynamic Workflows, bridging durable execution and dynamic deployment. This guide walks you through setting up Dynamic Workflows for your platform, so that each tenant can run their own workflow code in an isolated environment with durable execution.

Source: blog.cloudflare.com


Step-by-Step Guide

Step 1: Set Up Dynamic Workers for Compute

Dynamic Workers allow you to inject code at runtime and get an isolated, sandboxed Worker in milliseconds. This is the foundation for per-tenant compute. Use the Dynamic Workers API to upload or reference tenant-specific code. For example:

import { DynamicWorker } from '@cloudflare/dynamic-worker';

// Spin up an isolated Worker from tenant-supplied source at runtime.
const myWorker = new DynamicWorker({
  code: tenantCode, // TypeScript source string from your tenant
  bindings: { storage: tenantStorage } // tenantStorage: whatever storage binding you expose (placeholder)
});
const response = await myWorker.fetch(request);

This ensures each tenant runs in its own isolated context, on the same machine, with single-digit millisecond startup.
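Before creating a Dynamic Worker, the platform has to decide which tenant a request belongs to. A minimal sketch, assuming tenants are served on subdomains like `acme.platform.example` (the hostname scheme here is an assumption of this example, not part of any API):

```typescript
// Derive the tenant id from the request URL's hostname.
function extractTenant(url: string): string | null {
  const host = new URL(url).hostname;
  const parts = host.split('.');
  // Require at least `tenant.platform.example`; shorter hosts carry no tenant label.
  return parts.length >= 3 ? parts[0] : null;
}
```

The returned id can then key the tenant's code, bindings, and storage throughout the rest of this guide.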

Step 2: Provision Per-Tenant Storage with Durable Object Facets

Durable Object Facets extend the same idea to storage. Each tenant gets its own SQLite database, spun up on demand. Use facets to isolate tenant data:

import { DurableObjectFacet } from '@cloudflare/durable-object-facet';

// Each facet is an isolated per-tenant database, created on demand.
const tenantDb = new DurableObjectFacet({
  name: `tenant-${tenantId}`,
  sql: true // SQLite-backed storage; use 'kvs' for key-value instead
});
await tenantDb.sql.execute('CREATE TABLE ...'); // tenant schema goes here

The platform acts as a supervisor, managing the storage lifecycle on behalf of its tenants.
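The supervisor pattern can be sketched in plain TypeScript. `TenantDb` and `StorageSupervisor` below are illustrative stand-ins for a facet handle and its manager, not the facet API itself:

```typescript
// Stand-in for a per-tenant storage handle (assumed shape).
interface TenantDb {
  name: string;
  dispose(): void;
}

class StorageSupervisor {
  private dbs = new Map<string, TenantDb>();

  // Create the tenant's database on first use, reuse it afterwards.
  getOrCreate(tenantId: string): TenantDb {
    let db = this.dbs.get(tenantId);
    if (!db) {
      db = { name: `tenant-${tenantId}`, dispose: () => {} };
      this.dbs.set(tenantId, db);
    }
    return db;
  }

  // Tear down a tenant's storage when they are offboarded.
  delete(tenantId: string): boolean {
    this.dbs.get(tenantId)?.dispose();
    return this.dbs.delete(tenantId);
  }
}
```

The key property is lazy creation: no storage exists for a tenant until their first request, which is what makes millions of tenants affordable.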

Step 3: Manage Version Control with Artifacts

Artifacts provide a Git-native, versioned filesystem for each tenant. Create millions of artifacts, one per agent, session, or tenant. Use the Artifacts API to store and retrieve code or data:

import { Artifact } from '@cloudflare/artifacts';

// Snapshot the tenant's workflow code into their artifact.
const tenantArtifact = new Artifact({
  name: `pipeline-${tenantId}`,
  files: { 'workflow.ts': tenantWorkflowCode }
});
await tenantArtifact.commit(); // each commit becomes an addressable version

This gives each tenant a dedicated, versioned filesystem for their workflows.
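The versioning behavior described above can be modeled with a small in-memory sketch. `VersionedArtifact` is illustrative of the Git-style commit/rollback semantics, not the Artifacts API:

```typescript
type FileMap = Record<string, string>;

class VersionedArtifact {
  private commits: FileMap[] = [];

  // Store an immutable snapshot of the files; return its version number.
  commit(files: FileMap): number {
    this.commits.push({ ...files });
    return this.commits.length - 1;
  }

  // Read a file at a given version, defaulting to the latest commit.
  get(file: string, version?: number): string | undefined {
    const v = version ?? this.commits.length - 1;
    return this.commits[v]?.[file];
  }
}
```

Because older commits remain addressable, rolling a tenant back to a known-good workflow is just a read at an earlier version.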

Step 4: Define Dynamic Workflows for Each Tenant

Workflows is our durable execution engine. Instead of binding a single class per deploy, use Dynamic Workflows to load tenant-specific workflow code at runtime. Create a workflow handler that fetches the tenant's code from an Artifact and executes it:

import { Workflow } from '@cloudflare/workflows';

// Build a workflow from the tenant's versioned code, loaded at runtime.
const tenantWorkflow = new Workflow({
  code: await artifact.get('workflow.ts'),
  steps: {
    step1: async (event) => { /* tenant-defined step logic */ },
    step2: async (event) => { /* tenant-defined step logic */ }
  }
});
const instance = await tenantWorkflow.start({ input });

The workflow engine turns your run(event, step) function into a program that survives failures, sleeps for hours, waits for events, and resumes exactly where it left off.
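A synchronous, in-memory stand-in illustrates the checkpointing that makes this work. The real engine persists checkpoints durably and its steps are async; `StepRunner` here only models the replay behavior:

```typescript
class StepRunner {
  constructor(private checkpoints: Map<string, unknown>) {}

  // Run a named step once; on replay, return the checkpointed result
  // instead of re-executing the step body.
  do<T>(name: string, fn: () => T): T {
    if (this.checkpoints.has(name)) {
      return this.checkpoints.get(name) as T;
    }
    const result = fn();
    this.checkpoints.set(name, result); // persist before moving on
    return result;
  }
}
```

If the process crashes between steps, a new runner with the same checkpoint store skips everything already completed, which is exactly the "resumes where it left off" guarantee described above.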


Step 5: Wire Everything Together in a Platform Handler

Combine dynamic compute, storage, and workflows in a single request handler. When a tenant action triggers a workflow, look up the tenant's code, create a dynamic worker for pre-processing (optional), then start a Dynamic Workflow instance. Use Durable Object Facets for state and Artifacts for code storage. Example structure:

export default {
  async fetch(request, env) {
    const tenantId = extractTenant(request);
    // Load the tenant's current workflow code from their versioned artifact.
    const artifact = env.ARTIFACTS.get(`pipeline-${tenantId}`);
    const workflowCode = await artifact.get('workflow.ts');

    // optionally pre-process with a dynamic worker
    // ...

    // Bind the tenant's isolated facet database to their workflow.
    const workflow = new Workflow({
      code: workflowCode,
      bindings: { db: env.FACETS.get(`tenant-${tenantId}`) }
    });
    const instance = await workflow.start({ request: request.url });
    return new Response(JSON.stringify({ id: instance.id }), {
      headers: { 'Content-Type': 'application/json' }
    });
  }
};

Step 6: Handle Workflow Lifecycle and Recovery

Workflows automatically persists execution state. To observe an instance's outcome or react to failures, use the instance.result() method or set up webhooks. For long-running workflows, use durable sleeps and event waits inside your run(event, step) function:

// Inside run(event, step): durable timers and waits are issued on the step context.
await step.sleep('P1D'); // sleep one day; survives restarts
const event = await step.waitForEvent('payment_received', { timeout: 'PT1H' }); // wait up to one hour

Each step is checkpointed, so if a worker crashes, the workflow resumes from the last completed step.
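When a step does fail, retries are typically spaced with exponential backoff before the workflow gives up. As an illustrative sketch (the base delay and cap below are assumptions for the example, not engine defaults):

```typescript
// Delay before retry attempt `attempt` (0-based): doubles each time, capped.
function backoffMs(attempt: number, baseMs = 1000, capMs = 60_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```

So the first retry waits 1 s, the fourth waits 8 s, and delays never exceed one minute, keeping transient failures cheap while bounding the wait for persistent ones.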

