
Observability: Reading Inngest's Run Logs

You've built your functions. You've deployed to production. Now comes the part that separates a hobby project from a production system: knowing what's happening.

Background jobs fail silently. A cron function misses a run. A workflow gets stuck waiting for an event that will never arrive. A batch job starts timing out on the 10,000th user. Without observability, you find out about these problems when a user complains, a report is missing, or an engineer investigates a database that looks wrong.

With Inngest's observability layer, you find out before they do — and you have everything you need to fix it without guesswork.

This final article covers every observability tool Inngest provides: the metrics dashboard, the waterfall trace view, the Insights SQL interface, bulk operations, and external integrations like Datadog.


Quick Reference

Start here when something breaks:

  1. Functions list → failure rate column → which function is failing?
  2. Failed Functions chart → drill into the top offenders
  3. Runs tab with Status: Failed filter → find specific failing runs
  4. Click any run → waterfall trace → which step, which error?
  5. Fix the bug → Replay the run

Health check at a glance: Functions page → Function Status pie chart → should be overwhelmingly green

Proactive alerting: Datadog integration → set alerts on failure rate thresholds

Ad hoc investigation: Insights tab → SQL query over your events and runs


What You Need to Know First

You should understand:

  • What a function run is and what statuses it can have
  • What steps are and how they relate to a run
  • The difference between a retry (within a run) and a replay (a new run)

What We'll Cover in This Article

By the end of this guide, you'll understand:

  • The Functions list page — your first-stop health dashboard
  • The seven function metric charts and what each tells you
  • The waterfall trace view — reading parallel and sequential step timelines
  • The Events tab in Inngest Cloud — searching and inspecting event history
  • Replaying failed runs after fixing a bug
  • Bulk cancellation — stopping many stuck runs at once
  • The Insights feature — SQL queries over your events and runs data
  • The Datadog integration — centralised external monitoring and alerting

What We'll Explain Along the Way

  • What "backlog" means and why it grows
  • The difference between steps throughput and runs throughput
  • Global Search — finding a specific run by event ID or run ID

Part 1: The Functions List — Your Health Dashboard

Navigate to Inngest Cloud → your environment → Functions. This is your first-stop view for production health.

The Functions list gathers the essential health signals for every function in one place, letting you spot a surge in errors or a drop in processing volume quickly.

At a glance you can scan:

  • Trigger — what event or cron schedule activates this function
  • Failure rate — percentage of runs that have failed in the current time window
  • Volume — number of runs started in the time window

The failure rate column is the single most important column on this page. A function at 0% failure rate needs no attention. A function at 15% failure rate is actively losing work. A function at 100% failure rate has a systematic bug.

Functions list — what each column is telling you:

handle-user-signup    user/account.created   0.2%   1,240/hr   → Healthy
send-weekly-digest    cron: 0 9 * * FRI      0.0%   52,000     → Healthy
process-payment       order/placed           8.4%   890/hr     → ⚠️ Investigate
send-contract-email   contract/approved      100%   3/hr       → 🚨 Broken

When you spot a function with an elevated failure rate, click it to open the function detail view — seven charts that give you the full picture.


Part 2: Seven Function Metric Charts

Each function's detail page contains seven charts, filterable by time range (last 1 hour, 24 hours, 7 days, or 30 days) and by app.

Chart 1: Function Status

A pie chart showing runs broken down by status: Completed, Failed, Cancelled.

The Function Status chart provides a snapshot of the number of Function runs grouped by status. This chart is the quickest way to identify an unwanted rate of failures at a given moment.

In a healthy function, this chart is almost entirely green (Completed). Any meaningful red slice (Failed) warrants investigation. A large grey slice (Cancelled) might indicate the cancelOn feature is triggering frequently — which may be expected (cart abandonment flows) or unexpected (a logic error).
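
If you're not sure whether a large Cancelled slice is expected, check whether the function defines cancelOn. Below is a minimal sketch of the expected case — a cart-abandonment follow-up that should be cancelled whenever the customer checks out. The event names, payload fields, and the ./client module are hypothetical:

import { inngest } from "./client"; // hypothetical Inngest client module

export const cartAbandonmentFollowUp = inngest.createFunction(
  {
    id: "cart-abandonment-follow-up",
    cancelOn: [
      {
        event: "cart/checkout.completed",
        // Only cancel the run whose cart matches the checkout event
        if: "event.data.cartId == async.data.cartId",
      },
    ],
  },
  { event: "cart/abandoned" },
  async ({ event, step }) => {
    await step.sleep("wait-before-reminder", "2h");
    await step.run("send-reminder", async () => {
      // The reminder email would be sent here; cancelled runs never reach this step
      return { remindedCartId: event.data.cartId };
    });
  }
);

In a flow like this, a sizeable Cancelled slice simply means customers are checking out — which is exactly what you want.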

Chart 2: Failed Functions

The Failed Functions chart displays the top six failing functions and how often each is failing. Use it to spot an elevated failure rate and jump to the affected runs via the "View all" button.

This cross-function chart is useful on the dashboard overview level — it tells you which of your functions are failing most frequently right now, ranked by failure count. If you see process-payment climbing this list, you know where to focus.

Chart 3: Total Runs Throughput

The Total Runs Throughput chart is a line chart showing the rate at which new function runs are started, per app — how quickly new work is being created and picked up.

Use this chart to:

  • Verify a deploy worked — did throughput drop to zero after your deploy? The sync may have failed
  • Spot traffic spikes — a sudden spike in runs might mean a webhook fired more than expected
  • Confirm fan-out is working — if you fan out to 10,000 users, you should see 10,000 runs spike

Chart 4: Total Steps Throughput

The Total steps throughput chart represents the rate at which steps are executed, grouped by the selected Apps.

This chart is useful for assessing flow control settings such as concurrency limits. If your concurrency is set to 50 and you see exactly 50 steps per second consistently (never more, never less), your queue has a backlog and 50 is your bottleneck — you may want to raise the limit.
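
For reference, the ceiling being measured here is set in the function configuration. A minimal sketch, assuming a limit of 50 — the function id is illustrative and ./client is a hypothetical client module:

import { inngest } from "./client"; // hypothetical Inngest client module

export const processPayment = inngest.createFunction(
  {
    id: "process-payment",
    // If steps throughput sits flat at exactly 50/s, this is the ceiling to revisit
    concurrency: { limit: 50 },
  },
  { event: "order/placed" },
  async ({ event, step }) => {
    await step.run("charge-card", async () => {
      // Payment logic would go here
      return { charged: true, orderId: event.data.orderId };
    });
  }
);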

Chart 5: Backlog

The backlog chart shows how many runs are queued but not yet executing — waiting for an available concurrency slot.

A healthy system has a near-zero backlog. A growing backlog means work is arriving faster than it can be processed. Possible causes: concurrency limit too low, workers too slow, external dependency rate-limiting you, or a burst of events from a fan-out.

Diagnosing a growing backlog:

Backlog growing steadily → concurrency limit too low for current load → raise the limit
Backlog growing then flattening → rate limit from external service → add RetryAfterError
Backlog sudden spike then draining → one-time burst event (fan-out) → this is normal
Backlog never drains → systematic slowness in steps → investigate step durations

Chart 6 & 7: Step Duration Percentiles

These charts show the p50, p90, and p99 step execution times. P50 is the median — half of steps take longer than this. P99 is the 99th percentile — only 1% of steps take longer.

These charts answer: "Are my steps getting slower?" A rising p99 while p50 stays stable indicates occasional outlier slowness — perhaps one database query that's slow under certain conditions. Rising p50 and p99 together indicates systematic slowdown across all executions.


Part 3: The Waterfall Trace View

When you click into an individual run, you see the waterfall view — a timeline of every step in the run, showing sequence, parallelism, and duration in a single visual.

The waterfall view clearly maps out the sequence and timing of function executions, including steps running in parallel. You can understand the flow of your entire workflow in seconds, simplifying navigation and revealing bottlenecks or inefficiencies at a glance.

Example waterfall for a payment processing function:

0ms       250ms     500ms     750ms     1000ms    1250ms    1500ms
|         |         |         |         |         |         |
[validate-order]                                                        ← 210ms
        [============= charge-card ==============]                      ← 980ms
                                                   [update-inventory]   ← 180ms
                                                   [==== send-email ========]  ← 420ms
                                                                             [log-completion]  ← 50ms

Sequential steps (validate → charge) appear one after another. Parallel steps (update-inventory and send-email running simultaneously) appear side by side at the same horizontal position.

What the waterfall reveals

Bottlenecks — a single step that's disproportionately wide compared to others. In the example above, charge-card takes 980ms while everything else is under 500ms. If you're trying to reduce total run time, that's where to focus.

Unexpected serialisation — if you expected two steps to run in parallel but they appear sequential in the waterfall, your Promise.all() may not be set up correctly (perhaps you accidentally awaited individual steps instead of awaiting Promise.all).
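
For comparison, here's a sketch of both shapes — the parallel pattern the waterfall should show, and the accidental sequential version. The helper functions and ./client module are hypothetical:

import { inngest } from "./client"; // hypothetical Inngest client module

// Hypothetical helpers
declare function updateInventory(orderId: string): Promise<void>;
declare function sendConfirmationEmail(orderId: string): Promise<void>;

export const fulfilOrder = inngest.createFunction(
  { id: "fulfil-order" },
  { event: "order/placed" },
  async ({ event, step }) => {
    // Parallel: both steps are started before anything is awaited,
    // so they appear side by side in the waterfall.
    await Promise.all([
      step.run("update-inventory", () => updateInventory(event.data.orderId)),
      step.run("send-email", () => sendConfirmationEmail(event.data.orderId)),
    ]);

    // Accidentally sequential: awaiting each step on its own makes the
    // waterfall show them one after another.
    // await step.run("update-inventory", () => updateInventory(event.data.orderId));
    // await step.run("send-email", () => sendConfirmationEmail(event.data.orderId));
  }
);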

Retry patterns — a step that retried shows multiple attempts stacked vertically within its slot, each attempt's duration visible. You can see exactly how long the backoff waited between attempts.

Step output inspection — click any step in the waterfall to expand it and see:

  • The step's return value (full JSON)
  • The error message if it failed
  • The stack trace from your code
  • Timing for each attempt

Clicking a failed step reveals:

Step: "charge-card"
Attempt 1: ❌ 30,012ms
Error: StripeError: Your card was declined.
Code: card_declined
Stack: at /api/inngest:45:23
at processPayment (/src/inngest/functions/payment.ts:62:18)

Attempt 2: ❌ 30,009ms
Error: StripeError: Your card was declined.
Code: card_declined

Attempt 3: ❌ (exhausted retries)
→ Function marked Failed

This is the information that makes onFailure handlers and alerting actionable — you know exactly what error occurred, in which step, on which attempt, with a full stack trace pointing to the exact line in your code.
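
An onFailure handler can forward exactly this context to your alerting channel once retries are exhausted. A minimal sketch — the notifyOps helper and ./client module are hypothetical:

import { inngest } from "./client"; // hypothetical Inngest client module

// Hypothetical helper: post to Slack, PagerDuty, etc.
declare function notifyOps(message: string): Promise<void>;

export const processPayment = inngest.createFunction(
  {
    id: "process-payment",
    // Runs once, after all retries are exhausted, with the final error attached
    onFailure: async ({ error, runId }) => {
      await notifyOps(
        `process-payment failed permanently (run ${runId}): ${error.message}`
      );
    },
  },
  { event: "order/placed" },
  async ({ step }) => {
    await step.run("charge-card", async () => ({ charged: true }));
  }
);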


Part 4: The Events Tab

The Events tab in Inngest Cloud is the event-level counterpart to the Runs tab. Every event that reaches Inngest — from inngest.send(), from the Test Event button, from a webhook — appears here with:

  • Event name and timestamp
  • Full JSON payload (expandable)
  • Which function runs it triggered (linked)
  • Whether it was deduplicated (if you sent the same event ID twice)

Using events for debugging

"Did my event actually arrive?" — Check the Events tab first. If the event isn't there, the problem is in your inngest.send() call or your environment configuration (INNGEST_EVENT_KEY missing, INNGEST_DEV=1 still set in production).

"Why didn't my function trigger?" — If the event is in the Events tab but shows "0 functions triggered," your function's trigger event name doesn't match the event name you sent. Check for typos: user/account-created vs user/account.created.

"Which runs came from this event?" — Click the event entry and follow the links to the triggered runs. This is useful for tracing a specific user's journey through your system — find their signup event, then trace through every function it triggered.

The Global Search bar (top of the Inngest Cloud dashboard) searches across events and runs simultaneously. Enter:

  • A run ID to jump directly to a specific run
  • An event ID to find the event and its triggered runs
  • A user ID or order ID (if stored in your event data) to find all events and runs related to that entity

This is invaluable for "a user reports something went wrong — find their specific run" debugging.


Part 5: Replaying Failed Runs

After fixing a bug, you need to re-run the affected functions with the original data. Replay is the tool for this.

In the Runs tab, filter by Status: Failed and the relevant time range. Find the runs that failed due to the bug you just fixed. Open any one of them and click Replay in the top right.

Replay creates a new run with the exact original event payload. Your fixed code executes fresh against the original data. The original failed run remains in the log for your reference.

Bulk replay

For bugs that caused many runs to fail, replaying one at a time is impractical. From the Runs tab with a Status: Failed filter applied, select multiple runs using the checkboxes and use the bulk Replay action to re-run all of them in one operation.

Bulk replay workflow:

1. Filter Runs by: Status = Failed, Time = "last 2 hours"
→ Shows 247 failed runs from the bug window

2. Select all 247 runs (checkbox at top of list)

3. Click "Replay" (bulk action button)
→ 247 new runs created with original event payloads
→ Your fixed code runs against each

4. Watch the new runs complete in the Runs tab
→ Filter by: Status = Completed, Time = "last 5 minutes"
→ Confirm the count matches

Part 6: Bulk Cancellation

Sometimes you don't want to retry — you want to stop. Common scenarios:

  • A bug caused a function to enter an infinite wait loop
  • A misconfigured fan-out created millions of stuck runs
  • A test gone wrong filled the queue with junk runs
  • A deployment issue caused thousands of runs to start with wrong data

The Bulk Cancellation feature in the Inngest UI lets you cancel function runs in bulk, based on criteria like time range or run status — even while a function is paused. It's the fastest way to clear out work that should never complete.

From the Runs tab, filter by the criteria that identify the runs you want to cancel (function name, time range, status), select them, and use the bulk Cancel action. Cancelled runs stop immediately — no further steps execute, no retries are scheduled, and onFailure is not triggered (cancellation is intentional, not a failure).

Bulk cancel workflow for cleaning up stuck runs:

1. Filter Runs by: Function = "send-weekly-digest", Status = Sleeping
→ 50,000 runs sleeping due to a cron bug

2. Select all → Bulk Cancel
→ All 50,000 runs cancelled immediately

3. Fix the bug → Re-trigger the cron via Invoke
→ Fresh run starts cleanly

Part 7: Insights — SQL Over Your Event and Run Data

Inngest Insights is the ability to query your events and runs using SQL directly in the dashboard.

Navigate to Inngest Cloud → your environment → Insights. You'll find a SQL editor where you can query across all your event and run data without any external tooling.

Particularly powerful for AI workflows — track token usage, model calls, and agent performance directly from your workflow data.

Example queries

Which events are most common in the last 7 days?

SELECT event_name, COUNT(*) as event_count
FROM events
WHERE created_at > NOW() - INTERVAL '7 days'
GROUP BY event_name
ORDER BY event_count DESC;

What's the average step duration for my payment function?

SELECT
  step_name,
  AVG(duration_ms) as avg_ms,
  PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY duration_ms) as p95_ms,
  COUNT(*) as executions
FROM function_steps
WHERE function_id = 'process-payment'
  AND started_at > NOW() - INTERVAL '24 hours'
GROUP BY step_name
ORDER BY avg_ms DESC;

How many users completed the onboarding flow this week?

SELECT
  DATE_TRUNC('day', created_at) as day,
  COUNT(*) as completions
FROM events
WHERE event_name = 'user/onboarding.completed'
  AND created_at > NOW() - INTERVAL '7 days'
GROUP BY day
ORDER BY day;

Which function runs took more than 60 seconds?

SELECT
  function_id,
  run_id,
  started_at,
  duration_ms / 1000.0 as duration_seconds
FROM function_runs
WHERE duration_ms > 60000
  AND started_at > NOW() - INTERVAL '24 hours'
ORDER BY duration_ms DESC
LIMIT 50;

Insights removes the need for a separate analytics pipeline for workflow data. Your events and runs are already in Inngest — query them directly rather than piping them to a data warehouse for basic operational questions.


Part 8: Datadog Integration

For teams who centralise monitoring in Datadog, Inngest provides a native integration. Send key metrics about your Inngest functions and their underlying steps directly to your Datadog account. This allows you to proactively identify errors or performance issues, right alongside the monitoring you use for the rest of your infrastructure.

Setup: In Datadog, navigate to Integrations, select the Inngest tile, and click Install Integration. Then in your Inngest Cloud dashboard, navigate to Settings → Integrations → Datadog and connect your account.

Once connected, Inngest exports metrics like:

  • inngest.function.runs — run count by status (completed, failed, cancelled)
  • inngest.function.duration — run and step duration percentiles
  • inngest.function.backlog — queue depth per function
  • inngest.steps.throughput — step execution rate

With these metrics in Datadog, you can:

Set up alerting:
→ Alert when inngest.function.failure_rate > 5% for any function
→ Alert when inngest.function.backlog > 1000 for payment processing
→ Alert when inngest.function.duration.p99 > 30s for critical functions
→ Page on-call when inngest.function.failure_rate = 100% (systematic failure)

This brings Inngest metrics into your existing alerting system — the same dashboards, the same PagerDuty integration, the same runbooks your team already uses.


Part 9: A Production Debugging Playbook

Let's put the whole observability stack together into a practical debugging flow. Here's how to move from "something is wrong" to "bug fixed and data recovered" as quickly as possible.

Step 1: Identify the scope

Open Inngest Cloud → Functions list. Scan the failure rate column.

  • One function failing → isolated bug in that function
  • Many functions failing → systemic issue (deployment problem, external service outage, configuration change)
  • All functions failing → signing key mismatch, serve endpoint down, or your app is down

Step 2: Find a representative failing run

Click the failing function → Runs tab → filter by Status: Failed. Click the most recent failing run.

In the waterfall, identify:

  • Which step failed? (the red step in the waterfall)
  • What error? (click the step to expand)
  • Is it retrying or exhausted? (multiple attempts vs. final failure)

Step 3: Classify the error

Network/timeout error (ETIMEDOUT, ECONNREFUSED)
→ External service is down or overloaded
→ Check the external service's status page
→ Retries will auto-recover when the service comes back

Validation/logic error (TypeError, undefined, null reference)
→ Bug in your code or unexpected event payload shape
→ Fix the code, deploy, then replay the failed runs

Rate limit (429, RetryAfterError in the stack)
→ External API rate limiting
→ Add RetryAfterError if not already present (see the sketch after this list)
→ Runs will recover automatically on their retry schedule

Authentication error (401, 403)
→ API key expired or rotated without updating your environment
→ Update the credential and redeploy
→ Replay the failed runs

Data not found (NonRetriableError in the stack)
→ Expected data doesn't exist
→ Check if this is a legitimate data issue or a sequencing bug
→ Investigate the triggering event payload
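
The last two classes map onto Inngest's typed errors. A minimal sketch of both inside one step — the chargeCard helper, its return shape, and the ./client module are hypothetical:

import { NonRetriableError, RetryAfterError } from "inngest";
import { inngest } from "./client"; // hypothetical Inngest client module

// Hypothetical payment helper
declare function chargeCard(
  orderId: string
): Promise<{ status: number; retryAfterMs?: number; charged?: boolean }>;

export const processPayment = inngest.createFunction(
  { id: "process-payment" },
  { event: "order/placed" },
  async ({ event, step }) => {
    await step.run("charge-card", async () => {
      const order = event.data.order;
      if (!order) {
        // Expected data is missing and never will exist: skip the remaining retries
        throw new NonRetriableError("No order attached to event");
      }

      const result = await chargeCard(order.id);
      if (result.status === 429) {
        // Rate limited: tell Inngest exactly how long to back off
        throw new RetryAfterError(
          "Payment provider rate limit",
          result.retryAfterMs ?? 60_000
        );
      }

      return result;
    });
  }
);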

Step 4: Fix and recover

  1. Fix the root cause — code change, environment variable update, external service restored
  2. Deploy — get the fix live
  3. Verify the fix — trigger a fresh test run via Invoke, confirm it completes
  4. Replay failed runs — bulk replay all runs that failed during the incident window
  5. Monitor — watch the failure rate drop in the Functions list chart

Step 5: Prevent recurrence

  • For systematic failures: set up a Datadog alert on failure rate threshold so you're alerted before users notice
  • For unexpected payloads: add input validation with NonRetriableError to fail fast on bad data rather than retrying futilely
  • For external dependency failures: verify your RetryAfterError handling is in place and onFailure sends a meaningful alert

Common Misconceptions

❌ Misconception: A high failure rate means all those users are affected

Reality: Thanks to retries, many "failed" runs in the failure rate chart are temporary — they failed on one attempt but will succeed on a later retry. The failure rate chart counts runs that permanently failed (exhausted all retries), not all runs that had any failed attempt.

Check the Function Status pie chart: if you see Failed runs, those are permanent failures. Runs that are currently retrying show as Running — they haven't permanently failed yet.

❌ Misconception: Replay re-uses the memoised state from the original failed run

Reality: Replay creates a brand new run with an empty step state. Every step executes fresh. This is intentional — you want your fixed code to run all the steps, not skip to the point of failure. See Article 11 for the full replay vs. memoisation distinction.

❌ Misconception: Bulk cancellation triggers onFailure handlers

Reality: Cancellation is intentional — it is not treated as a failure. onFailure handlers do not run for cancelled runs. If you need to run cleanup logic when cancelling, do it manually after the cancellation (for example, update database records to reflect cancelled status).

❌ Misconception: The waterfall shows your function's source code execution order

Reality: The waterfall shows step execution order — not every line of your code. Code outside step.run() runs on every handler execution and doesn't appear in the waterfall. Only step.run(), step.sleep(), step.waitForEvent(), and step.sendEvent() calls appear as distinct entries in the trace.
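
A sketch of the distinction, with hypothetical helpers and a hypothetical ./client module — only the three step calls become entries in the waterfall:

import { inngest } from "./client"; // hypothetical Inngest client module

// Hypothetical helpers
declare function validateOrder(orderId: string): Promise<void>;
declare function chargeCard(orderId: string): Promise<void>;

export const processOrder = inngest.createFunction(
  { id: "process-order" },
  { event: "order/placed" },
  async ({ event, step }) => {
    // Runs on EVERY handler execution (at least once per step) and
    // never appears in the waterfall.
    console.log("handler invoked for", event.data.orderId);

    // Each of these appears as exactly one entry in the trace.
    await step.run("validate-order", () => validateOrder(event.data.orderId));
    await step.sleep("cool-off", "10m");
    await step.run("charge-card", () => chargeCard(event.data.orderId));
  }
);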


Troubleshooting: Five Common Production Scenarios

Scenario 1: "My cron function didn't run last night"

Check:

  1. Functions list → find the cron function → look at Volume for the expected time window
  2. Runs tab → filter by this function → any runs around the scheduled time?
  3. Apps tab → is the app synced? (a sync failure at deploy time can un-register the function)
  4. Verify the cron expression with crontab.guru

Most likely cause: The function was un-registered from Inngest Cloud (deploy without re-sync), the cron expression has a timezone mismatch, or the function hit a DST skip (see Article 9).
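
If the timezone is the suspect, pin it explicitly in the cron trigger. A minimal sketch, assuming the TZ= prefix supported by Inngest cron triggers, an illustrative schedule, and a hypothetical ./client module:

import { inngest } from "./client"; // hypothetical Inngest client module

export const sendWeeklyDigest = inngest.createFunction(
  { id: "send-weekly-digest" },
  // Pin the timezone so "9 AM Friday" means the same thing year-round;
  // without a TZ= prefix the schedule is typically evaluated in UTC.
  { cron: "TZ=Europe/London 0 9 * * FRI" },
  async ({ step }) => {
    await step.run("build-digest", async () => ({ built: true }));
  }
);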

Scenario 2: "Functions are running but backlog is growing"

Check:

  1. Backlog chart → is it growing steadily or spiked and draining?
  2. Steps throughput chart → is it at exactly your concurrency limit?
  3. Step duration chart → are individual steps taking longer than usual?

Most likely cause: Concurrency limit too low for current traffic, or an external dependency has slowed down (database under load, API latency increased). Raise concurrency limit if the throughput is artificially capped, or investigate the slow step if durations have increased.

Scenario 3: "A function is stuck in 'Waiting' for days"

Check:

  1. Click the run → waterfall → which step.waitForEvent() is it waiting on?
  2. Check the event name and match expression — was the expected event ever sent?
  3. Events tab → search for the expected event name → does it exist? Did it arrive before or after the waitForEvent started?

Most likely cause: The waited-for event was sent before step.waitForEvent() registered (the race condition from Article 8), or the event name has a typo.

Resolution: If the event genuinely won't arrive (data was incorrect), cancel the stuck run and re-trigger the workflow with correct data.
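
For reference, a sketch of the wait being inspected — the match field is the part that most often silently mismatches. Event names, fields, the escalation helper, and the ./client module are hypothetical:

import { inngest } from "./client"; // hypothetical Inngest client module

// Hypothetical helper
declare function escalateContract(contractId: string): Promise<void>;

export const awaitContractApproval = inngest.createFunction(
  { id: "await-contract-approval" },
  { event: "contract/sent" },
  async ({ event, step }) => {
    const approval = await step.waitForEvent("wait-for-approval", {
      event: "contract/approved",
      timeout: "7d",
      // Both events must carry the same data.contractId for the match to
      // resolve; a typo here leaves the run in Waiting until the timeout.
      match: "data.contractId",
    });

    if (approval === null) {
      // Timed out: the approval never arrived within 7 days
      await step.run("escalate", () => escalateContract(event.data.contractId));
    }
  }
);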

Scenario 4: "After a deploy, my functions stopped running"

Check:

  1. Inngest Cloud → Apps tab → is your app showing a sync error?
  2. Run curl https://your-app.com/api/inngest — does it respond correctly?
  3. Check hasSigningKey: true and mode: "cloud" in the response

Most likely cause: The new deploy broke the serve endpoint (import error, build error), or the deploy changed the serve path without updating the sync URL.

Scenario 5: "I see duplicate runs for the same event"

Check:

  1. Events tab → find the event → how many runs does it show as "triggered"?
  2. Check if the event appears twice in the log (two separate events with different IDs)
  3. Check your producer code — is inngest.send() called in multiple places for the same action?

Most likely cause: Event-level deduplication wasn't set up (no id on inngest.send()), or a webhook is firing twice. See Article 10 for the full deduplication setup.
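
The dedup fix referenced above is to pass a deterministic id when sending, so Inngest drops the second send of the same id. A sketch with an illustrative key and a hypothetical ./client module:

import { inngest } from "./client"; // hypothetical Inngest client module

export async function publishOrderPlaced(order: { id: string; total: number }) {
  // Two sends with the same id within the dedup window produce one event
  // (and therefore one set of runs), even if a webhook fires twice.
  await inngest.send({
    id: `order-placed-${order.id}`, // deterministic idempotency key
    name: "order/placed",
    data: { orderId: order.id, amount: order.total },
  });
}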


Check Your Understanding

Quick Quiz

1. The Function Status pie chart shows 12% of runs as "Failed." You fix the underlying bug and deploy. What do you need to do to recover the affected data?

Show Answer

Bulk replay the failed runs. In the Runs tab, filter by Status: Failed and the time window of the incident. Select all affected runs and use the bulk Replay action. Each replay creates a new run with the original event payload, executing your fixed code. The failed runs remain in the log for reference — replays are new separate runs.

Monitor the new runs as they complete to confirm the fix worked. Watch the failure rate in the Functions list drop to near-zero.

2. The backlog chart for your weekly digest function is holding steady at 8,000 runs. Your concurrency limit is 50. The digest sends to 50,000 users. Is this a problem?

Show Answer

Not necessarily — it depends on context. A backlog of 8,000 with concurrency 50 means there are 8,000 runs queued waiting for one of the 50 available slots. This will drain over time as runs complete and new slots open.

The key question is: does the backlog drain before the next cron run? If your weekly digest runs Friday at 9 AM and the backlog is fully drained by Saturday — no problem. If the backlog is still growing or not draining, that's a concern.

Use the Total Runs Throughput chart to estimate drain rate: if you're processing 180 runs/minute and have 8,000 in backlog, the backlog clears in ~44 minutes. For a weekly digest, that's fine.

If you want faster processing, raise the concurrency limit (verify the email provider can handle higher throughput first).

3. You see a run stuck in Waiting status for 6 days. The function has a step.waitForEvent() with a timeout: "7d". Should you intervene?

Show Answer

Probably not — yet. The function is doing exactly what it was designed to do: waiting up to 7 days for a specific event. After 7 days, step.waitForEvent() returns null and the function continues with the timeout path.

However, investigate whether this is expected behaviour or an anomaly. Questions to ask:

  • Is the waited-for event something that should realistically arrive within 7 days for this specific user/entity?
  • Has the event arrived but with a mismatched field (so the function isn't recognising it)?
  • Is this one stuck run or are many runs stuck?

If this represents stuck data that will never resolve (the event will never arrive due to a data or logic error), cancel the run and re-trigger with corrected data. If it's expected behaviour (waiting for a document signature, an email confirmation, etc.), let it run to timeout.


Summary: Key Takeaways

  • Functions list is your health dashboard — failure rate and volume columns tell you which functions need attention at a glance.
  • Seven metric charts per function: Status pie chart, Failed Functions ranking, Runs throughput, Steps throughput, Backlog, and step duration percentiles. Each answers a specific question about function health.
  • Waterfall trace view shows sequential and parallel step execution on a timeline — click any step for its return value, error, and stack trace. Inspired by OpenTelemetry tracing.
  • Events tab answers "did my event arrive?" and "why didn't my function trigger?" — search by event name, find linked runs, trace user journeys.
  • Replay creates a new run with the original event payload and your fixed code. Use bulk replay after fixing a bug that caused many failures.
  • Bulk cancellation stops many runs simultaneously — does not trigger onFailure handlers.
  • Insights provides SQL access to your events and run data directly in the dashboard — no external pipeline needed for operational queries.
  • Datadog integration exports function metrics to your existing monitoring stack — set failure rate alerts alongside your other infrastructure alerts.
  • The debugging playbook: identify scope → find representative failing run → classify the error → fix and deploy → replay failed runs → monitor.

What's Next?

Congratulations — you've completed the full Inngest and Event-Driven Workflows module.

You started by understanding why event-driven architecture exists and what problems it solves. You built the vocabulary: events, queues, workers, producers, consumers. You learned what Inngest is, installed it, wrote your first function, mastered steps, handled errors, built fan-out patterns, coordinated with step.waitForEvent(), scheduled with cron, made everything idempotent, and now you can observe and debug it all in production.

The full arc:

Article 1  → Why event-driven architecture exists
Article 2  → Events, queues, workers — the vocabulary
Article 3  → What Inngest is and how it fits in
Article 4  → Your first function
Article 5  → Steps and durable execution
Article 6  → Retries and error handling
Article 7  → Fan-out patterns
Article 8  → step.waitForEvent() and coordination
Article 9  → Scheduled functions with cron
Article 10 → Idempotency
Article 11 → Local development with the Dev Server
Article 12 → Deploying to production
Article 13 → Observability and debugging ← you are here

Where to go from here: the Inngest documentation covers flow control (throttling, rate limiting, debouncing, batching), middleware, the Realtime feature for streaming updates to clients, and AI workflow patterns with AgentKit. The foundation you've built here makes all of it accessible.


Version Information

Tested with:

  • inngest: ^4.1.x
  • Node.js: v18.x, v20.x, v22.x
  • TypeScript: 5.x

Features noted as new or beta:

  • Insights (SQL queries): Public beta as of September 2025 — available on all plans
  • Waterfall trace view: Generally available — released September 2024
  • Bulk cancellation: Generally available

Further reading: