Monitoring Flows

Every time a Flow is executed—whether triggered manually or on a schedule—a Run is created. A Run captures the complete execution lifecycle of your Flow, including all nodes, tasks, and their statuses. This lets you trace the execution order, performance, and any issues that may arise.

Viewing Runs

To view past executions:

  1. Navigate to your Flow and open the Runs tab.

  2. You will see a list of recent runs, each with metadata including the following (a sketch of one such record appears after this list):

    • Status (e.g. SUCCEEDED, FAILED, TIMED_OUT)

    • Trigger type (Manual, Scheduled)

    • Start and end timestamps

    • Total number of tasks
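
The fields above are easiest to reason about as a single record per run. The sketch below is a hypothetical Python representation of that record; the class and field names are illustrative only and not part of any official SDK or API.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class FlowRun:
    """Hypothetical shape of one row in the Runs tab (illustrative only)."""
    run_id: str
    status: str                   # e.g. "SUCCEEDED", "FAILED", "TIMED_OUT"
    trigger_type: str             # "Manual" or "Scheduled"
    started_at: datetime
    ended_at: Optional[datetime]  # None while the run is still in progress
    task_count: int

    @property
    def duration_seconds(self) -> Optional[float]:
        """Wall-clock duration, available once the run has finished."""
        if self.ended_at is None:
            return None
        return (self.ended_at - self.started_at).total_seconds()
```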

Inspecting Tasks in a Run

Click on any Run entry to drill down into its Tasks.

Each Task represents a single step in the orchestration and is derived from one node (such as a transformation model, ingestion job, or custom script). Task rows include the following fields, sketched in code after this list:

  • Task name

  • Status

  • Execution timestamps

  • Job count (including retries)
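
Extending the illustrative model above, a task row mainly adds a job count, because a single task may execute more than one job when retries occur. The names below are again hypothetical and only meant to make the run, task, and job hierarchy concrete.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class TaskExecution:
    """Hypothetical shape of one task row inside a run (illustrative only)."""
    name: str                     # the node this task was derived from
    status: str
    started_at: datetime
    ended_at: Optional[datetime]
    job_count: int                # number of jobs, retries included

    @property
    def was_retried(self) -> bool:
        # More than one job for a single task implies at least one retry.
        return self.job_count > 1
```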

Task Details and Logs

To get more insight into what happened inside a task:

  1. Click on the three-dot context menu next to a task.

  2. Select Task Details.

This opens the Task Execution panel, showing:

  • The full task name

  • Trigger time and duration

  • Task status

  • A list of executed jobs

  • Logs for each job

Logs may include output from your SQL model, script stdout, or error messages if the job failed.
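
The log format itself is not specified here, so the helper below only illustrates a generic first pass at troubleshooting: once job logs have been copied out of the Task Execution panel, scan them for common error markers. The marker keywords are assumptions to adapt to your own output, not a documented format.

```python
from typing import Dict, List

# Keywords that commonly appear in failed-job output; adjust to your logs.
ERROR_MARKERS = ("ERROR", "Traceback", "syntax error", "permission denied")


def find_suspect_lines(job_logs: Dict[str, str]) -> Dict[str, List[str]]:
    """Return, per job, the log lines that contain a known error marker.

    `job_logs` maps a job name to the raw log text copied from the
    Task Execution panel.
    """
    suspects: Dict[str, List[str]] = {}
    for job_name, text in job_logs.items():
        hits = [
            line for line in text.splitlines()
            if any(marker.lower() in line.lower() for marker in ERROR_MARKERS)
        ]
        if hits:
            suspects[job_name] = hits
    return suspects
```

Anything this surfaces is usually a faster starting point than reading full logs top to bottom.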

Understanding Run Statuses

Each Run and Task reports one of the following statuses:

  • SCHEDULED: The run is planned for a future time.
  • QUEUED: The run is waiting for available resources.
  • RUNNING: The run is actively executing its nodes and tasks.
  • SUCCEEDING: The run is finalizing with successful results.
  • SUCCEEDED: All tasks completed successfully.
  • FAILING: The run encountered errors and is entering its failure logic.
  • FAILED: The run was terminated due to errors.
  • ABORTING: The run is in the process of being intentionally terminated.
  • ABORTED: The run was manually stopped before completion.
  • TIMED_OUT: The run exceeded its maximum execution time.
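
When polling runs or wiring up an alert, it helps to distinguish in-progress statuses from terminal ones. The grouping below is a sketch inferred from the descriptions above; treat the split (and the enum itself) as an assumption rather than a documented API contract.

```python
from enum import Enum


class RunStatus(str, Enum):
    SCHEDULED = "SCHEDULED"
    QUEUED = "QUEUED"
    RUNNING = "RUNNING"
    SUCCEEDING = "SUCCEEDING"
    SUCCEEDED = "SUCCEEDED"
    FAILING = "FAILING"
    FAILED = "FAILED"
    ABORTING = "ABORTING"
    ABORTED = "ABORTED"
    TIMED_OUT = "TIMED_OUT"


# Statuses that describe a finished run; inferred from the descriptions
# above, not from a documented contract.
TERMINAL_STATUSES = {
    RunStatus.SUCCEEDED,
    RunStatus.FAILED,
    RunStatus.ABORTED,
    RunStatus.TIMED_OUT,
}


def is_finished(status: RunStatus) -> bool:
    """True once a run or task has reached a final state."""
    return status in TERMINAL_STATUSES
```

A polling loop, for example, could stop checking a run once is_finished returns True.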

Best Practices

  • Monitor Run duration over time to detect pipeline slowdowns (a simple baseline comparison is sketched after this list)

  • Use logs to troubleshoot data quality issues or script errors

  • Use tags to organize and filter execution reports
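
Because each run records start and end timestamps, spotting slowdowns can be as simple as comparing recent durations against a historical baseline. The sketch below assumes you have exported those timestamps (for example, from the Runs tab) into a list of finished runs ordered oldest to newest; the window size and threshold are arbitrary examples.

```python
from datetime import datetime
from typing import List, Tuple


def flag_slowdown(
    runs: List[Tuple[datetime, datetime]],  # (started_at, ended_at), oldest first
    recent: int = 5,
    factor: float = 1.5,
) -> bool:
    """Return True if the average duration of the last `recent` runs is more
    than `factor` times the average duration of all earlier runs."""
    durations = [(end - start).total_seconds() for start, end in runs]
    if len(durations) <= recent:
        return False  # not enough history to compare against
    baseline = sum(durations[:-recent]) / len(durations[:-recent])
    latest = sum(durations[-recent:]) / recent
    return latest > factor * baseline
```

Running this over the last few weeks of history and alerting when it returns True is one lightweight way to act on the first best practice above.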
