Calafai Docs

Engagements

What Is an Engagement?

An engagement is an AI-powered consulting project. Each engagement defines a brief (the problem statement or objective), a set of tasks for AI agents to execute, and a budget cap in USD. When you run an engagement, the Groundtruth engine dispatches a crew of specialized AI agents that collaborate to produce deliverables -- strategy documents, research reports, implementation plans, and more.

Every engagement belongs to a single tenant and is scoped by your subscription plan.

Creating an Engagement

Navigate to Dashboard > Engagements and click New Engagement. The creation flow is a four-step wizard with a progress stepper at the top.

Step 1: Brief & Resources

The first step has two phases: a form phase and a resources phase.

Form phase -- provide the core engagement details:

  • Name -- A descriptive title (e.g., "European Market Expansion Strategy"). The platform generates a URL-safe slug automatically.
  • Client -- The client or project this engagement is for.
  • Brief -- A detailed text description of what you need (minimum 50 characters). Write it as if you were briefing a senior consultant. A placeholder example is shown in the textarea.
  • Budget -- Maximum spend in USD. Defaults to $5.00. The form shows contextual guidance based on your budget level:
    • Below $3: "Focused sprint"
    • $3--7: "Standard engagement"
    • $7--15: "Comprehensive engagement"
    • Above $15: "Full-depth engagement"
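The budget tiers above can be sketched as a simple mapping. This is an illustrative helper, not the platform's actual code; in particular, how the exact boundary values ($3, $7, $15) are bucketed is an assumption:

```python
def budget_guidance(budget_usd: float) -> str:
    """Map a USD budget to the contextual guidance label shown in the form.

    Tier boundaries come from the documentation above; treating the
    boundary values themselves as the lower tier is an assumption.
    """
    if budget_usd < 3:
        return "Focused sprint"
    if budget_usd <= 7:
        return "Standard engagement"
    if budget_usd <= 15:
        return "Comprehensive engagement"
    return "Full-depth engagement"
```

For the default budget, `budget_guidance(5.00)` falls in the "Standard engagement" tier.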

After clicking Continue, the engagement is created and you enter the resources phase.

Resources phase -- optionally attach context materials:

  • Style Guide (optional) -- Upload a brand style guide, PowerPoint template, or example document that shows how outputs should look and feel. Supports drag-and-drop file upload and URL fetching. Files are tagged with the styleguide category.
  • Design Adherence -- A slider from 0 (Explore -- full creative freedom) to 100 (Replicate -- pixel-perfect replication). Controls how closely the AI crew follows uploaded style guides.
  • Reference Materials (optional) -- Upload market research, competitive analyses, data files, or any relevant documents. Supports drag-and-drop and URL fetching. Files are tagged with the reference category.

See File Uploads for details on supported file types and storage limits.

Step 2: Research

An optional AI-assisted brief refinement interview. The Brief Researcher asks a series of targeted questions to deepen and sharpen the engagement brief.

  • An introduction screen shows the estimated number of questions and time.
  • Questions are asked one at a time. For each question, you can Submit an answer, Skip it, or End Interview early.
  • Progress is tracked (e.g., "Question 3 of 8").
  • After all questions are answered (or the interview is ended), the system synthesizes the responses into a refined brief that replaces the original.

The research transcript is saved and can optionally be included as an appendix in PDF exports. You can skip this step entirely and proceed to planning.

Step 3: Plan

The task planner generates a service plan (task pipeline) for the engagement.

  • Template selector -- A dropdown at the top lets you choose a template to base the plan on, or select "Generate from brief" (the default) to have the AI design the plan from scratch. See Templates for available templates.
  • Generate -- Click to generate the plan. A spinner shows "Designing your service plan..." while the AI creates tasks with dependencies, agent assignments, and expected outputs.
  • Review -- Tasks are displayed grouped by phase (dependency level), showing task names, assigned agents, and dependency links. A summary shows the total task count and phase count (e.g., "8 tasks across 3 phases").
  • Feedback & Regenerate -- A text area lets you provide feedback (e.g., "Add a competitive analysis task" or "Remove social media tasks"). Click Regenerate to update the plan based on your feedback.
  • Approve -- Click Mark as reviewed and then Approve plan to finalize the plan. You must approve the plan before proceeding to launch.
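The phase grouping shown in the review step follows from the dependency graph: a task's phase (dependency level) is one more than the deepest of its prerequisites. A minimal sketch of that leveling, with a hypothetical plan for illustration:

```python
def phase_levels(deps: dict) -> dict:
    """Assign each task its phase: tasks with no dependencies are level 1,
    and every other task sits one level below its deepest prerequisite.
    Assumes the plan is a valid DAG (no dependency cycles)."""
    levels = {}

    def level(task):
        if task not in levels:
            prereqs = deps.get(task, [])
            levels[task] = 1 + max((level(p) for p in prereqs), default=0)
        return levels[task]

    for task in deps:
        level(task)
    return levels

# Hypothetical plan: task -> list of tasks it depends on.
plan = {
    "market-research": [],
    "competitor-scan": [],
    "strategy-doc": ["market-research", "competitor-scan"],
    "implementation-plan": ["strategy-doc"],
}
# phase_levels(plan) groups this as 4 tasks across 3 phases.
```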

Step 4: Launch

A read-only summary of the engagement before launching:

  • Engagement info -- Name, client, brief preview (first 300 characters), and budget.
  • Team -- Lists the unique AI agents that will participate, with a count (e.g., "6 specialists").
  • Service Plan -- Task count and a grid showing the first six tasks (with a "+N more" indicator if there are additional tasks).

Click Launch the team to start the run. The wizard redirects to the engagement detail page where Mission Control takes over.

An active subscription is required to create and run engagements. Your plan determines the monthly engagement limit.

Running an Engagement

Once an engagement is launched (or re-run from the detail page), it enters running status. Launching performs the following steps:

  1. Creates a new Run record in the database.
  2. Sets the engagement status to running.
  3. Sends the engagement configuration to the engine for execution.
  4. Triggers an email notification to the engagement creator.
  5. Fires an engagement.started webhook event.
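The five launch steps can be sketched as one sequence. Every collaborator name and signature below is hypothetical, chosen only to mirror the list above; the platform's real internals will differ:

```python
def launch_engagement(engagement, db, engine, mailer, webhooks):
    """Illustrative sketch of the launch sequence; all helper names
    (create_run, set_status, execute, notify, fire) are assumptions."""
    run = db.create_run(engagement_id=engagement.id)        # 1. new Run record
    db.set_status(engagement.id, "running")                 # 2. status -> running
    engine.execute(engagement.config, run_id=run.id)        # 3. send config to engine
    mailer.notify(engagement.creator_email, run_id=run.id)  # 4. email the creator
    webhooks.fire("engagement.started",                     # 5. webhook event
                  {"engagement_id": engagement.id, "run_id": run.id})
    return run
```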

Mission Control

When an engagement is running (or paused), the detail page switches to a full-screen Mission Control panel that replaces the standard Service Plan and task list. Mission Control provides real-time visibility into the AI crew's progress through several sections:

Status Bar

A sticky bar at the top showing:

  • Status indicator -- Pulsing dot with "Mission Active" or "Paused" label.
  • Phase dots -- Small colored dots representing each execution level. The current phase is highlighted and labeled (e.g., "Level 2").
  • Elapsed time -- Live counter in HH:MM:SS format.
  • Cost arc -- Animated cost display showing the current estimated cost against budget. Color-coded: green (below 60%), yellow (60--90%), red (above 90%).
  • Task progress -- Completed/total count (e.g., "3/7 tasks").
  • Agent count -- Number of active agents.
  • Action buttons -- Pause and Stop (when running) or Resume (when paused).
  • Connection indicator -- Shows whether the page is receiving live SSE events or falling back to polling.
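Two of the status-bar displays reduce to small pure functions: the HH:MM:SS elapsed counter and the color-coded cost arc. A sketch of both (the handling of the exact 60% and 90% boundaries is an assumption):

```python
def format_elapsed(seconds: int) -> str:
    """Render elapsed seconds in the status bar's HH:MM:SS format."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}"

def cost_color(cost: float, budget: float) -> str:
    """Color-code estimated spend against budget: green below 60%,
    yellow 60-90%, red above 90%."""
    ratio = cost / budget if budget else 1.0
    if ratio < 0.60:
        return "green"
    if ratio <= 0.90:
        return "yellow"
    return "red"
```

For example, a run at $4.80 of a $5.00 budget sits at 96% and renders red.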

Stall Detection

If an LLM provider becomes unresponsive for an extended period, an amber warning banner appears below the status bar showing the provider name, the task that is stalled, and the idle duration. A link to the provider's status page is included when available. The engine automatically engages fallback models. The banner can be dismissed.

Task Pipeline

Tasks are displayed in one of two views, toggled via buttons:

  • DAG View (default) -- A directed acyclic graph showing tasks grouped by execution level. Each task card displays:

    • Status icon (shape + color, WCAG-accessible):
      • Circle (gray) -- Pending
      • Play triangle (blue, pulsing) -- Running, with a live mm:ss timer
      • Checkmark (green) -- Completed, with observer quality score (e.g., "8/10")
      • X (red) -- Failed
      • Refresh arrow (yellow) -- Re-running, with execution count badge (e.g., "x2")
    • Task name
    • Dependency edges (gray for pending, green for completed, blue dashed for active)
    • A minimap for navigation on larger graphs
  • Timeline View -- A Gantt-style horizontal timeline showing task bars scaled to elapsed time. Each bar is color-coded by status (green/blue/red/yellow) and shows the duration. Tasks are grouped by level with a time axis. A summary footer shows completion count and total elapsed time.

Progress bars show the completion percentage for each level.

Crew Floor

Agent cards in a two-column grid. Each card shows:

  • Department-colored left border and badge (amber for C-Suite, purple for Communications, cyan for Research, blue for Product, emerald for Engineering, orange for Operations, slate for Support, rose for Regulatory).
  • Agent role and department label.
  • Current activity and tool being used (if active).
  • Status: Active (green glow, pulsing), Delegating (yellow glow), or Idle (faded, after 15+ seconds of inactivity).
  • SVG overlay showing animated delegation arcs between agents when delegation occurs.

Cards are sorted with active agents first.

Activity Stream

A filterable feed of agent actions. Filter chips at the top: All, Delegations, Tool Use, Completions, Warnings. Each entry shows:

  • Timestamp (HH:MM:SS)
  • Department-colored status dot
  • Agent role and action description
  • Delegation target (if delegating)
  • Tool name (if using a tool)
  • Special styling for stall warnings (amber) and budget-skip warnings (yellow)

The feed auto-scrolls to show new entries. If you scroll up to review history, a "New activity" button appears to jump back to the bottom.

When the run completes or fails, Mission Control disappears and the standard layout returns.

Skipped Operations

If the engine skips tasks due to budget constraints, a Skipped Operations card appears on the engagement detail page after the run. It lists each skipped task with the operation name, skip reason, and the percentage of budget used at the time of the skip.

Pausing

Click Pause on a running engagement to save the current execution state. The engine persists a pause snapshot so that work already completed is not lost. The engagement status changes to paused. Click Resume to continue from where it left off. Resumed runs inject full upstream deliverable content into dependent tasks.

Stopping

Click Stop to terminate a running engagement immediately. The run is marked as failed and the engagement status updates accordingly. This is a hard stop -- any in-progress tasks are abandoned.

Status Lifecycle

Every engagement moves through these statuses:

pending --> running --> completed
                   \-> failed
                   \-> paused --> running (resume)

  • pending -- Created but not yet started. This is the initial state.
  • running -- The engine is actively executing tasks.
  • completed -- All tasks finished successfully. Deliverables are available.
  • failed -- The run was stopped manually or encountered an error.
  • paused -- Execution was paused. Can be resumed.
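Combining this lifecycle with the re-run rules described later (completed or failed engagements can be re-run, and a new run cannot start while one is active), the allowed transitions can be sketched as a table. This is an inferred model, not the platform's actual state machine:

```python
# Inferred from the documented lifecycle: paused resumes to running,
# and completed/failed engagements may be re-run.
ALLOWED_TRANSITIONS = {
    "pending":   {"running"},
    "running":   {"completed", "failed", "paused"},
    "paused":    {"running"},                 # resume
    "completed": {"running"},                 # re-run
    "failed":    {"running"},                 # re-run
}

def can_transition(current: str, target: str) -> bool:
    """Check whether a status change is allowed under the model above."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
```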

Editing an Engagement

On the engagement detail page, the Brief section shows an Edit button (disabled while a run is active). Clicking it opens an inline edit form where you can modify the engagement name, client, brief, and budget. Changes are saved via a PATCH request. A Save as reusable framework button is also available to capture the current configuration as a reusable framework.

Re-Running an Engagement

Engagements with status completed or failed can be re-run by clicking the Re-run button. Re-running creates a new Run record and increments the version number of any deliverables produced. Previous versions remain accessible in the version history.

You cannot start a new run while an engagement is already running.

Service Plan

The Service Plan panel on the engagement detail page shows the task pipeline -- all tasks grouped into execution phases based on their dependency graph. Each task card displays the assigned agent, expected output, and dependency links. This panel is visible when the engagement is not actively running.

Deliverable Status Indicators

After an engagement run completes (or fails), each task card in the Service Plan shows a status indicator reflecting whether that task produced a deliverable:

  • Delivered (green pill) -- A deliverable file exists for this task. The word count is shown in parentheses (e.g., "Delivered (2,450 words)").
  • Content included in: ... (amber pill) -- This task did not produce a separate deliverable file, but its output was absorbed into a downstream task's deliverable. The pill names the downstream task(s). This is determined by walking the task dependency graph to find the nearest downstream task that has a deliverable.
  • No indicator -- Shown for pending engagements (before any run), or when neither of the above applies.
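The "Content included in" resolution described above is a breadth-first walk downstream through the dependency graph, stopping at the first level that contains a task with a deliverable. A sketch, with all names and the data layout being illustrative assumptions:

```python
def absorbed_into(task, dependents, has_deliverable):
    """Return the nearest downstream task(s) that produced a deliverable.

    dependents maps each task to the tasks that depend on it (i.e. the
    downstream edges); has_deliverable is the set of tasks with files.
    """
    seen = {task}
    level = [t for t in dependents.get(task, []) if t not in seen]
    while level:
        seen.update(level)
        found = sorted(t for t in level if t in has_deliverable)
        if found:
            return found  # nearest level wins; deeper tasks are ignored
        level = [d for t in level
                 for d in dependents.get(t, []) if d not in seen]
    return []
```

For a chain research → analysis → report where only "report" has a file, both upstream tasks resolve to "Content included in: report".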

Engagement Detail Page

The engagement detail page contains the following sections (in order):

  1. Header -- Engagement name with status badge, client name, and delete button.
  2. Brief -- The engagement brief with edit and save-as-template controls. Shows budget, creation date, slug, last run cost, and total engagement cost.
  3. Mission Control or Service Plan -- Mission Control during active/paused runs; Service Plan when idle.
  4. Skipped Operations -- Shown if the last run skipped tasks due to budget.
  5. Quality Scores -- Per-task observer scores (expandable cards with criteria breakdown, insights, issues, and rerun recommendations).
  6. Export -- PDF export button for the full engagement report.
  7. Deliverables -- Client-facing deliverables from the most recent run.
  8. Attachments -- File upload and management (hidden during active runs).
  9. Run Analytics -- Collapsible section with agent participation data and internal engine files.
  10. Portal Token Manager -- Client portal sharing controls.
  11. Run History -- Timeline of all runs with status indicators, costs, and timestamps.

Deleting an Engagement

Engagements can be deleted from the engagement detail page. However, you cannot delete an engagement while it is in running status. Stop or wait for it to complete first.

Deleting an engagement removes all associated runs, deliverables, and attachments.

Engagement List View

The main Engagements page shows all engagements for your tenant, ordered by creation date (newest first). Each row displays:

  • Engagement name
  • Client name
  • Current status (with color-coded indicator)
  • Creation date
  • Slug

The list is cached for performance (via Redis) and invalidated whenever you create, update, or delete an engagement.
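The caching behavior is a standard read-through pattern with explicit invalidation on writes. A minimal sketch, where the key scheme and TTL are assumptions rather than the platform's actual configuration:

```python
import json

def list_engagements(tenant_id, redis, db, ttl=300):
    """Read-through cache for the engagement list (hypothetical key
    scheme and TTL): serve from Redis when present, else query the
    database and cache the result."""
    key = f"engagements:{tenant_id}"
    cached = redis.get(key)
    if cached is not None:
        return json.loads(cached)
    rows = db.fetch_engagements(tenant_id)  # ordered newest first
    redis.set(key, json.dumps(rows), ex=ttl)
    return rows

def invalidate_engagements(tenant_id, redis):
    """Called after create/update/delete so the next read repopulates."""
    redis.delete(f"engagements:{tenant_id}")
```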

Related Pages

  • Templates -- Reusable engagement configurations for the task planner.
  • File Uploads -- Attaching reference documents and style guides.
  • Deliverables -- Viewing, reviewing, and exporting engagement output.
  • Analytics -- Quality scoring and performance trends.
  • Client Portal -- Sharing deliverables with external stakeholders.
