Deliverables
What Are Deliverables?
Deliverables are the output documents produced by an engagement's AI crew. Each task in an engagement generates one deliverable -- a markdown file containing the agent's work product. Examples include strategy reports, market analyses, implementation plans, design briefs, and research summaries.
Deliverables are stored in the database with their full content, word count, version number, and approval status.
Viewing Deliverables
On an engagement's detail page, the Deliverables panel lists all client-facing deliverables produced by the most recent run. Internal engine files (run reports, operations reviews, library entries) are automatically separated into a collapsible Run Analytics section below.
Each deliverable entry shows:
- Task name -- The human-readable task name from the engagement config (e.g., "Market Analysis & Social Media Strategy") instead of the raw filename. The original filename is shown in smaller text for reference.
- Content preview -- A ~150-character preview of the deliverable's content (markdown stripped).
- Version -- The current version number (v1, v2, etc.).
- Word count -- Total words in the deliverable.
- Approval status -- A color-coded dot indicating the review state.
- Date -- When the deliverable was created.
Click any deliverable to open its detail view, which renders the markdown content in full.
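The list preview is a plain-text excerpt with markdown formatting removed. A minimal sketch of how such a preview could be derived (the function name and exact stripping rules are illustrative, not the dashboard's actual implementation):

```typescript
// Illustrative only: derive a short plain-text preview from markdown.
function deliverablePreview(markdown: string, maxChars = 150): string {
  const plain = markdown
    .replace(/```[\s\S]*?```/g, "")          // drop fenced code blocks
    .replace(/^#{1,6}\s+/gm, "")             // strip heading markers
    .replace(/\[([^\]]*)\]\([^)]*\)/g, "$1") // keep link text, drop URL
    .replace(/[*_`>]/g, "")                  // strip emphasis/quote characters
    .replace(/\s+/g, " ")                    // collapse whitespace
    .trim();
  return plain.length <= maxChars
    ? plain
    : plain.slice(0, maxChars).trimEnd() + "…";
}
```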
Run Analytics
After a run completes, a collapsible Run Analytics section appears below the deliverables list. This section contains:
- Agent Participation -- A grid of contributing agents with their department (color-coded), step count, and LLM model(s) used. Data is loaded from the analytics API.
- Internal files -- Grouped by category: Run Reports, Operations Reviews, and Library Entries. These are engine analytics files, not client deliverables.
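The Agent Participation grid aggregates per-agent data from the run's step records. The payload shape below is an assumption for illustration (the analytics API's actual field names may differ), but it shows the kind of summary the grid displays:

```typescript
// Assumed shape of one step record from the analytics API (illustrative).
interface AgentStep {
  agent: string;
  department: string;
  model: string;
}

// One row of the Agent Participation grid.
interface AgentParticipation {
  agent: string;
  department: string;
  steps: number;
  models: string[];
}

// Group step records by agent, counting steps and collecting distinct models.
function summarizeParticipation(steps: AgentStep[]): AgentParticipation[] {
  const byAgent = new Map<string, AgentParticipation>();
  for (const s of steps) {
    const entry = byAgent.get(s.agent) ??
      { agent: s.agent, department: s.department, steps: 0, models: [] };
    entry.steps += 1;
    if (!entry.models.includes(s.model)) entry.models.push(s.model);
    byAgent.set(s.agent, entry);
  }
  return [...byAgent.values()];
}
```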
Deliverable Provenance
At the bottom of each deliverable's detail page, a Provenance section shows which AI agent produced the deliverable, which LLM model was used, the observer quality score (if available), and the token cost for that specific task.
Versioning
Each time an engagement is re-run, the engine creates new versions of the deliverables rather than overwriting existing ones. Versions are numbered sequentially (v1, v2, v3, and so on).
Version Timeline
On the deliverable detail page, a Versions sidebar on the right displays all versions of that deliverable. Each version entry shows:
- Version number
- Word count
- Creation date
- Approval status (as a color-coded dot)
Click View on any version to switch to viewing that version's content.
Diff Viewer
To compare two versions side by side, click Compare on any version in the timeline. This opens the diff viewer, which shows:
- A header indicating which versions are being compared (e.g., "v1 -> v2").
- Line-by-line differences with color highlighting:
- Green (+) lines indicate added content.
- Red (-) lines indicate removed content.
- Unchanged lines appear in the default text color.
- A summary count of added and removed lines at the top.
The Compare link encodes both versions in the URL as ?version=X&compare=Y query parameters, so you can share diff links with teammates.
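Because the diff view is fully described by those two query parameters, a shareable link can also be built programmatically. A minimal sketch (the /deliverables/... path segment is an assumption; substitute your actual route):

```typescript
// Build a shareable diff URL from the documented query parameters.
// The path prefix is illustrative, not the app's confirmed route.
function compareUrl(deliverableId: string, version: number, compare: number): string {
  const params = new URLSearchParams({
    version: String(version),
    compare: String(compare),
  });
  return `/deliverables/${deliverableId}?${params.toString()}`;
}
```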
Approval Workflow
Every deliverable goes through a review process before it is considered final. The approval workflow provides structured quality control over AI-generated output.
Approval Statuses
- Pending Review -- The default status when a deliverable is first created. Indicated by a yellow dot.
- Approved -- The deliverable has been reviewed and accepted. Indicated by a green dot.
- Revision Requested -- The reviewer wants changes. Indicated by a blue dot.
- Rejected -- The deliverable does not meet requirements. Indicated by a red dot.
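The four statuses and their dot colors can be summarized as a simple lookup. The status identifiers below are assumed for illustration (the API's exact string values may differ); the color mapping matches the descriptions above:

```typescript
// Assumed status identifiers; colors follow the documented dots.
type ApprovalStatus = "pending_review" | "approved" | "revision_requested" | "rejected";

const STATUS_DOT_COLOR: Record<ApprovalStatus, string> = {
  pending_review: "yellow",
  approved: "green",
  revision_requested: "blue",
  rejected: "red",
};
```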
Using the Approval Panel
Below the deliverable content, the Review panel displays the current approval status and provides action buttons:
- Approve -- Mark the deliverable as accepted. No notes are required (though you can add them).
- Request Revision -- Indicate that changes are needed. When you click this button, a notes text area appears. Enter your feedback describing what needs to change, then confirm. Notes are required for revision requests.
- Reject -- Mark the deliverable as not acceptable. Like revision requests, a notes text area appears and feedback is required.
After any review action, the panel updates to show:
- Who reviewed the deliverable (reviewer name or email).
- When the review occurred.
- Any review notes provided.
Review actions are tracked in the audit log and fire webhook events (deliverable.approved, deliverable.revision_requested, deliverable.rejected).
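If you consume these webhooks in your own integration, you can branch on the event name. A hedged sketch of such a consumer -- the payload fields shown are assumptions for illustration; only the three event names come from this page:

```typescript
// Assumed webhook payload shape (illustrative); the event names are
// the documented review events.
interface ReviewWebhookEvent {
  event: "deliverable.approved" | "deliverable.revision_requested" | "deliverable.rejected";
  deliverableId: string;
  reviewer: string;
  notes?: string;
}

// Turn a review event into a human-readable notification line.
function handleReviewEvent(evt: ReviewWebhookEvent): string {
  switch (evt.event) {
    case "deliverable.approved":
      return `Deliverable ${evt.deliverableId} approved by ${evt.reviewer}`;
    case "deliverable.revision_requested":
      return `Revision requested on ${evt.deliverableId}: ${evt.notes ?? "(no notes)"}`;
    case "deliverable.rejected":
      return `Deliverable ${evt.deliverableId} rejected: ${evt.notes ?? "(no notes)"}`;
  }
}
```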
Targeted Re-Run
If a deliverable needs revision after review, you can re-execute just that specific task without re-running the entire engagement. Click the Rerun action on the deliverable to trigger a targeted re-run of the associated task. This creates a new version of only that deliverable while leaving all other deliverables untouched.
This is particularly useful when combined with the approval workflow: request a revision, add notes explaining what to change, then trigger a targeted re-run.
PDF Export
Deliverables can be exported as professionally formatted PDF documents. The PDF renderer supports headings (H1-H4), bold, italic, lists, code blocks, blockquotes, markdown tables, and horizontal rules. Headings are protected against orphaning -- a heading near the bottom of a page is automatically pushed to the next page so it always appears with its following content.
Single Deliverable PDF
From any deliverable detail page, export that individual deliverable as a PDF. The PDF includes the rendered markdown content with proper formatting.
Full Engagement Report
From the engagement detail page, export a complete report PDF that bundles all deliverables from the engagement into a single document. The report includes:
- A cover page with the engagement name, client, and date.
- Tenant branding (logo and primary color, if configured).
- All deliverables in task order.
Research Brief Appendix
If your engagement used the Brief Researcher to refine the brief through an AI interview, the research transcript (00_brief_research_transcript.md) is always excluded from the main deliverable sections of the PDF. This keeps exported reports focused on final work product.
To include the research process in your PDF, check the "Include research brief" checkbox next to the export button. When checked, an Appendix: Research Brief & Interview page is added at the end of the PDF containing:
- Refined Brief -- The final brief text that was used to drive the engagement.
- Research Interview Transcript -- The full AI-assisted interview that refined the original brief (if a research session was conducted).
By default, the research appendix is excluded. The toggle is available on both the dashboard and the client portal.
AI Provenance Toggle
Next to every PDF export button, there is an "Include AI provenance" checkbox. When checked:
- Full report PDF: An "AI Provenance" appendix page is added, listing which agent and model produced each deliverable, along with observer scores and costs.
- Single deliverable PDF: A provenance footer section is added at the bottom of the page.
By default, provenance is excluded from PDFs (it's meta-information, not client-facing content). The toggle is available on both the dashboard and the client portal.
PDF exports are generated using @react-pdf/renderer and include your tenant's custom branding when available. A report.generated webhook event fires after PDF generation.
Word Count
Every deliverable tracks its word count, visible in both the deliverables list and the version timeline. This helps you gauge the depth and completeness of each output.
Task-Level Status Indicators
After a run completes, the Service Plan on the engagement detail page shows deliverable status indicators on each task card. A green "Delivered" pill means the task produced a standalone deliverable (with word count). An amber "Content included in" pill means the task's output was absorbed into a named downstream deliverable. This cross-references the deliverables list with the task pipeline so you can quickly spot gaps. See Engagements > Service Plan for details.
Related Guides
- Engagements -- Creating and running engagements that produce deliverables.
- Templates -- Saving engagement configurations for reuse.
- File Uploads -- Attaching reference documents that the AI crew can use during runs.