Overview
Technical due diligence on acquisition targets traditionally relies on CTO interviews, manual code sampling, and documentation review — covering less than 10% of the actual codebase. Critical risks hide in the 90% nobody reads: hardcoded secrets in utility modules, GPL dependencies buried three layers deep, PII flowing unencrypted through forgotten microservices.

CoreStory changes this equation. By ingesting and semantically understanding the target’s entire codebase, it gives your diligence team an AI-powered analyst that can answer specific risk questions against the full codebase — not a sample. The agent operates as an Oracle (explaining system behavior, architectural patterns, and data flows) and a Navigator (pointing to specific files, methods, and code paths where risks live).

Who this is for: M&A professionals, PE portfolio teams, technical diligence consultants, and acquiring engineering teams evaluating technology-heavy acquisitions.

What you’ll get: A structured workflow for interrogating a target codebase across four risk domains — technical debt, security, data/PII compliance, and integration complexity — producing auditable findings with specific file-level evidence.

Prerequisites
- A CoreStory account with the target codebase ingested and ingestion complete
- An AI coding agent with CoreStory MCP configured (see Supercharging AI Agents for setup)
- Read access to the target’s repository (for the agent to cross-reference CoreStory findings against source)
How It Works
CoreStory MCP Tools Used
This playbook uses the following tools from the CoreStory MCP server:

| Tool | Role in Diligence |
|---|---|
| list_projects | Identify and confirm the target project |
| get_project_prd | Retrieve the synthesized Product Requirements Document for business context |
| get_project_techspec | Retrieve the Technical Specification for architecture, data models, and security analysis |
| create_conversation | Open a named diligence thread for each audit workstream |
| send_message | Interrogate the codebase — the primary investigation tool |
| rename_conversation | Mark completed threads with a “RESOLVED” prefix for the audit trail |
| list_conversations | Review existing diligence threads |
| get_conversation | Retrieve conversation history for report synthesis |
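Under the hood, an MCP client invokes these tools via standard JSON-RPC `tools/call` requests. As a rough illustration, a single interrogation call might look like the following — note that the argument names (`conversation_id`, `message`) are assumptions for illustration, not the documented CoreStory schema:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "send_message",
    "arguments": {
      "conversation_id": "conv_123",
      "message": "List all hardcoded secrets. For each, provide the file path and line context."
    }
  }
}
```

Your agent framework issues these calls for you; you never need to write the JSON by hand, but knowing the shape helps when debugging MCP connectivity.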
The Diligence Workflow
M&A technical diligence with CoreStory follows a four-phase pattern:

- Setup — Confirm the target project, review synthesized specs for architectural orientation, and create named conversation threads for each workstream.
- Interrogate — Use send_message to ask specific risk questions. CoreStory answers from semantic understanding of the full codebase, citing specific files and code paths.
- Cross-reference — Validate critical findings against the actual source code. CoreStory provides the file paths and context; the agent (or your team) confirms.
- Synthesize — Compile findings into structured reports. Conversation history provides the audit trail.
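The four phases can be sketched as a small orchestration loop. This is a minimal illustration, not the real integration: `call_tool(name, arguments)` stands in for however your agent invokes CoreStory MCP tools, and the argument and response field names (`project_id`, `projects`, `id`, and so on) are assumptions.

```python
from typing import Callable, Dict, List

def run_diligence(call_tool: Callable[[str, dict], dict],
                  project_name: str,
                  workstreams: Dict[str, List[str]]) -> dict:
    """Drive the diligence workflow through a generic MCP tool caller.

    Cross-referencing (phase 3) is deliberately absent: validating
    findings against source is a human/agent step outside this loop.
    """
    # Phase 1: confirm the target project and pull the synthesized specs.
    projects = call_tool("list_projects", {})["projects"]
    project = next(p for p in projects if p["name"] == project_name)
    findings = {
        "techspec": call_tool("get_project_techspec", {"project_id": project["id"]}),
        "prd": call_tool("get_project_prd", {"project_id": project["id"]}),
        "threads": {},
    }

    # Phase 2: one named thread per workstream, interrogated query by query.
    for stream, queries in workstreams.items():
        conv = call_tool("create_conversation",
                         {"project_id": project["id"],
                          "name": f"DILIGENCE: {stream}"})
        findings["threads"][stream] = [
            call_tool("send_message",
                      {"conversation_id": conv["id"], "message": q})
            for q in queries
        ]
        # Phase 4 bookkeeping: a RESOLVED prefix preserves the audit trail.
        call_tool("rename_conversation",
                  {"conversation_id": conv["id"],
                   "name": f"RESOLVED - DILIGENCE: {stream}"})
    return findings
```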
Oracle Before Navigator
The “Oracle before Navigator” principle is especially important in diligence. Before searching for specific vulnerabilities or code paths, first use CoreStory to understand how the system is designed to work — its intended architecture, data flow patterns, and security model. This baseline makes it far easier to spot deviations, shortcuts, and risks.

Step-by-Step Walkthrough
Phase 1: Project Setup and Orientation
Start every diligence engagement by confirming the target and building architectural context.

Confirm the target project:
Call list_projects to list your available projects. Confirm the correct project before proceeding — this is a critical safety step when multiple targets may be under evaluation simultaneously.
Review synthesized specifications:
Call get_project_techspec and get_project_prd to retrieve CoreStory’s synthesized understanding of the codebase. This gives you architectural orientation before diving into risk-specific queries.
Create diligence threads:
Phase 2: Risk Interrogation
With architectural context established, use send_message to interrogate the codebase across risk domains. Each query goes through the conversation thread, and CoreStory draws on its semantic understanding of the entire codebase to answer.
The key principle: Ask specific questions. “Tell me about security” produces vague answers. “Identify all code locations that handle authentication tokens and describe how they’re stored, transmitted, and rotated” produces actionable findings with file paths.
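To keep queries consistently specific, some teams template them. Here is a small, purely illustrative helper (the helper name and phrasing are our own, not part of CoreStory):

```python
def evidence_query(subject: str, aspects: list[str]) -> str:
    """Compose an evidence-oriented diligence query: name the subject,
    enumerate the aspects to cover, and always demand file-level evidence."""
    return (f"Identify all code locations that handle {subject} and describe "
            f"how they're {', '.join(aspects)}. "
            f"For each location, provide the file path and line context.")

# Reproduces the shape of the token-handling query above.
print(evidence_query("authentication tokens",
                     ["stored", "transmitted", "rotated"]))
```

The point is not the code but the discipline: every query names a concrete subject, enumerates the aspects to cover, and ends with a demand for evidence.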
The following sections provide query patterns organized by risk domain. Use the ones relevant to your diligence scope.
Technical Debt & Obsolescence
Security & Secrets
Licensing & Open Source Risk
Data Handling & PII
Architecture & Integration Complexity
Phase 3: Cross-Reference and Validation
CoreStory provides findings with specific file paths and code context. For critical findings — especially security issues, licensing risks, and PII exposure — validate against the actual source code.

Phase 4: Synthesis and Reporting
After completing your interrogation queries, synthesize findings into structured reports. The conversation history serves as your evidence base. Use list_conversations and get_conversation to review the full diligence trail.
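A synthesis step might look like the following sketch. As before, `call_tool` and the response shapes (`conversations`, `messages`, `role`, `content`) are assumptions about the MCP integration, not the documented CoreStory schema:

```python
from typing import Callable

def synthesize_report(call_tool: Callable[[str, dict], dict],
                      project_id: str) -> str:
    """Compile resolved diligence threads into a markdown evidence report."""
    lines = ["# Technical Diligence Findings", ""]
    convs = call_tool("list_conversations",
                      {"project_id": project_id})["conversations"]
    for conv in convs:
        if not conv["name"].startswith("RESOLVED"):
            continue  # only completed workstreams belong in the report
        lines.append(f"## {conv['name']}")
        history = call_tool("get_conversation",
                            {"conversation_id": conv["id"]})
        for msg in history["messages"]:
            prefix = "**Q:**" if msg["role"] == "user" else "**A:**"
            lines.append(f"{prefix} {msg['content']}")
        lines.append("")
    return "\n".join(lines)
```

Filtering on the RESOLVED prefix is why the rename convention matters: it lets report generation distinguish finished workstreams from open ones.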
Advanced Workflows
The following end-to-end prompt templates combine the patterns above into complete diligence workflows. Each can be used as a single agent prompt or broken into phases.

Rapid Risk Audit
Scenario: A PE firm has one week to perform technical diligence on a target. The goal is to identify major risks and quantify technical debt for negotiation leverage. Timeline: 24–48 hours with CoreStory (vs. 2–4 weeks traditional).

Post-Merger Integration Planning
Scenario: The deal is closing. The acquiring engineering team needs an integration blueprint — what overlaps, what conflicts, and where the friction will be.

Security & PII Compliance Audit
Scenario: The diligence team needs to verify the target’s data handling practices for GDPR, CCPA, or other regulatory compliance before close.

Prompting Patterns Reference
Investigation Patterns
Effective diligence queries are specific and evidence-oriented. They ask for file paths, concrete examples, and traceable findings — not summaries.

| Pattern | Example |
|---|---|
| Enumerate with evidence | “List all hardcoded secrets. For each, provide the file path, line context, and what it’s used for.” |
| Trace a flow | “Map the data flow for customer onboarding from signup through account creation. Show every service, database write, and external API call.” |
| Compare to standard | “How is authentication handled? Compare the implementation to standard OAuth 2.0 patterns and identify deviations.” |
| Assess coverage | “What test coverage exists for the payment processing module? Identify critical paths with no test coverage.” |
| Find patterns | “Identify all locations where database queries are constructed from user input. Flag any that don’t use parameterized queries.” |
| Quantify scope | “How many external API integrations exist? For each, identify the provider, what data is exchanged, and whether there are retry/fallback mechanisms.” |
Query Specificity
Vague queries produce vague answers. Always include specific anchors:

| Instead of | Use |
|---|---|
| “Tell me about security” | “Identify all code locations that handle authentication tokens and describe how they’re stored, transmitted, and rotated” |
| “How’s the code quality?” | “Identify modules with circular dependencies, god classes over 500 lines, or methods with cyclomatic complexity above 15” |
| “Are there any risks?” | “List all third-party dependencies with known CVEs, sorted by severity, with the file that imports each one” |
| “How does data flow?” | “Trace the PII data flow for the customer registration process from HTTP request through database write, identifying encryption at each stage” |
Multi-Query Threading
For thorough coverage, chain queries within a conversation thread. Each send_message builds on prior context:
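An illustrative chain for a security workstream — each message narrows based on the previous answer (the specific questions here are examples, not prescribed queries):

```text
1. "What authentication mechanisms exist in this codebase? List the modules involved."
2. "For the token handling you identified, where are signing keys and secrets loaded from? Provide file paths."
3. "Are any of those keys or secrets hardcoded or committed to the repository? Show the exact locations."
```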
Best Practices
Create separate threads per workstream. Don’t mix security audit queries with integration planning queries. Separate conversations keep findings organized and make it easier to hand off specific workstreams to different team members.

Start with the synthesized specs. Use get_project_techspec and get_project_prd before diving into send_message queries. The specs give you architectural vocabulary — service names, data model names, API endpoint patterns — that make your queries far more specific and productive.
Ask for file paths, always. Every finding in a diligence report needs evidence. Train your queries to always request file paths and line context. “Identify X and provide the file path” should be your default pattern.
Cross-reference critical findings. CoreStory’s analysis is based on its ingestion snapshot. For findings that materially affect deal terms — licensing poison pills, PII exposure, critical security vulnerabilities — always validate against the current source.
Use conversation rename for audit trail. Rename completed threads with a “RESOLVED” prefix using rename_conversation. This creates a searchable record that survives team handoffs and can be referenced months later during post-merger integration.
Scope your queries to avoid noise. A query like “find all security issues” will return an overwhelming response. Break it into targeted categories: hardcoded secrets, authentication flow, input validation, dependency vulnerabilities. Each produces focused, actionable findings.
Agent Implementation Guides
Claude Code
Setup
- Configure the CoreStory MCP server in your Claude Code settings (see CoreStory MCP Server Setup Guide).
- Add the skill file. Claude Code uses skills (in the .claude/skills/ directory) as its preferred mechanism for teaching Claude specialized workflows. Create .claude/skills/ma-due-diligence/SKILL.md with the content from the skill file below.
- (Optional) Add the slash command. Slash commands provide a shortcut to invoke the workflow. Create .claude/commands/ma-due-diligence.md with a short description referencing the four-phase diligence workflow.
- Commit to version control for team sharing:
Usage
The skill activates automatically when Claude Code detects diligence-related requests:

Tips
- Skills auto-load from directories added via --add-dir, so team-shared skills work across machines.
- Claude Code detects file changes during sessions — you can edit the skill file and it takes effect immediately.
- Keep the SKILL.md under 500 lines for reliable loading.
- Create separate CoreStory conversations per risk domain (security, licensing, PII) to keep findings organized.
Skill File
Save as .claude/skills/ma-due-diligence/SKILL.md:
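The exact skill content will depend on your team’s conventions; a minimal sketch, assuming the workflow described above (the frontmatter fields and wording are illustrative, not canonical):

```markdown
---
name: ma-due-diligence
description: Four-phase M&A technical due diligence using CoreStory MCP tools
---

# M&A Due Diligence

When asked to perform technical diligence on a target codebase:

1. Setup: call list_projects to confirm the target, then get_project_techspec
   and get_project_prd for architectural orientation.
2. Interrogate: create one conversation per workstream (security, licensing,
   PII, integration) and use send_message with specific, evidence-oriented
   queries. Always request file paths and line context.
3. Cross-reference: validate critical findings against the actual source.
4. Synthesize: compile findings with get_conversation, then rename completed
   threads with a "RESOLVED" prefix via rename_conversation.
```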
GitHub Copilot
Setup
- Configure the CoreStory MCP server in your VS Code settings. Add it to your MCP server configuration (typically in VS Code settings JSON or the MCP configuration UI).
- Add custom instructions. Copilot reads project-level instructions from .github/copilot-instructions.md. This is the primary mechanism for teaching Copilot specialized workflows. Create .github/copilot-instructions.md with the content from the custom instructions file below.
- (Optional) Add a reusable prompt file. Prompt files (.github/prompts/ma-due-diligence.prompt.md) provide reusable task templates. See the prompt file below.
- Commit to version control:
Usage
In Copilot Chat (agent mode), natural language triggers the workflow:

Tips
- Copilot’s agent mode (available in VS Code) can execute terminal commands and edit files autonomously — this workflow works best in agent mode.
- You can add path-specific instruction files (e.g., .github/instructions/diligence.instructions.md with applyTo: "**") for project-wide guidance.
- On Team/Enterprise plans, organization-level instructions apply across all repositories.
- Copilot automatically references .github/copilot-instructions.md in chat responses.
Custom Instructions
Save as .github/copilot-instructions.md:
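A minimal sketch of what the instructions might contain, assuming the workflow described in this playbook (wording is illustrative):

```markdown
# M&A Technical Due Diligence with CoreStory

When a request involves technical diligence on a target codebase, use the
CoreStory MCP tools and follow the four-phase workflow: Setup (list_projects,
get_project_techspec, get_project_prd), Interrogate (create_conversation and
send_message per workstream), Cross-reference (validate findings against
source), Synthesize (get_conversation, rename resolved threads with a
"RESOLVED" prefix).

Always ask CoreStory for file paths and line context, and keep one
conversation per risk domain (security, licensing, PII, integration).
```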
Prompt File (Optional)
Save as .github/prompts/ma-due-diligence.prompt.md:
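One possible shape for the prompt file, sketched under the assumption of standard prompt-file conventions (the frontmatter and input-variable syntax are assumptions, so check them against your Copilot version):

```markdown
---
description: Run a four-phase M&A technical diligence audit via CoreStory
---

Perform technical due diligence on the CoreStory project named ${input:project}.
Create one conversation per workstream (security, licensing, PII, integration),
interrogate each with specific, evidence-oriented queries that request file
paths, and produce a findings report with file-level citations.
```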
Cursor
Setup
- Configure the CoreStory MCP server in Cursor’s MCP settings (Settings → MCP Servers, or edit the MCP config JSON directly).
- Add project rules. Cursor uses rules in .cursor/rules/ directories; each rule folder contains a RULE.md file. Create .cursor/rules/ma-due-diligence/RULE.md with the content from the rule file below.
- Commit to version control:
Usage
In Cursor’s Composer or Chat, the rule activates automatically for diligence-related requests:

Tips
- Rules with alwaysApply: true load in every session. Set this if your team regularly performs diligence. Otherwise, use alwaysApply: false with a good description so Cursor loads it intelligently when relevant.
- The legacy .cursorrules file still works, but the .cursor/rules/ directory structure is the current recommended approach.
- On Team/Enterprise plans, team rules apply across all members.
Project Rule
Save as .cursor/rules/ma-due-diligence/RULE.md:
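A minimal sketch of the rule, using the alwaysApply and description fields mentioned in the tips above (the body wording is illustrative):

```markdown
---
description: M&A technical due diligence workflow using CoreStory MCP tools
alwaysApply: false
---

For diligence-related requests, follow the four-phase CoreStory workflow:
confirm the project (list_projects), review the synthesized specs
(get_project_techspec, get_project_prd), interrogate each risk domain in a
separate conversation (create_conversation, send_message) with queries that
demand file paths, validate critical findings against source, and mark
completed threads with a "RESOLVED" prefix (rename_conversation).
```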
Factory.ai
Setup
- Configure the CoreStory MCP server in your Factory.ai environment. Verify with the /mcp command that CoreStory tools are accessible.
- Add the custom droid. Factory.ai uses droids stored in .factory/droids/ (project-level) or ~/.factory/droids/ (personal). Create .factory/droids/ma-due-diligence.md with the content from the droid file below.
- Commit to version control (for project-level droids):
Usage
Invoke the droid via the Task tool:

Tips
- Use model: inherit in the YAML frontmatter to use whatever model the session is configured with.
- The tools field in frontmatter explicitly lists required MCP tools — this restricts the droid to only the CoreStory tools needed for diligence.
- The Task tool that invokes droids requires experimental features to be enabled.
- For thorough diligence, the droid’s CoreStory queries may produce long streaming responses — this is expected.
Custom Droid
Save as .factory/droids/ma-due-diligence.md:
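A minimal sketch of the droid file, using the model: inherit and tools frontmatter fields described in the tips above (the exact frontmatter schema and body wording are illustrative):

```markdown
---
name: ma-due-diligence
model: inherit
tools:
  - list_projects
  - get_project_prd
  - get_project_techspec
  - create_conversation
  - send_message
  - rename_conversation
  - list_conversations
  - get_conversation
---

You are a technical due diligence analyst. Given a target CoreStory project,
run the four-phase workflow: confirm the project, review the synthesized
specs, interrogate each risk domain (technical debt, security, licensing,
PII, integration) in its own conversation with evidence-oriented queries,
and rename completed threads with a "RESOLVED" prefix. Every finding must
cite file paths.
```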
Troubleshooting
CoreStory gives generic or shallow answers
Your queries are too broad. Include specific anchors — service names, module names, technology names, or code patterns. After reviewing the Technical Specification, use the vocabulary it provides (component names, data model names) in your queries.

Response exceeds token limit
Break large queries into smaller scopes. Instead of “Tell me everything about the data layer,” ask about specific data flows or specific models. If a response is truncated, ask the agent to continue or narrow the scope.

Project not found or unavailable
Verify the project has completed ingestion by calling list_projects and checking the status. If the project shows as in-progress, wait for ingestion to complete before starting diligence queries. If the project doesn’t appear at all, confirm the MCP token has access to the correct organization.
Findings don’t match current source code
CoreStory’s analysis reflects the codebase at ingestion time. If the target has pushed significant changes since ingestion, request a re-ingestion before finalizing your diligence report. Always note the ingestion date in your report for traceability.
Agent can’t access CoreStory tools
See the Supercharging AI Agents troubleshooting section for MCP connection issues.
What’s Next
- For agent setup and configuration: Supercharging AI Agents with CoreStory
- For ongoing development workflows: Agentic Bug Resolution, Feature Implementation, Spec-Driven Development
- For MCP server reference: CoreStory MCP Server Setup & Usage Guide