Overview
Feature Implementation is a workflow for building new features, enhancements, and general development work using an AI coding agent backed by CoreStory’s code intelligence. The agent uses CoreStory’s MCP tools to understand your system’s architecture, identify implementation patterns, and navigate to the right code locations before writing a single line — then implements the feature using a test-driven approach.

CoreStory serves two roles in this workflow:

- Oracle — answers questions about system architecture, design patterns, naming conventions, and invariants. The agent queries CoreStory to understand how the system works and what patterns to follow before coding.
- Navigator — points to specific files, extension points, base classes, and data structures. The agent queries CoreStory to understand where to implement and what to reuse.
Prerequisites
- CoreStory account with at least one project that has completed ingestion
- CoreStory MCP server connected to your AI coding agent (see the CoreStory MCP Server Setup Guide)
- AI coding agent — this playbook includes implementation guides for Claude Code, GitHub Copilot, Cursor, and Factory.ai. The generic workflow applies to any MCP-capable agent.
- Optional: Ticketing system MCP (GitHub Issues, Jira, Linear, Azure DevOps) for automatic ticket intake
How It Works
The workflow has six phases. Each phase builds on the previous one. Skipping phases — especially the oracle and navigator phases — increases the risk of architectural misalignment, missed patterns, and avoidable rework.

| Phase | Name | Purpose | CoreStory Role |
|---|---|---|---|
| 1 | Ticket Intake | Gather requirements, select project, create conversation | Setup |
| 2 | Oracle | Understand architecture, patterns, conventions, invariants | Oracle |
| 3 | Navigator | Map requirements to files, data structures, extension points | Navigator |
| 4 | TDD Implementation | Write failing tests from acceptance criteria, then implement | Validation |
| 5 | Feature Completion | Edge cases, regression check, performance/security validation | Validation |
| 6 | Completion | Commit, update ticket, preserve conversation | Knowledge capture |
The workflow uses four CoreStory MCP tools:

| Tool | When Used |
|---|---|
| list_projects | Phase 1 — select the right project |
| create_conversation | Phase 1 — create a persistent investigation thread |
| send_message | Phases 2–5 — all queries to CoreStory |
| rename_conversation | Phase 6 — mark conversation as completed |
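Under the hood, each of these is a standard MCP tool call over JSON-RPC. The sketch below shows roughly what the agent sends for a Phase 2 query; the `tools/call` framing follows the MCP specification, but the argument names (`conversation_id`, `message`) are illustrative, not CoreStory's actual schema:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "send_message",
    "arguments": {
      "conversation_id": "conv-123",
      "message": "What architecture patterns govern the export subsystem?"
    }
  }
}
```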
Step-by-Step Walkthrough
Phase 1: Ticket Intake & Context Gathering
Objective: Import ticket details and set up the CoreStory implementation environment.

1.1 — Get ticket details. If a ticketing MCP is connected, the agent fetches the ticket directly; otherwise, provide the ticket description and acceptance criteria in your prompt.

1.2 — Select the project and create a conversation. The agent calls `list_projects` to pick the right project, then `create_conversation` to open a persistent investigation thread for the ticket.

Phase 2: Understanding System Architecture (Oracle Phase)
Objective: Understand how the system works and where new code should integrate, before writing anything. Without this phase, the agent risks implementing features that don’t follow existing patterns, break architectural constraints, or duplicate functionality that already exists.

2.1 — Query system architecture. Ask CoreStory about the feature area.

2.2 — Query patterns and conventions. CoreStory’s answers surface naming conventions (e.g., {Format}ExportService), async/sync patterns, error handling approaches, and critical invariants (e.g., “all exports require authentication”).
2.3 — Query historical context.
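The exact query wording is up to the agent; prompts along these lines (illustrative, not CoreStory-prescribed) cover steps 2.1–2.3:

```text
- What is the overall architecture of the [feature area] subsystem?
- What naming conventions, base classes, and error handling patterns do
  existing services in this area follow?
- What invariants must any new code in this area preserve?
- Has similar functionality been built, changed, or removed before, and why?
```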
Phase 3: Implementation Planning (Navigator Phase)
Objective: Map feature requirements to specific code locations and an implementation strategy.

3.1 — Identify extension points.

Phase 4: Test-First Implementation
Objective: Write failing tests that define the feature, then implement code to make them pass. Tests come before implementation code. This is non-negotiable in the workflow — it ensures acceptance criteria are codified, requirements are clearly understood, and regressions are caught immediately.

4.1 — Write acceptance tests from the criteria gathered in Phase 1, following the architecture patterns from Phase 2 and data structures from Phase 3.

Phase 5: Feature Completion
Objective: Add edge case coverage, ensure quality, prevent regressions.

5.1 — Identify edge cases by querying CoreStory.

Phase 6: Completion & Knowledge Capture
Objective: Close the loop — commit, document, and preserve knowledge.

6.1 — Update ticket (if ticketing MCP is connected). Post an implementation summary with files created/modified, pattern followed, test count, and coverage.

6.2 — Commit with rich context. The commit message should document the feature, implementation approach, architectural alignment, and testing scope.

Agent Implementation Guides
Claude Code
Setup
1. Connect the CoreStory MCP server. Run the connection command from the CoreStory MCP Server Setup Guide in your terminal.
2. (Optional) Connect a ticketing MCP server for automatic ticket intake:
   - Jira: See our Jira Integration playbook for full setup
   - GitHub Issues: GitHub MCP Server
   - Azure DevOps: Azure DevOps MCP Server
   - Linear: Linear MCP Server
3. Add the skill file. Create `.claude/skills/implement-feature/SKILL.md` with the contents from the Skill File section below, and commit it to version control so the whole team gets it.
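As a concrete sketch, scaffolding the skill file from the shell looks like this; the frontmatter below is a stub, and the real body comes from the Skill File section later in this guide:

```shell
# Create the skill directory and a stub SKILL.md (replace the body with the
# full skill file from the Skill File section below).
mkdir -p .claude/skills/implement-feature
cat > .claude/skills/implement-feature/SKILL.md <<'EOF'
---
name: implement-feature
description: Six-phase feature implementation workflow backed by CoreStory
---
EOF
ls .claude/skills/implement-feature
```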
Usage
The skill activates automatically when Claude Code detects feature implementation requests.

Tips
- Skills auto-load from directories added via `--add-dir`, so team-shared skills work across machines.
- Claude Code detects file changes during sessions — you can edit the skill file and it takes effect immediately.
- Keep the SKILL.md under 500 lines for reliable loading.
- The skill file includes structured output templates so Claude reports progress at each phase.
- Let it run. The workflow is designed for autonomous execution. Interrupting mid-phase breaks the chain of context.
- Provide good acceptance criteria. The quality of the agent’s output is directly proportional to the clarity of the input. Vague tickets produce vague implementations.
- Use the skill for systematic work, plain prompts for quick tasks. Not every feature needs the full six-phase treatment. A two-line config change doesn’t need a CoreStory investigation.
Skill File
Save as `.claude/skills/implement-feature/SKILL.md`:
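The full skill file is not reproduced here; a minimal skeleton, assuming Claude Code's standard `name`/`description` frontmatter, would look something like:

```markdown
---
name: implement-feature
description: Implement features with CoreStory's six-phase oracle/navigator workflow
---

# Implement Feature

1. Ticket intake: fetch the ticket, call `list_projects`, then `create_conversation`.
2. Oracle: query architecture, patterns, conventions, and invariants via `send_message`.
3. Navigator: map requirements to files, data structures, and extension points.
4. TDD: write failing acceptance tests, then implement until they pass.
5. Feature completion: edge cases, regression check, performance/security validation.
6. Completion: commit, update the ticket, `rename_conversation` to mark done.

Report progress at the end of each phase before starting the next.
```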
GitHub Copilot
Setup
- Configure the CoreStory MCP server in your VS Code settings. Add it to your MCP server configuration (typically in VS Code settings JSON or the MCP configuration UI).
- Add custom instructions. Copilot reads project-level instructions from `.github/copilot-instructions.md`. This is the primary mechanism for teaching Copilot specialized workflows: create `.github/copilot-instructions.md` with the content from the custom instructions file below.
- (Optional) Add a reusable prompt file. Prompt files (`.github/prompts/implement-feature.prompt.md`) provide reusable task templates. See the prompt file below.
- Commit the files to version control.
Usage
In Copilot Chat (agent mode), natural language triggers the workflow.

Tips
- Copilot’s agent mode (available in VS Code) can execute terminal commands and edit files autonomously — this workflow works best in agent mode.
- You can add path-specific instruction files (e.g., `.github/instructions/backend.instructions.md` with `applyTo: "src/backend/**"`) for component-specific guidance.
- On Team/Enterprise plans, organization-level instructions apply across all repositories.
- Copilot automatically references `.github/copilot-instructions.md` in chat responses.
Custom Instructions
Save as `.github/copilot-instructions.md`:
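As with the skill file, the full instructions are not reproduced here; an abbreviated sketch of what the file might contain:

```markdown
# Feature Implementation Workflow

When asked to implement a feature, use the CoreStory MCP tools and follow
the six-phase workflow: ticket intake, oracle (query architecture and
conventions before coding), navigator (locate files and extension points),
test-first implementation, feature completion (edge cases and regressions),
and completion (commit with context, update the ticket, and rename the
CoreStory conversation).
```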
Prompt File (Optional)
Save as `.github/prompts/implement-feature.prompt.md`:
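A minimal sketch, assuming the standard prompt-file frontmatter fields (`mode`, `description`):

```markdown
---
mode: agent
description: Implement a feature using the CoreStory six-phase workflow
---

Implement the requested feature. Follow the six-phase CoreStory workflow
defined in the repository's custom instructions, and report progress at
each phase boundary.
```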
Cursor
Setup
- Configure the CoreStory MCP server in Cursor’s MCP settings (Settings → MCP Servers, or edit the MCP config JSON directly).
- Add project rules. Cursor uses rules in `.cursor/rules/` directories, and each rule folder contains a `RULE.md` file. Create `.cursor/rules/implement-feature/RULE.md` with the content from the rule file below.
- Commit to version control.
Usage
In Cursor’s Composer or Chat, the rule activates automatically for feature-related requests.

Tips
- Rules with `alwaysApply: true` load in every session. Set this if your team regularly implements features through Cursor. Otherwise, use `alwaysApply: false` with a good `description` so Cursor loads it intelligently when relevant.
- The legacy `.cursorrules` file still works, but the `.cursor/rules/` directory structure is the current recommended approach.
- On Team/Enterprise plans, team rules apply across all members.
Project Rule
Save as `.cursor/rules/implement-feature/RULE.md`:
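A minimal sketch of the rule file, using the `description` and `alwaysApply` frontmatter fields discussed in the tips above:

```markdown
---
description: Six-phase CoreStory feature implementation workflow
alwaysApply: false
---

Before implementing a feature, query CoreStory for architecture and
conventions (oracle), then for target files and extension points
(navigator). Write failing tests from the acceptance criteria before any
implementation code. Afterwards, cover edge cases, check for regressions,
and commit with rich context.
```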
Factory.ai
Setup
- Configure the CoreStory MCP server in your Factory.ai environment. Verify with the `/mcp` command that CoreStory tools are accessible.
- Add the custom droid. Factory.ai uses droids stored in `.factory/droids/` (project-level) or `~/.factory/droids/` (personal). Create `.factory/droids/implement-feature.md` with the content from the droid file below.
- Commit to version control (for project-level droids).
Usage
Invoke the droid via the Task tool.

Tips
- Use `model: inherit` in the YAML frontmatter to use whatever model the session is configured with.
- The `tools` field in frontmatter can explicitly list required MCP tools if you want to restrict the droid’s capabilities.
- The Task tool that invokes droids requires experimental features to be enabled.
- For complex features, the droid’s CoreStory queries may produce long streaming responses — this is expected.
Custom Droid
Save as `.factory/droids/implement-feature.md`:
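A minimal sketch of the droid file, using the `model` and `tools` frontmatter fields described in the tips above (the `name` field is an assumption):

```markdown
---
name: implement-feature
model: inherit
tools: ["list_projects", "create_conversation", "send_message", "rename_conversation"]
---

Implement features using CoreStory's six-phase workflow: ticket intake,
oracle, navigator, TDD implementation, feature completion, and knowledge
capture. Query CoreStory before writing code, write failing tests before
implementation, and report progress at each phase boundary.
```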
Tips & Best Practices
Start with the oracle, not the editor. The most common mistake is jumping straight to implementation. Even experienced developers benefit from the oracle phase — CoreStory often surfaces patterns, utilities, and conventions that aren’t obvious from reading code.

One conversation per ticket. Don’t reuse CoreStory conversations across unrelated tickets. Each conversation builds a coherent context thread. Mixing topics dilutes the quality of responses.

Test at multiple levels. Follow the testing pyramid: many unit tests (fast, isolated), some integration tests, few end-to-end tests. The acceptance tests from Phase 4 are typically integration-level; supplement with unit tests for individual components.

Each test should verify one behavior. Resist the temptation to test multiple acceptance criteria in a single test function. Isolated tests are easier to debug when they fail.

Name tests descriptively. Pattern: `test_[feature]_[scenario]_[expected_outcome]`. When a test fails six months later, the name should tell someone what broke without reading the test body.
Validate with CoreStory at key transitions. Query CoreStory after writing tests (are they comprehensive?) and after implementing (does this align?). These validation checkpoints catch misalignment early.
Feature flags for gradual rollout. If the feature warrants a gradual rollout, ask CoreStory about existing feature flag patterns in the codebase and implement accordingly.
Security-sensitive features deserve extra scrutiny. If the feature touches authentication, authorization, or sensitive data, add a dedicated CoreStory query: “What security considerations apply to [feature]? What auth patterns should I follow?” Then add security-specific tests for authentication, authorization, and input sanitization.