# 🧠 Vision Blueprint

## 🧭 What is a Vision Blueprint?
A Vision Blueprint is the foundational artifact produced by the 🧠 Vision Architect Agent in the ConnectSoft AI Software Factory.
It formalizes human input — ideas, problems, business goals — into a structured, agent-ready, memory-traceable specification for what a software system should do.
In the Factory, this is where intent becomes plan — before architecture, before code, before tickets.
## 🎯 Why It Exists
Many software projects begin with vague input: “We want a platform for X,” or “We need an app that does Y.”
The Vision Blueprint translates that input into a first-class blueprint, rich with:
- Problem definitions
- Business and user opportunities
- Draft solution framing
- Stakeholders and success metrics
- Early feature set decomposition
- Domain-specific signals and metadata
It serves as the starting point for all downstream generation agents, including Product, Architecture, QA, and DevOps clusters.
## 🏭 Its Role in the AI Software Factory
The Vision Blueprint is created during the first wave of orchestration, as part of the Vision and Planning Cluster:
```mermaid
flowchart TD
    HumanInput["🧍 Human Prompt / Problem Statement"]
    VisionArchitect["🧠 Vision Architect Agent"]
    VisionBlueprint["📘 Vision Blueprint"]
    ProductManager["👩💼 Product Manager Agent"]
    EnterpriseArchitect["🏛️ Enterprise Architect Agent"]

    HumanInput --> VisionArchitect
    VisionArchitect --> VisionBlueprint
    VisionBlueprint --> ProductManager
    VisionBlueprint --> EnterpriseArchitect
```
> 📌 The Vision Blueprint triggers downstream planning, decomposition, scoping, and architecture generation.
---
### 🧠 Agent-Created, Human-Aware
* **Generated by:** `Vision Architect Agent`
* **Reviewed by:** Product Manager (optional)
* **Consumed by:** Product Owner, Architects, QA, Orchestrators
* **Stored in:** Memory graph, Blob storage, and trace-indexed observability layer
---
### 📦 Output Shape
Every Vision Blueprint is generated in:
* `📘 Markdown` — rich human-readable form (Studio-ready)
* `🧾 JSON` — machine-readable form (used by downstream agents)
* `🧠 Embedded form` — semantically embedded for vector search and context injection
---
### 📁 File Naming Convention
```bash
blueprints/vision/{project-name}/vision-blueprint.md
blueprints/vision/{project-name}/vision-blueprint.json
```

Example (for a project named `vetspire-referrals`):

```bash
blueprints/vision/vetspire-referrals/vision-blueprint.md
blueprints/vision/vetspire-referrals/vision-blueprint.json
```
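As an illustration of the convention above, a small helper can build both paths for a project. The `vision_blueprint_paths` function is a hypothetical name for this sketch, not part of the Factory codebase:

```python
# Hypothetical helper (illustrative only): builds the canonical blueprint
# file paths for a project, following the naming convention above.
from pathlib import Path

def vision_blueprint_paths(project_name):
    base = Path("blueprints") / "vision" / project_name
    return {
        "markdown": base / "vision-blueprint.md",
        "json": base / "vision-blueprint.json",
    }

paths = vision_blueprint_paths("vetspire-referrals")
print(paths["markdown"].as_posix())
# blueprints/vision/vetspire-referrals/vision-blueprint.md
```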
### 🔁 Regeneration and Correction

- Regenerable with a new prompt or signal
- Automatically corrected if missing critical blocks
- Versioned (`v1`, `v2`, etc.) and tagged (`draft`, `validated`, `approved`)

### 🧩 Summary
| Property | Value |
|---|---|
| 📄 Format | Markdown + JSON |
| 🧠 Generated by | Vision Architect Agent |
| 🧭 Purpose | Transform ideas into structured, validated software vision |
| 🧱 Consumed by | Product, Architecture, QA, DevOps |
| 🧠 Memory Shape | Vector + Trace ID + Project Scope |
| 📈 Observability Tags | traceId, agentId, originPrompt, confidenceScore |
## 📐 Position in the Factory Workflow

The Vision Blueprint is the first traceable artifact in the AI Software Factory lifecycle — it anchors the entire generation process by capturing business context, system intent, and product framing.

> Every module, every microservice, every document — starts from a Vision.

### 🏗️ Where It Fits in the Factory
The Vision Blueprint is created during the Vision and Planning phase of the orchestration pipeline.
```mermaid
flowchart TD
    Trigger["🟢 New Prompt / Product Request"]
    VisionAgent["🧠 Vision Architect Agent"]
    VisionDoc["📘 Vision Blueprint"]
    Planner["📌 Planner Agent"]
    ProductMgr["👩💼 Product Manager Agent"]
    Arch["🏛️ Enterprise Architect Agent"]
    Owner["🧑💻 Product Owner Agent"]

    Trigger --> VisionAgent
    VisionAgent --> VisionDoc
    VisionDoc --> Planner
    VisionDoc --> ProductMgr
    VisionDoc --> Arch
    VisionDoc --> Owner
```
---
### 🔄 What Triggers It
The blueprint is generated in response to:
| Trigger Type | Description |
| ------------------------------- | ----------------------------------------------------------- |
| `📝 PromptInputReceived` | A new product or feature request enters via Studio or CLI |
| `📂 BlueprintRequested:vision` | A planner explicitly requests a new vision document |
| `🧠 Orchestration Plan Started` | A multi-agent plan begins and requires vision as foundation |
---
### 🧠 Who Consumes It
| Agent / Role | Purpose |
| ---------------------------- | ------------------------------------------------------ |
| `Product Manager Agent` | Derives features, epics, personas, and value alignment |
| `Enterprise Architect Agent` | Translates vision into architectural domains |
| `Product Owner Agent` | Breaks it into backlog scope and iterations |
| `QA/Test Generator Agent` | Generates feature specs and BDDs from goals |
| `Studio` (human) | Displays Vision Blueprint, history, trace, and edits |
---
### 🧠 How It’s Referenced
Every artifact downstream maintains traceability back to its originating vision via:
| Field | Value Example |
| --------------- | ----------------------------------------- |
| `traceId` | `trace-vision-2025-06-08-a9fd` |
| `blueprintType` | `vision` |
| `originPrompt` | “Build a system to improve vet referrals” |
| `projectId` | `vetspire-referrals` |
| `version` | `v1` |
This allows validation, debugging, and rollback across the factory.
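As a sketch of how that lineage might be checked, assume a downstream artifact is a plain record carrying the fields above; the `has_vision_lineage` helper is hypothetical:

```python
# The traceability fields a downstream artifact carries (names from the
# table above); the lineage check itself is an illustrative sketch.
REQUIRED_TRACE_FIELDS = {"traceId", "blueprintType", "originPrompt", "projectId", "version"}

def has_vision_lineage(artifact):
    """True if the artifact can be traced back to an originating vision."""
    return REQUIRED_TRACE_FIELDS.issubset(artifact)

artifact = {
    "traceId": "trace-vision-2025-06-08-a9fd",
    "blueprintType": "vision",
    "originPrompt": "Build a system to improve vet referrals",
    "projectId": "vetspire-referrals",
    "version": "v1",
}
print(has_vision_lineage(artifact))  # True
```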
---
### 🧠 Memory & Storage Locations
| Layer | Role |
| ----------------- | ---------------------------------------- |
| `🧠 Vector DB` | Embeds semantic content for agent recall |
| `📦 Blob Storage` | Stores canonical `.md` and `.json` files |
| `📊 Metadata DB` | Stores version, traceId, confidence, etc |
---
### 📎 Downstream Propagation
The Vision Blueprint triggers creation of:
* `📦 Agentic SaaS Product Blueprint`
* `📄 Module Blueprints` (via decomposition)
* `📋 Backlog (Epics, Features, Stories)`
* `📜 Initial Test Plans`
* `🧪 Quality Criteria`
* `🧠 Planning Memory Node`
---
### ✅ Summary
| Role | Detail |
| ---------------------------- | --------------------------------------------------------------------- |
| 📥 Triggered by | Prompt input, planner orchestration, or manual request |
| 🧠 Created by | Vision Architect Agent |
| 🔗 Consumed by | Planning, Architecture, QA, Product Owner agents |
| 🧠 Stored in | Vector DB, Blob Storage, Metadata Graph |
| 📊 Used for | Planning, trace validation, regeneration, agent context injection |
| 🔁 Traceability Backbone For | All downstream blueprints, epics, tickets, services, tests, pipelines |
---
## 🧱 Structural Overview of the Vision Blueprint
The **Vision Blueprint** follows a **strict, modular structure**, enabling both human understanding and agentic processing.
Each section is designed to be independently validated, embedded, and consumed by specialized downstream agents.
> This structure balances creativity (freeform prompts) with **rigor** (structured, traceable blocks).
---
### 🧩 Canonical Sections
| Section | Description |
|-----------------------------|-----------------------------------------------------------------------------|
| `🧠 Problem Definition` | Captures the pain points, inefficiencies, or gaps in the current state |
| `💡 Opportunity Framing` | Business or user opportunities that make the idea viable and valuable |
| `🧪 Proposed Solution` | Conceptual outline of the envisioned system or platform |
| `📦 Software Type` | Classification: SaaS, API, App, Agentic System, CLI, etc. |
| `🧱 Initial Feature Set` | Decomposed feature blocks or capabilities expected in early MVP |
| `🧍 Stakeholder Personas` | User types, admins, external systems interacting with the solution |
| `🎯 Success Metrics` | Measurable indicators for success: adoption, ROI, time savings, etc. |
| `📎 Constraints & Context` | Optional section for regulatory, compliance, or environmental constraints |
| `🧠 Metadata & Traceability` | Fields like `traceId`, `domain`, `priority`, `confidenceScore` |
---
### 📘 Markdown Structure Example
````markdown
# Vision Blueprint
## 🧠 Problem Definition
...
## 💡 Opportunity Framing
...
## 🧪 Proposed Solution
...
## 📦 Software Type
...
## 🧱 Initial Feature Set
...
## 🧍 Stakeholder Personas
...
## 🎯 Success Metrics
...
## 📎 Constraints & Context
...
## 🧠 Metadata & Traceability
...
````
---
### 🧾 JSON Schema (Simplified Example)
```json
{
  "traceId": "vision-2025-06-08-vetspire-referrals",
  "problem": "...",
  "opportunity": "...",
  "solution": "...",
  "softwareType": "SaaS",
  "features": ["Referral Management", "Tracking Dashboard"],
  "personas": ["Vet", "Clinic Admin", "Referral Coordinator"],
  "successMetrics": ["Reduce referral leakage by 30%"],
  "constraints": ["HIPAA Compliant"],
  "originPrompt": "We want to reduce referral leakage...",
  "confidenceScore": 0.92
}
```

### 🔁 Why This Structure Matters
| Benefit | Enabled Capability |
|---|---|
| Modular Sections | Agents can consume only what they need |
| Traceable Blocks | Every block has a memory scope and lineage trace |
| Dual Format (MD + JSON) | Human readability + machine interoperability |
| AI-Ready Schema | Embedding, filtering, and scoring supported out of the box |
| Versionable + Comparable | Enables blueprint diffs, regressions, improvements |
### 🧱 Blueprint Validation Rules (high-level)
- ✅ Each required section must be non-empty
- ✅ Feature set must contain at least one capability
- ✅ Must include at least one persona and one success metric
- ✅ Must emit `traceId`, `originPrompt`, and `softwareType`
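A minimal sketch of these rules as code — the required fields and sections come from this document, but the validator itself is illustrative, not the Factory's actual generation-time implementation:

```python
# Illustrative validator for the high-level rules above; returns a list of
# error strings (empty list means the blueprint passes).
def validate_blueprint(bp):
    errors = []
    for field in ("traceId", "originPrompt", "softwareType"):
        if not bp.get(field):
            errors.append(f"missing required field: {field}")
    if not bp.get("features"):
        errors.append("feature set must contain at least one capability")
    if not bp.get("personas"):
        errors.append("at least one persona is required")
    if not bp.get("successMetrics"):
        errors.append("at least one success metric is required")
    return errors

bp = {"traceId": "t1", "originPrompt": "...", "softwareType": "SaaS",
      "features": ["Referral Lifecycle"], "personas": ["Vet"],
      "successMetrics": ["Reduce leakage by 30%"]}
print(validate_blueprint(bp))  # []
```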
### ✅ Summary
| Aspect | Value |
|---|---|
| 📘 Format | Markdown + JSON |
| 🧱 Sections | Problem, Opportunity, Solution, Type, Features, Personas, Metrics, Metadata |
| 🔄 Validation | Structural + semantic validation enforced at generation time |
| 🧠 AI Compatibility | Fully structured, embeddable, diffable, memory-traceable blueprint |
## 🧠 Problem Definition Block

The Problem Definition is the first and most critical section of the Vision Blueprint.
It captures the pain points, inefficiencies, user frustrations, or systemic gaps the envisioned software is meant to solve.

> A good solution starts with a clear, well-scoped problem. Agents cannot decompose what they don’t understand.
### 📌 Purpose
This block answers:
- ❓ What pain or deficiency currently exists?
- 🧍 Who experiences this problem?
- 🌍 In what domain or context does the problem occur?
- 🔁 Why hasn’t this been solved yet (if known)?
### 📄 Recommended Format

Use structured but natural-language prose (2–5 paragraphs max).
Clearly describe the *before* state, without mixing in proposed solutions.
✅ Good example:
In multi-location veterinary practices, referrals to specialists are often handled manually via email, phone, or fax.
These fragmented workflows cause delays, tracking failures, and lead to 30–40% of referrals never being completed.
Clinics lack a centralized view of active referrals and cannot measure referral leakage or resolution timelines.
Existing EMR systems do not support referral lifecycle management across external clinics.
❌ Poor example (too vague or solution-mixed):
We want a platform to track referrals better and help admins communicate.
A dashboard and automated alerts would be good.
### 🧠 Agent Use

| Agent | Use of Problem Block |
|---|---|
| `Product Manager Agent` | Extracts pain points to map into features and stories |
| `Enterprise Architect Agent` | Identifies domain bounded contexts and external systems |
| `QA/Test Generator Agent` | Maps pain to test goals and negative scenarios |
| `Studio Validation Agent` | Flags vague or circular problem statements |
### 🧠 Memory Tags

| Field | Purpose |
|---|---|
| `problem_text` | Natural-language full problem definition |
| `pain_points[]` | Structured list of key issues |
| `domain_tags[]` | e.g., `veterinary`, `referrals`, `multi-location` |

Memory embeddings store both the raw text and extracted pain-point signals.
### 🔍 Optional Enhancements

- Include sample quotes from users (if the prompt includes them)
- Include impact magnitude (e.g., "costs $500k/year", "affects 200 clinics")

### 🧪 Validation Criteria
| Check | Rule |
|---|---|
| ✅ Content Present | Must include at least 2 full sentences |
| ✅ User-Oriented | Focuses on end-user or operational pain |
| ❌ No Solution Leakage | Must not reference proposed components or features |
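The "No Solution Leakage" check could be approximated with a keyword heuristic. The word list below is an illustrative assumption for this sketch, not the Studio Validation Agent's real logic:

```python
# Illustrative heuristic: flag problem statements that mention solution
# components. The keyword set is an assumption for demonstration only
# (naive substring-free word matching; a real check would be semantic).
import re

SOLUTION_WORDS = {"dashboard", "platform", "alert", "app", "api", "feature"}

def leaks_solution(problem_text):
    words = set(re.findall(r"[a-z]+", problem_text.lower()))
    return bool(words & SOLUTION_WORDS)

print(leaks_solution("Referrals are handled manually via email and fax."))  # False
print(leaks_solution("A dashboard and automated alerts would be good."))    # True
```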
### ✅ Summary
| Element | Description |
|---|---|
| 📦 Section Name | Problem Definition |
| 🧠 Stored As | Markdown section + embedded problem_text |
| 🔄 Used By | Product, QA, Architecture agents |
| 🔍 Validation | Enforced during generation and human review |
## 💡 Opportunity Framing Block

### 🎯 Purpose

The Opportunity Framing section captures the positive, strategic potential unlocked by solving the defined problem. It helps ConnectSoft agents and humans understand the “why now”, “why us”, and “what value will this create” dimensions of the software idea.

> Where the Problem block describes pain, this block illuminates potential — value, efficiency, growth, and competitive edge.
### 📋 What It Should Include
This section should clearly answer:
- 📈 What strategic opportunity is being pursued?
- 🌍 What market, user group, or internal operation benefits?
- 🛠️ Why is software a good solution for this?
- ⚡ What change is expected if the problem is solved?
### ✅ Example (Well-Framed)
By centralizing referral workflows in veterinary clinics, we can reduce leakage, improve patient outcomes, and increase transparency. Clinics will gain visibility into active referrals, improve follow-up rates, and provide better continuity of care. This unlocks new service opportunities, increases client satisfaction, and improves operational performance across locations. A digital referral platform also enables future integration with specialist networks and third-party scheduling APIs.
### ❌ Example (Poorly Framed)
We can make the system better and faster. Everyone will like it. It will help the business.
🛑 Too vague — lacks specificity, user value, and measurable outcomes.
### 🔗 Linkage to Other Blocks

| Links To | Why |
|---|---|
| Problem Definition | Opportunity is the “flip side” of the problem |
| Proposed Solution | Solution should directly target the opportunity |
| Success Metrics | Quantifies the anticipated business value |
### 🧠 Memory Shape & Trace Fields

- `opportunity_text`: Full prose text of the opportunity
- `opportunity_tags[]`: e.g., `efficiency`, `care_quality`, `revenue_growth`
- `expected_benefits[]`: Structured bullets extracted by the agent
- `domain_alignment_score`: Optional — correlation to known blueprints or sectors
### 🧠 Used By
| Agent | Purpose |
|---|---|
| Product Manager Agent | Aligns vision with product value hypothesis |
| Growth Strategist Agent | (Optional) Used to evaluate market readiness |
| QA/Test Agent | Extracts acceptance criteria and value-oriented tests |
| Planner Agent | Determines downstream blueprint priority |
### 📏 Validation Heuristics
- Must reference user or business benefit, not just technical factors
- Should highlight why now or why valuable
- Must avoid regurgitating the problem without expansion
### ✅ Summary
| Attribute | Value |
|---|---|
| 📦 Section Name | Opportunity Framing |
| 💡 Content Type | 2–3 paragraph prose, structured tags generated automatically |
| 🔁 Linked With | Problem, Solution, Metrics |
| 🧠 Stored As | Markdown + semantic embeddings + structured JSON keys |
| 📈 Observability | Emitted as part of trace + metrics stream during generation |
## 🧪 Proposed Solution Block

### 🧭 Purpose

The Proposed Solution section provides a conceptual overview of the system to be built — not a technical design, but a high-level functional framing of what the solution does, how it helps, and what makes it different.

> It bridges the gap between vision and architecture — offering enough structure for planning without constraining implementation.

### 🧩 What It Should Capture
- 🛠️ What kind of system is being proposed (platform, tool, agentic workflow, etc.)?
- 🧍 Who are the core actors or users?
- 🔄 What does the system enable them to do?
- 🧠 What conceptual components or flows exist (not microservices, just mental model)?
### ✅ Strong Example
The proposed system is a centralized referral management platform for veterinary practices. It enables vets, admins, and referral coordinators to create, send, track, and close referrals across internal and external clinics. Each referral has a lifecycle (initiated, sent, accepted, scheduled, resolved) and all parties have access to its real-time status. The system integrates with EMRs via adapters and exposes an admin dashboard with filtering, KPIs, and alerting. Future versions may support automated routing, AI-based referral classification, and integration with specialist networks.
### ❌ Weak Example

A tool to help with referrals. It will have a dashboard and send messages. It should be easy to use.

🛑 Too generic, unclear flows, vague functionality.
### 🧠 Memory & Semantics

| Field | Description |
|---|---|
| `solution_text` | Full prose description of the envisioned solution |
| `solution_keywords[]` | Extracted tags: `referral_lifecycle`, `dashboard`, etc. |
| `user_activities[]` | High-level actions: `create_referral`, `track_status` |
| `unique_value_proposition` | Optional field generated from the delta vs. the status quo |
### 🔄 Upstream/Downstream Relevance

| Related Block | Connection |
|---|---|
| Problem | Solution must target the specific pain points raised |
| Opportunity | Solution should unlock the stated business/user outcomes |
| Software Type | Solution scope affects classification |
| Initial Features | Solution hints will be decomposed into modular features |
### 🤖 Agent Consumers
| Agent | Usage |
|---|---|
| Product Owner Agent | Breaks conceptual flows into actionable backlog items |
| Enterprise Architect Agent | Maps solution flows to system boundaries and services |
| Module Generator Agent | Begins blueprint generation for each major solution flow |
| Documentation Generator Agent | Uses solution text to create system summary and diagrams |
### 📏 Validation Rules
- ✅ Must describe at least 2–3 user flows or capabilities
- ✅ Must not reference specific libraries, protocols, or tech stacks
- ❌ Avoid using solution as a restatement of the problem
### ✅ Summary
| Attribute | Value |
|---|---|
| 📦 Section Name | Proposed Solution |
| 🧠 Format | Paragraph-based with extractable structured signals |
| 🔄 Critical For | Planning, architecture, backlog, module generation |
| 🧠 Stored As | Markdown block + semantic + activity signal embeddings |
| 🧩 System Framing Type | Conceptual, functional, user-centric |
## 📦 Software Type Classification

### 🎯 Purpose

This section defines the category of software being proposed, enabling the Factory to:

- Route generation to appropriate templates, blueprints, and orchestrators
- Set up correct delivery pipelines (e.g., web app, CLI tool, API, SaaS)
- Align architecture patterns, observability, and deployment strategies

> Without knowing what type of system it is, the Factory cannot scaffold correctly.
### 🧰 Supported Software Types

| Type | Description |
|---|---|
| SaaS Platform | Multi-tenant backend + admin + user UI, usually long-lived |
| API-First System | Primarily exposed via HTTP/gRPC API; UI optional |
| Agentic Toolchain | AI-agent-driven internal tool or automation layer |
| Single-Page App | Frontend-focused web app backed by lightweight APIs |
| Mobile Application | Android/iOS-focused application or companion app |
| Background Processor | Worker-based solution triggered by events, time, or queues |
| CLI Tool | Command-line utility for operational or developer use |
| Integration Adapter | A glue layer connecting external systems or APIs |
| Embedded Module | Part of a larger system, not a standalone product |
### 🧠 Selection Process

This classification is automatically inferred by the Vision Architect Agent using:

- Keywords in the prompt (`SaaS`, `web app`, `automation`, `API`, etc.)
- Patterns in the proposed features
- Domain alignment (e.g., fintech → `API`, operations → `toolchain`)
- Optional human override during review
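The selection process can be sketched as keyword scoring combined with the 0.80 confidence gate described under Validation Rules. The keyword map, substring matching, and scoring formula below are illustrative assumptions, not the agent's actual model:

```python
# Illustrative sketch of keyword-based software-type inference with a 0.80
# confidence gate. Keyword map and scoring formula are assumptions for
# demonstration, not the Vision Architect Agent's real inference model.
TYPE_KEYWORDS = {
    "SaaS Platform": ["saas", "platform", "multi-tenant", "clinics"],
    "API-First System": ["api", "endpoint", "grpc"],
    "Agentic Toolchain": ["agent", "automation", "sync"],
    "CLI Tool": ["cli", "command-line"],
}

def classify(prompt):
    text = prompt.lower()
    # Count keyword hits per type (naive substring matching).
    scores = {t: sum(kw in text for kw in kws) for t, kws in TYPE_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    hits = scores[best]
    confidence = min(0.5 + 0.25 * hits, 1.0) if hits else 0.0
    if confidence < 0.80:
        return None, confidence  # fall back to human validation
    return best, confidence

print(classify("We want a platform for multi-location clinics to manage referrals"))
# ('SaaS Platform', 1.0)
```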
### ✅ Example Classification

- Prompt: “We want a platform for multi-location clinics to manage referrals” → classified as `SaaS Platform`
- Prompt: “Build an agent that syncs two systems based on schedule rules” → classified as `Agentic Toolchain` and `Integration Adapter`
### 🧠 Blueprint Routing Behavior

| Type | Routes to... |
|---|---|
| SaaS Platform | Triggers: Product Blueprint, Multi-Module Orchestrator |
| API System | Triggers: API Blueprint, Swagger/OpenAPI definition |
| Agentic Toolchain | Triggers: Agent Container + Semantic Kernel skill templates |
| CLI Tool | Triggers: Console App template + System.CommandLine CLI integrations |
### 📦 Output Format

```json
{
  "softwareType": "SaaS Platform",
  "secondaryTypes": ["Agentic Toolchain"],
  "classificationConfidence": 0.95
}
```
### 🧠 Memory Impact

| Signal | Description |
|---|---|
| `softwareType` | Primary category used for blueprint routing |
| `classificationTags[]` | Auxiliary signals (e.g., `web`, `backend`) |
| `templateVariants[]` | List of preselected .NET `new` templates |
### 📏 Validation Rules

- ✅ Must classify into at least one known type
- ✅ Confidence must exceed 0.80, or fall back to human validation
- ❌ Avoid an overly generic fallback (“custom tool”)

### ✅ Summary
| Attribute | Value |
|---|---|
| 📦 Section Name | Software Type |
| 🧠 Format | Enum + tags + confidence |
| 🔄 Affects | Template routing, orchestration plan, pipeline shape |
| 🧠 Used By | Planner, Blueprint Generator, Template Resolver |
## 🧱 Initial Feature Set

### 🎯 Purpose

The Initial Feature Set block defines the first-level capability breakdown of the proposed system. These features are not yet decomposed into user stories, but they represent the functional pillars of the envisioned product or service.

> This is the raw material for the Product Manager and Engineering Agents to break into modules, services, and backlog items.
### 🔍 What a Feature Represents
Each feature is:
- A functional unit of value for the user or business
- Independently testable and observable
- Aligned with the solution and software type
- Defined as a “capability” — not as a UI or internal mechanism
### ✅ Strong Examples
For a veterinary referral SaaS system:
- 📨 Referral Lifecycle Management
- 📋 Multi-Clinic Tracking Dashboard
- 🔔 Automated Status Alerts
- 🔒 Secure External Sharing
- 📈 Referral Outcome Analytics
Each of these can become a standalone microservice, library, or UI module.
### ❌ Weak Examples
- “UI”
- “Database”
- “Authentication maybe”

🛑 Too vague, too low-level, or not actionable
### 📦 Feature Format Options

| Format | Description |
|---|---|
| List of Features | Flat list (min 3–5) |
| Feature Tree | Optional hierarchy (e.g., Core > Sub-Features) |
| Tagged Features | With tags like `core`, `optional`, `future` |
```json
{
  "features": [
    { "name": "Referral Lifecycle", "tags": ["core"] },
    { "name": "Alert Engine", "tags": ["optional"] },
    { "name": "Dashboard", "tags": ["core", "UI"] }
  ]
}
```
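Given the tagged format, downstream scoping can filter features by tag — a minimal usage sketch:

```python
# Selecting MVP scope from tagged features (same shape as the JSON above).
features = [
    {"name": "Referral Lifecycle", "tags": ["core"]},
    {"name": "Alert Engine", "tags": ["optional"]},
    {"name": "Dashboard", "tags": ["core", "UI"]},
]

core_features = [f["name"] for f in features if "core" in f["tags"]]
print(core_features)  # ['Referral Lifecycle', 'Dashboard']
```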
### 🧠 Used By

| Agent | Purpose |
|---|---|
| Product Manager Agent | Converts features into epics, user stories, MVP scope |
| Enterprise Architect Agent | Maps features to domains and modules |
| Blueprint Generator Agent | Builds module/service blueprints per feature |
| Test Generator Agent | Creates test matrices based on functional features |
### 📁 Output Location

Each feature may generate:

- 🟦 A folder in the blueprint hierarchy
- 🔵 A `FeatureDefinition.json` (early structure)
- 📘 A markdown block in the Vision Blueprint

```markdown
## 🧱 Initial Feature Set
- Referral Lifecycle Management
- Tracking Dashboard
- Alert Engine
- Referral Outcome Analytics
- External Sharing & Collaboration
```
### 📏 Validation Rules
- ✅ At least 3 features recommended
- ✅ Must not be overly technical (no “API Gateway” unless exposed directly to user)
- ✅ Should be functional, user-visible, or business-value-driven
### 🧠 Memory Tags

| Signal | Usage |
|---|---|
| `features[]` | Feature name strings |
| `featureTags[]` | Semantic attributes (e.g., `communication`, `core`) |
| `featureToAgentMap` | (Internal) tracks feature → agent assignments |
### ✅ Summary
| Attribute | Value |
|---|---|
| 📦 Section Name | Initial Feature Set |
| 🧠 Format | List + optional tags or hierarchy |
| 🔄 Used For | Planning, modular decomposition, test and blueprint triggers |
| 📊 Stored As | Markdown, JSON, embedded signals |
## 🧍 Stakeholder Personas

### 🎯 Purpose

This section identifies the people and roles who will interact with, benefit from, or be affected by the proposed software system. These personas serve as the anchor for all planning, user story generation, testing scenarios, and UX design.

> Without clear personas, there is no user context — and no meaningful product.
### 👤 What Counts as a Persona?
A persona can be:
- A user role (e.g., “Clinic Admin”, “Referral Coordinator”)
- A business stakeholder (e.g., “Operations Director”)
- An external system or integration point (if it behaves like a user)
- A non-human actor (e.g., “Scheduling Bot”, “Data Consumer Service”)
### ✅ Example
For a veterinary SaaS platform:
- 👩⚕️ Veterinarian – Creates and sends referrals
- 🧑💼 Clinic Admin – Manages inbound/outbound referrals
- 🧠 Referral Coordinator – Tracks follow-up, documents progress
- 🧍♂️ Client – Receives automated communications (indirect persona)
- ⚙️ External Specialist System – Receives and responds to referrals
### ❌ Bad Examples
- “Everyone”
- “All users”
- “Admin, maybe?”
🛑 Too generic, not actionable or mappable to flows
### 📋 Persona Format

```json
{
  "personas": [
    {
      "name": "Clinic Admin",
      "description": "Oversees referrals across locations, tracks completion."
    },
    {
      "name": "Veterinarian",
      "description": "Initiates referral process and provides medical justification."
    }
  ]
}
```
Markdown example:

```markdown
## 🧍 Stakeholder Personas
- **Veterinarian** – Initiates referral creation
- **Clinic Admin** – Manages referrals and analytics
- **Referral Coordinator** – Ensures completion and follow-up
- **Client** – Receives referral updates
- **Specialist System** – External system handling incoming referrals
```
### 🧠 Memory & Metadata

| Field | Description |
|---|---|
| `personas[]` | List of named roles |
| `personaMap` | Description and purpose for each |
| `personaTags[]` | e.g., `human`, `system`, `internal`, `external` |
### 🤖 Agent Use
| Agent | Usage |
|---|---|
| Product Manager Agent | Derives user stories and flows per persona |
| Test Generator Agent | Builds scenario-based and persona-driven tests |
| UX Designer Agent | Designs interfaces and workflows per persona profile |
| Documentation Generator | Includes persona mapping in system overview documents |
### 📏 Validation Rules
- ✅ At least one persona must be defined
- ✅ Prefer roles over job titles
- ✅ Must describe interaction or impact, not just name
### ✅ Summary
| Attribute | Value |
|---|---|
| 📦 Section Name | Stakeholder Personas |
| 🧠 Format | Name + description |
| 🔄 Used By | Planning, test generation, UX, architecture, documentation |
| 📊 Memory Tags | Role, type, impact, context scope |
## 🎯 Success Metrics

### 🧭 Purpose

This section defines the measurable outcomes that determine whether the software fulfills its vision and justifies its existence. These metrics align the Factory output with business goals, user value, and operational efficiency.

> Without metrics, agents cannot optimize, test, or validate what “success” means for the system.
### 🎯 What Counts as a Success Metric?
Each metric should be:
- 📏 Measurable (quantitative or observable)
- ⏱️ Time-bound (explicit or implicit horizon)
- 📈 Outcome-focused, not implementation-focused
- 🧩 Mapped to a stakeholder or value proposition
### ✅ Examples
- 📉 Reduce referral leakage by 30% within 6 months
- 📈 Increase referral completion rate to 85%+
- ⏱️ Decrease average referral resolution time from 7 to 3 days
- 🧾 Enable generation of monthly analytics reports across all locations
- 📬 Achieve 95%+ referral status update rate for clients
### ❌ Poor Examples
- “Make system faster”
- “Good UX”
- “Help admins”
🛑 Too vague, not measurable, lacks time/context
### 🧠 Format Structure

```json
{
  "successMetrics": [
    {
      "goal": "Reduce referral leakage",
      "target": "30%",
      "timeframe": "6 months"
    },
    {
      "goal": "Improve client communication",
      "target": "95% delivery success",
      "scope": "SMS/email notifications"
    }
  ]
}
```
Markdown example:

```markdown
## 🎯 Success Metrics
- Reduce referral leakage by 30% within 6 months
- Achieve 85% referral completion rate
- Enable location-wide referral outcome reporting
- Increase automated alert delivery to 95%+ success
- Shorten average referral lifecycle by 50%
```
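A bulleted metric can be lifted into the structured goal/target/timeframe shape with a small parser. The regex below handles only the “<goal> by N% within <timeframe>” phrasing and is purely illustrative:

```python
# Illustrative parser: maps one common metric phrasing onto the structured
# form shown above. Real metrics need broader patterns than this.
import re

METRIC_RE = re.compile(r"(?P<goal>.+?) by (?P<target>\d+%) within (?P<timeframe>.+)")

def parse_metric(line):
    m = METRIC_RE.match(line)
    return m.groupdict() if m else None

print(parse_metric("Reduce referral leakage by 30% within 6 months"))
# {'goal': 'Reduce referral leakage', 'target': '30%', 'timeframe': '6 months'}
```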
### 🧠 Memory Representation

| Field | Purpose |
|---|---|
| `successMetrics[]` | List of structured goal → target → timeframe triples |
| `metricTags[]` | Signals like `efficiency`, `growth`, `retention` |
| `stakeholderMapping` | Persona → metric alignment |
### 🤖 Agent Usage
| Agent | Purpose |
|---|---|
| QA/Test Generator Agent | Builds metric-aligned quality gates and test plans |
| Product Owner Agent | Uses metrics to define MVP acceptance criteria |
| Release Manager Agent | Attaches metrics to version readiness evaluations |
| Observability Agent | Configures dashboards and alerts per metric target |
### 📏 Validation Rules
- ✅ Minimum 2 metrics recommended
- ✅ Must be expressed in measurable terms (%, time, count, rate, etc.)
- ✅ Optional: should align with a specific persona or feature
### ✅ Summary
| Attribute | Value |
|---|---|
| 📦 Section Name | Success Metrics |
| 🧠 Format | Bulleted list + structured representation |
| 🔄 Used For | QA, test coverage, release criteria, goal traceability |
| 📊 Memory Tags | Embedded metrics, domain alignment, impact scoring |
## 📎 Constraints & Compliance

### 🧭 Purpose

This section captures any limitations, boundaries, or mandatory external requirements that influence the design, implementation, or operation of the system. These can include regulatory constraints, technical boundaries, organizational rules, or integration constraints.

> Constraints prevent the Factory from generating invalid or non-compliant outputs. They guide scope, tech stack, and deployment decisions.
### 🧷 Types of Constraints
| Category | Examples |
|---|---|
| 🛡️ Regulatory | HIPAA, GDPR, SOC2, ISO 27001 |
| 🔌 Technical | “Must integrate with legacy EMR”, “Must support offline mode” |
| 🧠 Organizational | “No PII stored outside region”, “Must use Azure stack” |
| 🧮 Financial | “Cap cost per tenant to $50/month”, “Open-source libraries only” |
| 🔐 Security | MFA required, SSO-only, encryption at rest & transit |
### ✅ Examples
- Must be HIPAA-compliant in all data flows
- System must operate fully within Azure US region
- Only integrations with existing EMRs are allowed — no system replacement
- Client and referral data must be encrypted at rest (AES-256)
- Referral data must be retained for 7 years per regulation
### ❌ Poor Examples
- “Be secure”
- “Maybe follow GDPR”
- “Needs to work everywhere”
🛑 Too vague, lacks enforceability or clarity
### 📋 Output Format

```json
{
  "constraints": [
    {
      "type": "Regulatory",
      "requirement": "HIPAA-compliant data storage and transmission"
    },
    {
      "type": "Infrastructure",
      "requirement": "Azure-hosted only, must use existing VNet"
    }
  ]
}
```
Markdown representation:

```markdown
## 📎 Constraints & Compliance
- HIPAA compliance for all patient-related data
- System must be hosted in Azure (East US 2)
- All external communications must be encrypted (TLS 1.2+)
- Must integrate with existing EMR systems via adapters
- Data retention: minimum 7 years, exportable on demand
```
### 🧠 Memory and Tagging

| Field | Purpose |
|---|---|
| `constraints[]` | List of constraint records |
| `constraintTags[]` | e.g., `regulatory`, `tech-boundary`, `security` |
| `complianceScore` | (Optional) internal field for trace confidence |
### 🤖 Agent Impact
| Agent | Usage |
|---|---|
| Enterprise Architect Agent | Applies compliance patterns and selects appropriate blueprints |
| DevOps Agent | Enforces hosting, deployment, and data policies |
| Security Advisor Agent | Verifies encryption, access control, and identity flows |
| QA Agent | Generates compliance test cases |
### 📏 Validation Rules
- ✅ Must use structured descriptions (no freeform “legal” paragraphs)
- ✅ Should identify at least 1 regulatory or environmental constraint
- ❌ Cannot conflict with opportunity or proposed solution
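The “structured descriptions” rule can be sketched as a record check. The known categories follow the table above; the validator and its minimum-length threshold are illustrative assumptions:

```python
# Illustrative validator for structured constraint records; the category set
# follows this section's table, the length threshold is an assumption.
KNOWN_TYPES = {"Regulatory", "Technical", "Organizational",
               "Financial", "Security", "Infrastructure"}

def valid_constraint(c):
    return c.get("type") in KNOWN_TYPES and len(c.get("requirement", "")) > 10

print(valid_constraint({"type": "Regulatory",
                        "requirement": "HIPAA-compliant data storage and transmission"}))  # True
print(valid_constraint({"type": "Other", "requirement": "Be secure"}))  # False
```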
### ✅ Summary
| Attribute | Value |
|---|---|
| 📦 Section Name | Constraints & Compliance |
| 🔒 Scope | Regulatory, technical, financial, operational |
| 🧠 Format | Markdown + structured JSON |
| 🔄 Used For | Architecture shaping, blueprint restrictions, test generation |
| 🧠 Stored As | Indexed trace signals with constraint-aware tagging |
🧠 Metadata & Traceability¶
🧭 Purpose¶
This block encodes the machine-traceable metadata required for orchestration, observability, search, and memory retrieval across the AI Software Factory. It enables every artifact—from features to tests—to reference its origin in the vision, ensuring lineage, explainability, and context reuse.
Traceability is how the Factory stays transparent, auditable, and modular at scale.
📋 What It Includes¶
| Field Name | Description |
|---|---|
| `traceId` | Unique run identifier for the Vision Blueprint |
| `projectId` | ID or slug for the software project (e.g., `vetspire-referrals`) |
| `originPrompt` | The original prompt or opportunity text that triggered the generation |
| `version` | Incremental version (`v1`, `v2`, etc.) |
| `createdAt` | ISO 8601 timestamp of creation |
| `agentId` | `vision-architect` or other creator agent |
| `confidenceScore` | Optional confidence level for accuracy, completeness, or agent agreement |
| `tags[]` | Domain-specific tags (`vet`, `referrals`, `workflow`, `compliance`) |
✅ Example¶
```json
{
  "traceId": "trace-2025-06-08-vetspire-v1",
  "projectId": "vetspire-referrals",
  "originPrompt": "We want to reduce referral leakage in veterinary clinics.",
  "version": "v1",
  "createdAt": "2025-06-08T14:22:01Z",
  "agentId": "vision-architect",
  "confidenceScore": 0.91,
  "tags": ["veterinary", "referrals", "workflow", "HIPAA"]
}
```
Markdown format (bottom of blueprint file):
```markdown
---
## 🧠 Metadata & Traceability
- **traceId**: `trace-2025-06-08-vetspire-v1`
- **projectId**: `vetspire-referrals`
- **originPrompt**: “We want to reduce referral leakage in veterinary clinics.”
- **version**: `v1`
- **createdAt**: `2025-06-08T14:22:01Z`
- **agentId**: `vision-architect`
- **confidenceScore**: 0.91
- **tags**: `veterinary`, `referrals`, `workflow`, `HIPAA`
```
📌 Uses Across the Factory¶
| Consumer | Usage |
|---|---|
| All downstream agents | Inject this metadata into features, stories, modules, tests, etc. |
| Studio UI | Provides trace visualization and lineage inspection |
| Observability Layer | Links events, metrics, and logs back to blueprint origin |
| Memory Indexer | Uses tags and traceId for semantic search and retrieval |
| Retry & Feedback Loops | Aligns corrections to the exact blueprint and version |
🧠 Stored In¶
- **Vector DB** → Embedded as metadata node
- **Blob Storage** → Stored in `.md` and `.json` manifest
- **Metadata DB** → Indexed for trace, tags, and project lookup
- **Observability Bus** → Emitted on `VisionDocumentCreated` event
📏 Validation Rules¶
- ✅ Must include all core fields (`traceId`, `projectId`, `originPrompt`, `version`)
- ✅ Must include at least 2 tags
- ✅ `confidenceScore` is optional, but encouraged for trace quality metrics
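These rules reduce to a small check over the metadata block. The sketch below is illustrative; the function name and error strings are assumptions, not the Factory's actual validator.

```python
# Hypothetical metadata validator for the rules above.
REQUIRED_FIELDS = ("traceId", "projectId", "originPrompt", "version")

def validate_metadata(meta: dict) -> list[str]:
    """Check the core-field and minimum-tag rules for a metadata block."""
    errors = [f"Missing core field: {f}" for f in REQUIRED_FIELDS if not meta.get(f)]
    if len(meta.get("tags", [])) < 2:
        errors.append("At least 2 tags are required")
    return errors
```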
✅ Summary¶
| Attribute | Value |
|---|---|
| 📦 Section Name | Metadata & Traceability |
| 🔁 Used By | All agents, orchestrators, Studio, memory, observability |
| 📘 Format | JSON block + markdown footer |
| 🧠 Purpose | Lineage, observability, audit, search, trace-to-impact links |
| 🔄 Enables | Memory linking, regeneration control, retry/correction routing |
📘 Output Formats (Markdown, JSON, Embedded)¶
🧭 Purpose¶
The Vision Blueprint is always emitted in three output formats, each optimized for a specific consumer in the ConnectSoft AI Software Factory:
| Format | Purpose | Consumed By |
|---|---|---|
| Markdown | Human-readable, structured documentation | Studio, reviewers, doc generators |
| JSON | Machine-readable schema for agent parsing | All downstream agents and orchestrators |
| Embedded | Vectorized semantic memory chunk | Context injectors, planners, search |
This multi-format model enables dual-readability: humans and machines both treat the blueprint as a first-class input.
📘 Markdown Output¶
- Rendered as `.md` file (Studio preview-ready)
- Sections include headers (`##`) for anchor links and validation
- Includes all canonical blocks: Problem, Opportunity, Features, etc.
- Ends with metadata block (traceId, tags, version)
📁 Example path:
🧾 JSON Output¶
- Mirrors the markdown, but structured in key-value and list format
- Used by all planning, generation, and decomposition agents
- Supports schema validation and agent-specific input slicing
📁 Example path:
💡 Sample fields:
```json
{
  "traceId": "...",
  "problem": "...",
  "opportunity": "...",
  "softwareType": "SaaS",
  "features": [...],
  "personas": [...],
  "successMetrics": [...],
  "constraints": [...]
}
```
🧠 Embedded Memory Format¶
- Each major section is vectorized into semantic memory graphs
- Stored with short-term and long-term retention markers
- Used for: retrieval-augmented prompting, memory injection, context diffing
🧠 Stored in:

- `vector-store/vision/{projectId}/{traceId}/problem.vector`
- `memory-index/traces/{traceId}`
🔄 Regeneration Behavior¶
| Format | When Regenerated |
|---|---|
| Markdown | On every blueprint revision or review |
| JSON | On first generation and validation pass |
| Embedded | When confidence score changes or section updates |
✅ Output Format Summary¶
| Output Type | File Extension | Used By |
|---|---|---|
| Markdown | `.md` | Studio, doc builders, reviewers |
| JSON | `.json` | Agents, blueprints, planners |
| Embedded | `.vector`, db | Memory engines, context retrieval agents |
📏 Validation & Sync¶
- ✅ All three formats must remain schema-aligned
- ✅ Any update to a section triggers re-sync across formats
- 🛠️ Studio UI uses hash comparison for diff view (v1 vs v2, etc.)
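The hash-comparison diff could work per section. A minimal sketch, assuming the markdown splits on `## ` headings; function names are hypothetical, not the Studio implementation.

```python
import hashlib

def section_hashes(markdown: str) -> dict[str, str]:
    """Hash each '## ' section so two blueprint versions can be diffed cheaply."""
    hashes, title, body = {}, "_preamble", []
    for line in markdown.splitlines():
        if line.startswith("## "):
            hashes[title] = hashlib.sha256("\n".join(body).encode()).hexdigest()
            title, body = line[3:].strip(), []
        else:
            body.append(line)
    hashes[title] = hashlib.sha256("\n".join(body).encode()).hexdigest()
    return hashes

def changed_sections(v1: str, v2: str) -> list[str]:
    """Sections whose content hash differs between two versions."""
    h1, h2 = section_hashes(v1), section_hashes(v2)
    return [s for s in set(h1) | set(h2) if h1.get(s) != h2.get(s)]
```

Only the changed sections then need re-sync across the three formats.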
✅ Summary¶
| Attribute | Value |
|---|---|
| 📦 Section Name | Output Formats |
| 🧠 Formats | Markdown (.md), JSON (.json), Embedded Memory (vector-store) |
| 🔁 Used By | Humans, agents, planners, orchestration layers |
| 📊 Stored In | Blob storage, vector DB, trace graph |
📁 File Structure & Naming Conventions¶
🧭 Purpose¶
To support consistency, automation, and discoverability, every Vision Blueprint follows a strict folder layout and naming convention. This structure ensures agents, humans, and pipelines can easily locate, version, and trace artifacts — regardless of the project or template.
In the Factory, folder structure is not cosmetic — it's how orchestration works at scale.
📁 Canonical Blueprint Path¶
| Folder/Path | Purpose |
|---|---|
| `vision-blueprint.md` | Human-readable markdown version |
| `vision-blueprint.json` | Structured agent-consumable format |
| `vision-blueprint.v1.md`, `vision-blueprint.v2.md` | Historical version snapshots |
| `trace/{traceId}/...` | Memory-trace specific copies (if needed) |
🧩 Naming Rules¶
| Element | Rule |
|---|---|
| `project-id` | URL-safe slug (e.g., `vetspire-referrals`, `smart-fleet-tracker`) |
| `version` | Optional suffix (`v1`, `v2`, etc.) for blueprint iterations |
| `traceId` | Globally unique identifier (e.g., `trace-2025-06-08-vetspire-v1`) |
| File extension | `.md` for human view, `.json` for agent parse, `.vector` for memory |
✅ Example Layout¶
```
blueprints/
  vision/
    vetspire-referrals/
      vision-blueprint.md
      vision-blueprint.json
      vision-blueprint.v1.md
      vision-blueprint.v2.md
      trace/
        trace-2025-06-08-vetspire-v1/
          problem.vector
          opportunity.vector
```
🔄 File Generation Logic¶
| Trigger | File Created/Updated |
|---|---|
| First generation | .md + .json |
| Blueprint correction | .md + .json + new version suffix |
| Confidence re-evaluation | Memory .vector files regenerated |
| Human review | Triggers copy to Studio + approved folder |
🧠 Agent Consumers of File Structure¶
| Agent | File Usage |
|---|---|
| Blueprint Generator Agent | Loads .json to trigger module/service creation |
| QA Agent | Reads .json for test case scaffolding |
| Observability Agent | Emits metric lineage from traceId in folder name |
| Studio UI / Review Agent | Loads .md for human preview |
📦 Versioning Model¶
- Use explicit file-based versioning (`vision-blueprint.v2.md`)
- Optional: maintain a `latest` symlink or alias for Studio usage
- Older versions archived but accessible via `trace/{traceId}` folders
📏 Validation Rules¶
- ✅ Must include both `.md` and `.json` versions
- ✅ Folder path must include a valid `project-id`
- ✅ Duplicate traceId → triggers version increment
- ✅ No underscores, spaces, or camelCase in file/folder names
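The slug and filename rules are regular enough to enforce with patterns. The sketch below is a literal reading of "URL-safe slug" and "no underscores, spaces, or camelCase"; the exact regexes are assumptions.

```python
import re

# Hypothetical naming checks for the rules above.
SLUG_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")         # e.g. vetspire-referrals
FILE_RE = re.compile(r"^[a-z0-9.-]+\.(md|json|vector)$")  # kebab-case + known extension

def valid_project_id(slug: str) -> bool:
    """True for lowercase, hyphen-separated, URL-safe slugs."""
    return bool(SLUG_RE.match(slug))

def valid_filename(name: str) -> bool:
    """True for kebab-case files with a recognized blueprint extension."""
    return bool(FILE_RE.match(name))
```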
✅ Summary¶
| Attribute | Value |
|---|---|
| 📦 Section Name | File Structure & Naming Conventions |
| 📁 Path Structure | blueprints/vision/{project-id}/... |
| 🔄 Format Variants | Markdown, JSON, Vector, Trace |
| 🧠 Used By | Agents, Studio, Observability, Retry Engines |
| 📦 Enables | Project lookup, orchestration triggers, memory mapping, trace diffs |
🔁 Regeneration & Correction Flow¶
🧭 Purpose¶
The Regeneration & Correction Flow defines how the Vision Blueprint evolves over time. This includes mechanisms for:
- 🛠️ Fixing incomplete, vague, or inconsistent blueprints
- 🔄 Regenerating individual sections or the entire document
- 🧠 Updating memory, trace, and downstream dependencies
A blueprint is not static. In the Factory, correction is part of the lifecycle — not a failure, but a feedback loop.
🔄 When Regeneration Happens¶
| Trigger | Type of Regeneration |
|---|---|
| Human review feedback | Sectional or full regeneration |
| Blueprint fails validation heuristics | Auto-correct section (e.g., missing personas) |
| Downstream agent raises inconsistency | Partial regeneration (e.g., invalid feature block) |
| Prompt or business input is updated | Full regeneration with version increment |
| Confidence score < threshold (e.g., 0.75) | Regenerate affected sections |
🔁 Correction Process¶
- ✅ Detect: Validation, diffing, or signal review identifies an issue
- 🧠 Re-invoke Vision Architect Agent with new prompt or context
- 📘 Generate updated section(s) with trace continuity
- 📦 Overwrite `.md`, `.json`, `.vector` as needed
- 🏷️ Version bump if user-facing content changes
- 📢 Emit correction event to `vision-blueprint-corrected` topic
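The overwrite, version-bump, and event-emission steps can be sketched together. The helper names (`bump_version`, `apply_correction`) are hypothetical, not Factory APIs; the event shape mirrors the example later in this section.

```python
# Illustrative correction-flow sketch; names are assumptions.
def bump_version(version: str) -> str:
    """'v1' -> 'v2'."""
    return f"v{int(version.lstrip('v')) + 1}"

def apply_correction(blueprint: dict, corrected: dict) -> dict:
    """Overwrite corrected sections, bump the version, and build the correction event."""
    updated = {**blueprint, **corrected}
    updated["version"] = bump_version(blueprint["version"])
    updated["previousTraceId"] = blueprint["traceId"]
    event = {
        "eventType": "vision-blueprint-corrected",
        "correctedSections": sorted(corrected),
        "previousTraceId": blueprint["traceId"],
    }
    return {"blueprint": updated, "event": event}
```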
🧠 Memory Impact¶
- Older memory traces are retained with a `supersededBy` link
- Corrections are embedded and injected as delta overlays
- Trace lineage is never destroyed — always traceable back to original prompt
🔧 Tools & Controls¶
| Mechanism | Description |
|---|---|
| `section-validation-agent` | Flags vague, missing, or conflicting blueprint sections |
| `blueprint-editor-skill` | Allows surgical correction of individual fields or blocks |
| `version-control-agent` | Bumps version; manages tags: `draft`, `validated`, `approved` |
| `studio-regeneration-tool` | Human-in-the-loop UI for safe regeneration with side-by-side diff |
✅ Regeneration Signals¶
| Field | Purpose |
|---|---|
| `regeneratedBy` | Agent ID or UI actor |
| `regenerationType` | `manual`, `auto-validation`, `prompt-change` |
| `previousTraceId` | For linking deltas across blueprint versions |
🧪 Example Correction Events¶
```json
{
  "eventType": "vision-blueprint-corrected",
  "projectId": "vetspire-referrals",
  "traceId": "trace-2025-06-09-vetspire-v2",
  "previousTraceId": "trace-2025-06-08-vetspire-v1",
  "correctedSections": ["Problem Definition", "Feature Set"]
}
```
📏 Validation Rules for Correction¶
- ✅ Corrections must preserve semantic alignment to prior blueprint
- ✅ All updated blocks must pass individual and holistic validation
- ✅ No overwriting of prior versions without archival
✅ Summary¶
| Attribute | Value |
|---|---|
| 📦 Section Name | Regeneration & Correction Flow |
| 🔁 Purpose | Enable blueprint evolution without loss of trace or structure |
| 🧠 Tools | Validation agents, blueprint editor, Studio regeneration UI |
| 🔄 Events Emitted | vision-blueprint-corrected, blueprint-version-bumped |
| 🧠 Memory Graph Update | Delta overlays, trace link maintenance, version tagging |
🧠 Memory Graph Representation¶
🧭 Purpose¶
The Memory Graph provides the structural and semantic foundation that allows the AI Software Factory to:
- 🔍 Retrieve vision-aligned context in downstream agents
- 🔄 Navigate across trace-linked blueprints, features, personas, and prompts
- 🧠 Enable long-term memory, reinforcement, and self-correction
Every section in the Vision Blueprint becomes a node in a semantically linked graph — this is how the Factory “remembers.”
🧩 What Is Stored¶
Each major block in the Vision Blueprint (e.g., Problem, Opportunity, Features, Constraints) becomes a typed node:
| Node Type | Stored Data |
|---|---|
| `vision-problem` | Raw text, embeddings, tags, origin prompt trace |
| `vision-opportunity` | Outcome framing, aligned personas, delta to status quo |
| `vision-features[]` | Each feature as an independent node with tags |
| `vision-personas[]` | Persona name, interaction scope, dependencies |
| `vision-constraint[]` | Rule type, severity, domain alignment |
Each node includes:
- Unique ID
- Linked `traceId`, `projectId`
- Block-level embeddings
- Tags, summary, and confidence score
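A typed node with these fields could be modeled as follows. This is a minimal sketch, not the Factory's storage schema; class and field names are illustrative.

```python
# Hypothetical node/link shapes mirroring the field list above.
from dataclasses import dataclass, field

@dataclass
class MemoryNode:
    node_id: str
    node_type: str            # e.g. "vision-problem", "vision-features"
    trace_id: str
    project_id: str
    embedding: list           # block-level embedding vector
    tags: list = field(default_factory=list)
    summary: str = ""
    confidence: float = 0.0

@dataclass
class MemoryLink:
    source_id: str
    target_id: str
    link_type: str            # relates_to | enables | influences | owned_by | depends_on
```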
🕸️ Link Types in the Graph¶
| Link | Meaning |
|---|---|
| `relates_to` | Loosely associated (e.g., problem ↔ opportunity) |
| `enables` | E.g., feature → outcome metric |
| `influences` | Constraint → architecture path |
| `owned_by` | Feature → persona or stakeholder |
| `depends_on` | Feature → other feature, constraint, or compliance condition |
🧠 Technical Shape¶
- **Stored in:**
  - Vector Store (Qdrant, Azure AI Search, Pinecone)
  - Graph Memory Index (custom)
  - Blob-linked trace metadata for lineage and regeneration
- **Accessed via:**
  - `MemoryService.GetMemoryContext(traceId)`
  - `MemoryGraphClient.FindNearestNode(embedding, filters)`
  - `AgentContext.InferPromptContext()`
🧠 Visual Example (Mermaid)¶
```mermaid
graph TD
  VBP["📘 Vision Blueprint"]
  P["📌 Problem"]
  O["💡 Opportunity"]
  F1["🧱 Feature: Referral Workflow"]
  F2["🧱 Feature: Analytics"]
  M["🎯 Metric: Completion Rate"]
  C["📎 Constraint: HIPAA"]
  A["👤 Persona: Clinic Admin"]
  VBP --> P
  VBP --> O
  VBP --> F1
  VBP --> F2
  VBP --> M
  VBP --> C
  VBP --> A
  F1 --> A
  F2 --> M
  C --> F1
  O --> M
```
🔄 Use in Orchestration¶
| Consumer Agent | Behavior |
|---|---|
| Feature Generator Agent | Pulls all feature nodes from graph |
| QA/Test Generator | Fetches metric and constraint nodes to drive test coverage |
| Persona Planner | Retrieves all nodes owned_by specific persona |
| Retry Agent | Follows traceId lineage to rehydrate prior state |
📏 Graph Validation Rules¶
- ✅ All major blueprint blocks must be represented as nodes
- ✅ Every node must link to its originating `traceId`
- ✅ Features and constraints must have at least one link to personas or outcomes
✅ Summary¶
| Attribute | Value |
|---|---|
| 📦 Section Name | Memory Graph Representation |
| 🧠 Format | Structured nodes + vector embeddings + graph topology |
| 🔁 Used By | Retrieval, agent context, regeneration, validation, orchestration |
| 🔗 Stored In | Vector DB + graph memory index + observability trace pipeline |
🧠 Persona-to-Feature Mapping¶
🧭 Purpose¶
This mapping defines which stakeholder personas are responsible for, benefit from, or interact with each of the initial features. It forms a bidirectional bridge between human goals and system capabilities, enabling the Factory to:
- Generate persona-centered user stories
- Align tests and UX to real needs
- Detect orphan features (no persona) or unsupported roles (no features)
Every feature must serve a person. Every persona should have a traceable impact on the system.
🔗 Mapping Structure¶
Each feature is linked to one or more personas, using:
- Interaction roles (e.g., `owns`, `uses`, `views`, `depends_on`)
- Impact direction (`input`, `output`, `both`)
- Optional priority or criticality
✅ Example Mapping¶
```json
{
  "featureToPersonaMap": {
    "Referral Lifecycle Management": [
      { "persona": "Veterinarian", "role": "initiates", "impact": "input" },
      { "persona": "Referral Coordinator", "role": "tracks", "impact": "output" }
    ],
    "Analytics Dashboard": [
      { "persona": "Clinic Admin", "role": "views", "impact": "output" }
    ],
    "Alert Engine": [
      { "persona": "Client", "role": "receives", "impact": "output" },
      { "persona": "Referral Coordinator", "role": "triggers", "impact": "input" }
    ]
  }
}
```
📘 Markdown Representation¶
```markdown
## 🧠 Persona-to-Feature Mapping
- **Referral Lifecycle Management**
  - 👩⚕️ Veterinarian → initiates
  - 🧠 Referral Coordinator → tracks
- **Analytics Dashboard**
  - 🧑💼 Clinic Admin → views
- **Alert Engine**
  - 🧍 Client → receives
  - 🧠 Referral Coordinator → triggers
```
🧠 Memory Representation¶
| Field | Usage |
|---|---|
| `featureToPersonaMap` | Core structure: feature → persona list |
| `personaCoverageScore` | Optional: % of personas covered by mapped features |
| `featureOwnership` | Used by product and testing agents for coverage validation |
🤖 Agent Consumers¶
| Agent | Purpose |
|---|---|
| Product Owner Agent | Breaks mapped features into persona-specific user stories |
| Test Generator Agent | Builds scenarios from persona-feature flows |
| UX Designer Agent | Tailors views and actions based on persona roles |
| Blueprint Generator Agent | Prioritizes features based on critical persona coverage |
📏 Validation Rules¶
- ✅ Every defined feature must map to at least one persona
- ✅ Every persona should appear in at least one feature (unless explicitly passive)
- ✅ Mappings must define role or usage, not just name references
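The orphan-feature and unsupported-role checks described above can be sketched directly over the `featureToPersonaMap` shape from the example; the function name and return shape are illustrative.

```python
# Hypothetical coverage check for the persona-to-feature mapping rules above.
def coverage_gaps(mapping: dict, personas: list) -> dict:
    """Return features with no persona and personas with no feature."""
    orphan_features = [f for f, links in mapping.items() if not links]
    covered = {link["persona"] for links in mapping.values() for link in links}
    unsupported = [p for p in personas if p not in covered]
    return {"orphanFeatures": orphan_features, "unsupportedPersonas": unsupported}
```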
✅ Summary¶
| Attribute | Value |
|---|---|
| 📦 Section Name | Persona-to-Feature Mapping |
| 🔁 Used For | Story generation, UX alignment, test scenarios, scope validation |
| 🧠 Stored In | Markdown + JSON + memory links (feature ↔ persona ↔ role) |
🧠 Feature-to-Metric Alignment¶
🧭 Purpose¶
This mapping connects each initial feature to one or more defined success metrics, providing traceability between:
- 🧱 What the system will do
- 🎯 What it is intended to achieve
It ensures that each capability has business justification, and every goal is measurable through implemented functionality.
🔗 Mapping Logic¶
Each relationship includes:
- `feature` – from the Initial Feature Set
- `metric` – from the Success Metrics block
- `alignmentType` – e.g., `direct`, `indirect`, `enables`, `contributes_to`
- `expectedImpact` – qualitative or quantitative target (optional)
✅ Example Mapping¶
```json
{
  "featureToMetricMap": {
    "Referral Lifecycle Management": [
      {
        "metric": "Increase referral completion rate",
        "alignmentType": "direct"
      },
      {
        "metric": "Shorten referral cycle time",
        "alignmentType": "contributes_to"
      }
    ],
    "Alert Engine": [
      {
        "metric": "Achieve 95%+ referral status update rate",
        "alignmentType": "enables"
      }
    ],
    "Analytics Dashboard": [
      {
        "metric": "Enable monthly referral outcome reports",
        "alignmentType": "direct"
      }
    ]
  }
}
```
📘 Markdown Representation¶
```markdown
## 🧠 Feature-to-Metric Alignment
- **Referral Lifecycle Management**
  - 🎯 Increases referral completion rate
  - ⏱️ Contributes to shortening referral cycle time
- **Alert Engine**
  - 📬 Enables 95%+ referral status update rate
- **Analytics Dashboard**
  - 📊 Enables outcome-based reporting
```
🧠 Memory Graph Usage¶
| Edge Type | Description |
|---|---|
| `enables` | Feature → Metric (primary path) |
| `contributes_to` | Weaker signal; indirect influence |
| `validates_by` | Metric → Feature (reverse trace) |
These links are used in observability dashboards and release scoring.
🤖 Agent Use¶
| Agent | Purpose |
|---|---|
| QA Agent | Builds test cases that validate metric fulfillment |
| Product Owner Agent | Scores MVP value alignment |
| Observability Agent | Configures dashboards and KPIs around critical features |
| Release Coordinator Agent | Assesses readiness based on metric traces |
📏 Validation Rules¶
- ✅ All critical metrics must map to at least one feature
- ✅ Features without metric alignment → flagged for review
- ✅ Alignment type must be defined (no implicit or vague links)
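These rules amount to a bidirectional coverage check over the `featureToMetricMap`. The sketch below is illustrative; the alignment-type set comes from the mapping logic earlier in this section, the rest is an assumption.

```python
# Hypothetical alignment check for the feature-to-metric rules above.
ALIGNMENT_TYPES = {"direct", "indirect", "enables", "contributes_to"}

def alignment_issues(feature_to_metric: dict, critical_metrics: list) -> list:
    issues, mapped_metrics = [], set()
    for feature, links in feature_to_metric.items():
        if not links:
            issues.append(f"Feature without metric alignment: {feature}")
        for link in links:
            if link.get("alignmentType") not in ALIGNMENT_TYPES:
                issues.append(f"Undefined alignment type on {feature}")
            mapped_metrics.add(link.get("metric"))
    issues += [f"Critical metric unmapped: {m}" for m in critical_metrics if m not in mapped_metrics]
    return issues
```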
✅ Summary¶
| Attribute | Value |
|---|---|
| 📦 Section Name | Feature-to-Metric Alignment |
| 🔁 Used By | QA, Observability, Release, Product validation |
| 🧠 Stored In | JSON + Markdown + memory graph links (feature ↔ metric ↔ impact) |
🧠 Constraint-to-Feature Coverage¶
🧭 Purpose¶
This section ensures that every defined constraint or compliance requirement is explicitly addressed by one or more features, architecture choices, or technical enablers. It provides a traceable, testable link between policy-level rules (e.g., HIPAA, security, platform boundaries) and software system behavior.
If a constraint exists, it must be enforced somewhere in the system — this mapping makes that visible.
🔗 Mapping Structure¶
Each mapping includes:
- `constraint` – from the Constraints & Compliance block
- `feature` – from the Initial Feature Set
- `enforcementType` – `architectural`, `runtime`, `UI-based`, `integration`, `external`
- `responsibleAgent` – optional agent expected to ensure coverage
✅ Example Mapping¶
```json
{
  "constraintToFeatureMap": {
    "HIPAA compliance for patient data": [
      {
        "feature": "Referral Lifecycle Management",
        "enforcementType": "architectural"
      },
      {
        "feature": "Secure External Sharing",
        "enforcementType": "integration"
      }
    ],
    "Azure-only hosting": [
      {
        "feature": "System Hosting & Deployment",
        "enforcementType": "infrastructure",
        "responsibleAgent": "DevOps Agent"
      }
    ]
  }
}
```
📘 Markdown Representation¶
```markdown
## 🧠 Constraint-to-Feature Coverage
- **HIPAA compliance for patient data**
  - 🧱 Referral Lifecycle Management → architectural
  - 🔗 Secure External Sharing → integration
- **Azure-only deployment**
  - ☁️ System Hosting → infrastructure (DevOps Agent)
```
🧠 Graph Links¶
| From | To | Edge Type |
|---|---|---|
| Constraint Node | Feature Node | enforced_by |
| Constraint Node | Agent Node | validated_by |
These graph links support test generation, observability hooks, and blueprint validation.
🤖 Agent Consumers¶
| Agent | Role |
|---|---|
| Security Advisor Agent | Validates coverage of encryption, access, compliance rules |
| DevOps Agent | Applies infra/hosting constraints to pipelines |
| QA Agent | Builds compliance test plans per mapping |
| Enterprise Architect Agent | Validates clean separation between compliance boundaries |
📏 Validation Rules¶
- ✅ Each `constraint` must map to at least one feature or architectural element
- ✅ If no feature applies, enforcement must be delegated to an `agent` or infrastructure
- ✅ Unmapped constraints → trigger `complianceGapDetected` warning
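The `complianceGapDetected` rule could be enforced with a pass over the `constraintToFeatureMap`. A minimal sketch follows; the warning payload shape is an assumption.

```python
# Hypothetical gap detection for the constraint-coverage rules above.
def compliance_gaps(constraint_map: dict) -> list:
    """Emit a warning payload for any constraint with no feature or agent coverage."""
    gaps = []
    for constraint, coverage in constraint_map.items():
        enforced = any(c.get("feature") or c.get("responsibleAgent") for c in coverage)
        if not enforced:
            gaps.append({"eventType": "complianceGapDetected", "constraint": constraint})
    return gaps
```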
✅ Summary¶
| Attribute | Value |
|---|---|
| 📦 Section Name | Constraint-to-Feature Coverage |
| 🧠 Format | JSON + Markdown with enforcement types |
| 🔁 Used By | Architecture, QA, Security, Compliance, DevOps |
| 🔗 Stored In | Memory graph: constraint → enforced_by → feature or agent node |
🧪 Observability & Testability Anchors¶
🧭 Purpose¶
This section establishes early observability hooks and test coverage anchors for the Vision Blueprint. It enables traceable, testable implementation from the start — turning vision into measurable reality across:
- Logging
- Metrics
- Traces
- Tests
- Dashboards
- Alerts
If it can’t be observed, it can’t be trusted. If it can’t be tested, it can’t be shipped.
📡 Observability Anchors¶
For each feature or success metric, define what should be emitted or tracked:
```json
{
  "observability": [
    {
      "target": "Referral Completion Rate",
      "type": "metric",
      "signal": "referral.completed.percent",
      "tags": ["feature:referral", "persona:coordinator"],
      "owner": "Observability Agent"
    },
    {
      "target": "Alert Engine",
      "type": "log",
      "signal": "alert.delivery.success",
      "tags": ["feature:alert", "channel:sms"]
    }
  ]
}
```
📘 Markdown example:
```markdown
## 🧪 Observability Anchors
- 🎯 Referral Completion Rate
  - 📈 Metric: `referral.completed.percent`
  - Tags: `feature:referral`, `persona:coordinator`
- 📬 Alert Engine
  - 📄 Log: `alert.delivery.success`
  - Tags: `feature:alert`, `channel:sms`
```
🧪 Testability Anchors¶
For each goal, define:
- 📌 Test trigger
- 🎯 Expected outcome
- 🧠 Agent or role responsible
Example:
```json
{
  "testability": [
    {
      "scenario": "Referral created and completed",
      "type": "E2E",
      "expected": "Referral marked complete and visible in dashboard",
      "ownerAgent": "QA Agent"
    },
    {
      "scenario": "HIPAA enforcement",
      "type": "Compliance",
      "expected": "Data-at-rest encrypted and access audit logged",
      "ownerAgent": "Security Advisor Agent"
    }
  ]
}
```
🧠 Stored As¶
- Indexed in memory with an `anchor` tag
- Used by test generators, observability agents, and blueprint validators
- Emitted in `blueprint-anchor-created` events
🤖 Agent Usage¶
| Agent | Role |
|---|---|
| QA/Test Generator Agent | Uses anchors to scaffold test plans and BDD scenarios |
| Observability Agent | Auto-generates dashboard templates and alerts from signals |
| Release Coordinator | Links anchors to readiness criteria |
📏 Validation Rules¶
- ✅ Every success metric must have at least one observability signal
- ✅ Every critical constraint must map to a testable scenario
- ✅ Anchors must have a clear `type` (metric, log, trace, test)
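For observability anchors, the type and signal rules reduce to a small check. This sketch is illustrative (it covers observability anchors, not the E2E/Compliance testability records); the function name is an assumption.

```python
# Hypothetical anchor validator for the rules above.
ANCHOR_TYPES = {"metric", "log", "trace", "test"}

def validate_anchor(anchor: dict) -> list:
    errors = []
    if anchor.get("type") not in ANCHOR_TYPES:
        errors.append(f"Anchor type must be one of {sorted(ANCHOR_TYPES)}")
    if not anchor.get("signal") and not anchor.get("scenario"):
        errors.append("Anchor needs a signal (observability) or scenario (testability)")
    return errors
```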
✅ Summary¶
| Attribute | Value |
|---|---|
| 📦 Section Name | Observability & Testability Anchors |
| 📡 Types | Logs, metrics, alerts, traces, test scenarios |
| 🔁 Used By | QA, Observability, DevOps, Release Coordinator |
| 🧠 Stored In | Memory anchors, observability trace index, test coverage maps |
🧱 Vision-Aligned Domain Signals (for DDD & Templates)¶
🧭 Purpose¶
This section extracts core domain concepts, events, and entities from the Vision Blueprint to inform:
- 🧠 DDD-driven modeling (bounded contexts, aggregates, entities)
- 🏗️ Template and microservice generation
- 🧩 Domain vocabulary embedding into all downstream agents
These signals are how the Factory understands the domain, structures the architecture, and aligns language across agents.
🧱 What Is a Domain Signal?¶
| Type | Description |
|---|---|
| `entity` | Key business object (e.g., Referral, Clinic, Client) |
| `event` | Domain-relevant occurrence (e.g., `ReferralCreated`) |
| `action` | Command or use-case verb (e.g., `AcceptReferral`) |
| `context` | Implicit or explicit bounded context (e.g., `ReferralFlow`) |
| `valueObject` | Structured descriptor (e.g., `ReferralStatus`, `ClientLocation`) |
✅ Example Signals¶
```json
{
  "domainSignals": {
    "entities": ["Referral", "Veterinarian", "Clinic", "Client"],
    "events": ["ReferralCreated", "ReferralCompleted", "AlertSent"],
    "actions": ["AcceptReferral", "TrackReferral", "NotifyClient"],
    "contexts": ["ReferralManagement", "Analytics"],
    "valueObjects": ["ReferralStatus", "CompletionRate"]
  }
}
```
📘 Markdown example:
```markdown
## 🧱 Domain Signals
- **Entities**: Referral, Clinic, Client
- **Events**: ReferralCreated, ReferralCompleted, AlertSent
- **Actions**: AcceptReferral, TrackReferral, NotifyClient
- **Contexts**: ReferralManagement, Analytics
- **Value Objects**: ReferralStatus, CompletionRate
```
🧠 Uses Across the Factory¶
| Agent | Usage |
|---|---|
| DDD Model Generator Agent | Extracts aggregates and entities from these terms |
| Microservice Template Agent | Scopes service per domain context |
| Prompt Injection Layer | Uses this vocabulary as domain hints and constraints |
| Test Generator Agent | Uses events for Gherkin-style test scenarios |
🔄 Memory Graph Integration¶
Each domain signal is stored and linked as:
- `entity_node`, `event_node`, etc.
- Linked to originating feature or metric
- Embedded and tagged with source traceId and semantic tags
📏 Validation Rules¶
- ✅ At least 1 `entity` and 1 `event` must be present
- ✅ All feature names must resolve to at least one domain signal
- ✅ No orphan signals (i.e., not referenced by any other section)
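The first two rules can be sketched over the `domainSignals` shape from the example. The substring-based feature resolution is a naive heuristic of my own, not the Factory's resolver.

```python
# Hypothetical domain-signal checks; the term-matching heuristic is an assumption.
def validate_domain_signals(signals: dict, feature_names: list) -> list:
    errors = []
    if not signals.get("entities"):
        errors.append("At least one entity is required")
    if not signals.get("events"):
        errors.append("At least one event is required")
    # Naive resolution: a feature resolves if any signal term appears in its name
    terms = {t.lower() for group in signals.values() for t in group}
    for feature in feature_names:
        if not any(term in feature.lower() for term in terms):
            errors.append(f"Feature does not resolve to any domain signal: {feature}")
    return errors
```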
✅ Summary¶
| Attribute | Value |
|---|---|
| 📦 Section Name | Vision-Aligned Domain Signals |
| 🔁 Used By | DDD modeling, service templating, prompt grounding, test generation |
| 🧠 Stored In | Domain knowledge graph, embedding vocab, semantic model layer |
📦 Downstream Agent Handoff & Events¶
🧭 Purpose¶
This section defines how the Vision Blueprint transitions from the Vision Architect Agent to downstream agents across the Factory. It ensures clean orchestration, traceable data handover, and event-based triggering for further planning, modeling, and generation.
This is where the vision becomes momentum — triggering the Factory’s generation wave.
🔁 Primary Hand-off Targets¶
| Agent | Purpose of Handoff |
|---|---|
| 👩💼 Product Manager Agent | Breaks features into epics and user stories |
| 🏛️ Enterprise Architect Agent | Identifies DDD contexts and architectural domains |
| 🧱 Blueprint Generator Agent | Generates module/service blueprints from feature blocks |
| 🔎 QA Planning Agent | Seeds test strategies from metrics and constraints |
| ☁️ DevOps & Release Agents | Use compliance + observability anchors to prepare pipelines |
📢 Events Emitted on Completion¶
| Event Name | Description |
|---|---|
| `vision-blueprint-created` | Main event for orchestration (includes traceId, metadata) |
| `vision-blueprint-validated` | Optional – emitted if human/agent validation passes |
| `vision-blueprint-corrected` | If corrections were applied post-validation |
| `feature-signal-emitted` | Per feature block, for streaming to story/model agents |
| `domain-vocabulary-extracted` | Domain signals sent to DDD modeling agents |
📦 Example: Vision Blueprint Created Event¶
```json
{
  "eventType": "vision-blueprint-created",
  "traceId": "trace-2025-06-08-vetspire-v1",
  "projectId": "vetspire-referrals",
  "triggeredBy": "vision-architect",
  "outputs": ["vision-blueprint.md", "vision-blueprint.json"],
  "handoffTargets": ["product-manager", "enterprise-architect", "qa-planner"]
}
```
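Routing such an event follows the section's `blueprint.vision.created.{project-id}` topic convention. The sketch below is a hypothetical helper, not a Factory API.

```python
# Hypothetical topic router mirroring the blueprint routing-topic templates.
def build_topic(event: dict) -> str:
    """Map a blueprint event to its mesh topic."""
    if event["eventType"] == "vision-blueprint-created":
        return f"blueprint.vision.created.{event['projectId']}"
    if event["eventType"] == "vision-blueprint-corrected":
        return f"blueprint.corrected.{event['traceId']}"
    return f"blueprint.vision.signal.{event.get('signalType', 'unknown')}"
```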
🔄 Event Routing & Observability¶
- All events published to the AI Factory Event Mesh (e.g., Azure Event Grid, Kafka)
- Blueprint-level routing topics follow:
  - `blueprint.vision.created.{project-id}`
  - `blueprint.corrected.{trace-id}`
  - `blueprint.vision.signal.{type}`
- Each event includes:
  - `traceId`, `projectId`, `originAgent`
  - `version`, `timestamp`, `outputs[]`
  - Optional: confidence score, regeneration flags
🧠 Memory & Trace Registration¶
Trace metadata registered in:

- `memory-graph/traces/{traceId}`
- `observability-index/blueprints/{traceId}`
- `handoff-log/{traceId}` for audit and recovery
📏 Validation Rules¶
- ✅ Every vision blueprint must trigger at least `vision-blueprint-created`
- ✅ All handoff targets must be named agents, not generic functions
- ✅ Events must include complete routing metadata (traceId, version, outputs)
✅ Summary¶
| Attribute | Value |
|---|---|
| 📦 Section Name | Downstream Agent Handoff & Events |
| 📢 Events | vision-blueprint-created, corrected, signal-emitted, handoff |
| 🔁 Used By | Orchestration system, agent routers, Studio UI, retry pipelines |
| 🧠 Stored In | Event mesh, memory trace, observability logs, agent dispatch queue |