# Tech Lead Agent Specification

The Tech Lead Agent is the technical reviewer and orchestrator for everything that flows through the engineering execution phase. It does not generate code; it audits, aligns, and validates everything produced by engineering agents such as Backend Developers, Code Committers, Generator Agents, Adapter Creators, and Infra Builders.
At scale, where thousands of microservices are autonomously created, the Tech Lead Agent is the AI-native guardian of quality, traceability, and architecture compliance.

> Think of this agent as the AI-powered technical lead on every feature, every PR, and every service.
## Purpose

The Tech Lead Agent enforces architectural, domain, security, and quality standards across the engineering lifecycle.
Its primary goals:

- Ensure all code, adapters, handlers, and APIs adhere to Clean Architecture and DDD boundaries
- Validate the presence of required tests, trace metadata, ports, and layering contracts
- Confirm alignment between declared service architecture and actual engineering output
- Enforce traceability between Vision → Model → Design → Code → Commit
- Provide pre-merge agent-driven pull request reviews, rejections, or approvals

> The Tech Lead Agent is not optional: it is critical to enforcing production-grade quality in an autonomous software factory.
## Role Summary

| Role | Description |
|---|---|
| Engineering Enforcer | Reviews all engineering agent outputs for contract and architecture compliance |
| Semantic Validator | Ensures correctness of adapter usage, DTO mappings, ports, and pipelines |
| Flow Orchestrator | Coordinates handoff and integration between multiple agents |
| Traceability Guardian | Injects and verifies `trace_id`, `context_id`, and `tenant_id` metadata |
| Final Reviewer | Emits PR approvals, rejections, or suggestions to the Pull Request Agent |
## Why This Agent Exists

In traditional teams, a human tech lead must:

- Review dozens of services, features, and PRs
- Understand Clean Architecture, DDD, layering, adapter boundaries, traceability
- Maintain test discipline, cross-team consistency, and security

In the ConnectSoft Factory, this would require hundreds of human reviewers. Instead, the Tech Lead Agent performs these reviews in seconds, across entire stacks, for every feature.
## Example Responsibilities

| Trigger | Tech Lead Agent Action |
|---|---|
| Use case handler created | Verify it implements the correct input port and aligns with the domain |
| New feature with no test | Reject PR and emit `MissingTestCoverageEvent` |
| Event published but not traced | Raise `TraceIdMissingViolation` |
| Controller lacks `[Authorize]` | Inject security policy or escalate to human architect |
| Handler bypasses application model | Reject and regenerate handler via Generator Agent |
## High-Level Collaboration Map

```mermaid
flowchart LR
    VisionAgent --> ArchAgent
    ArchAgent --> AppArch
    AppArch --> MicroserviceGenerator
    MicroserviceGenerator --> BackendDev
    BackendDev --> TechLeadAgent
    AdapterGen --> TechLeadAgent
    CodeCommitter --> TechLeadAgent
    TechLeadAgent --> PullRequestAgent
```

- Positioned at the final checkpoint of engineering
- Blocks low-quality or misaligned output
- Coordinates approval or rework across agents
## Outputs This Agent Emits

| Output | Purpose |
|---|---|
| `validation-report.md` | Markdown document with annotated pass/fail results |
| `pull-request-review.yaml` | YAML summary of compliance checks for PR integration |
| `trace-validation-log.json` | Metadata trace across handler, adapter, DTO, and ports |
| `rejection-event.json` | Explanation of failure and instruction for auto-regeneration |
| `HumanEscalationSuggested` | Trigger for human tech lead override or intervention |
## Example Flow (Visualized)

```mermaid
sequenceDiagram
    participant Generator
    participant BackendDev
    participant TechLead
    participant CodeCommitter
    participant PRReviewer
    Generator->>BackendDev: Scaffold use case
    BackendDev->>TechLead: Submit feature for validation
    TechLead->>TechLead: Validate ports, adapters, tests, trace
    TechLead-->>CodeCommitter: Submit approved artifact
    TechLead-->>PRReviewer: Annotated pull request
```
## Summary
- The Tech Lead Agent is the final validator of engineering output in the software factory
- It enforces:
- Clean Architecture
- Traceability
- Test discipline
- Port-to-adapter integrity
- It collaborates across:
- Generator, Backend, Adapter, Infra, and Committer Agents
- It acts as the AI-native tech governance layer at scale
## Core Responsibilities
The Tech Lead Agent is responsible for enforcing technical correctness, architectural integrity, traceability, and code quality across all engineering outputs before integration, commit, or release.
Below is a breakdown of its structured responsibilities, organized by category:
### 1. Architecture & Layering Compliance

| Responsibility | Description |
|---|---|
| Validate Clean Architecture layering | Ensure domain logic is not accessed directly from infrastructure or the API |
| Enforce port-driven design | Verify that use case handlers implement declared input/output ports |
| Match adapters to designated layer | Confirm that adapter components are isolated and follow the correct dependency direction |
| Ensure DTO and entity segregation | Disallow direct exposure of domain entities through service models |
### 2. Test Discipline & Quality Gates

| Responsibility | Description |
|---|---|
| Enforce test coverage threshold | Require all features to include unit and/or integration tests |
| Validate test types | Ensure there are domain, adapter, application, and messaging tests |
| Attach test summary to PR output | Generate a test matrix and highlight gaps or missing coverage |
| Reject untested features | Fail validation for handlers, endpoints, or services with missing tests |
### 3. Traceability Enforcement

| Responsibility | Description |
|---|---|
| Validate presence of `trace_id`, `correlation_id`, and `tenant_id` | Required in all controllers, handlers, adapters, and messages |
| Match service metadata to vision flow | Ensure correct service → use case → entity → vision ID alignment |
| Check trace headers across HTTP, message bus, and actor messages | Confirm propagation of observability context |
| Link PR and commit to blueprint | Ensure commits link back to the declared use case or blueprint item |
### 4. Security & Policy Compliance

| Responsibility | Description |
|---|---|
| Check `[Authorize]` on sensitive endpoints | Validate role and scope enforcement |
| Validate authentication injection into use cases | Ensure user identity flows into the service layer |
| Detect missing tenant guards | Prevent cross-tenant data leakage via policy checks |
### 5. Output Integration and Handoff

| Responsibility | Description |
|---|---|
| Emit validation artifacts | Markdown, YAML, and JSON output describing the validation result |
| Send results to Pull Request Agent | Approve or reject PRs with attached evidence |
| Trigger retries with Generator Agent | If a handler, adapter, or port is misaligned |
| Escalate to human reviewer | If a blocking error is ambiguous or a policy exception is needed |
### 6. Inter-Agent Orchestration
| Actor | Interaction |
|---|---|
| Microservice Generator | Checks that all input/output ports are implemented |
| Adapter Generator | Validates adapter/component wiring integrity |
| Backend Developer Agent | Reviews generated/edited code before approval |
| Code Committer Agent | Provides final merge clearance or rejection |
| Application Architect Agent | Alerts if architectural violations are found |
### Responsibility Matrix

```mermaid
graph TD
    A[Port & Layer Validation] --> TL(Tech Lead)
    B[Test Coverage Review] --> TL
    C[Security Policy Enforcement] --> TL
    D[Trace & Context Enforcement] --> TL
    E[Pull Request Gatekeeping] --> TL
    F[Failure Recovery Orchestration] --> TL
```
> Every responsibility is traceable, observable, and connected to a policy rule

### Examples of Responsibilities in Action
| Scenario | Action Taken |
|---|---|
| A use case has no test | Tech Lead blocks commit with MissingTestError |
| An adapter references domain entities directly | Tech Lead triggers generator to fix adapter layering |
| Trace metadata is missing from message consumer | Tech Lead injects trace ID and flags observability failure |
| Handler is not linked to declared port | Tech Lead regenerates handler or blocks PR |
## Summary
The Tech Lead Agent acts as the governing layer of software engineering, with responsibilities covering:
- Architectural correctness
- Code quality and test enforcement
- Security policy adherence
- Observability compliance
- Agent integration governance
- Developer and committer coordination
## Inputs
The Tech Lead Agent operates on a rich set of declarative, semantic, and structured inputs provided by both upstream agents and the ConnectSoft platform.
These inputs allow the agent to validate, enforce, orchestrate, and correct engineering outputs based on architecture contracts, code artifacts, and pipeline metadata.
### Input Categories

| Category | Description |
|---|---|
| Design Contracts | Port definitions, service model specs, domain models |
| Test Artifacts | Test result manifests and test coverage metadata |
| Trace Metadata | Trace ID, correlation ID, tenant ID tags |
| Code and Layer Output | Handlers, controllers, adapters, DTOs |
| Agent Events | Messages or files from other agents |
| Pull Request Context | Metadata and diff previews from the PR creator agent |
### Structured Input Files

| File | Source Agent | Description |
|---|---|---|
| `input-ports.json` | Application Architect Agent | Defines expected input ports and interface names |
| `output-ports.json` | Application Architect Agent | Expected output adapters and handlers |
| `domain-model.cs` | Domain Modeler Agent | Source of entities, aggregates, value objects |
| `use-case-map.yaml` | Microservice Generator | Blueprint-to-use-case mapping |
| `test-matrix.json` | Backend Developer / QA Agent | Coverage by handler, service, and port |
| `adapters-manifest.json` | Adapter Generator Agent | All generated or planned adapter bindings |
| `authorization-map.yaml` | Security Policy Agent | Expected auth decorators per route or use case |
| `trace-metadata.json` | Observability Agent / runtime output | Traceability records for all service components |
| `pull-request-diff.md` | PR Creator Agent | Annotated code delta and feature traceability info |
### Semantic Metadata from Agents

| Source | Metadata Received |
|---|---|
| Generator Agent | `FeatureGeneratedEvent` with file paths and identifiers |
| Adapter Agent | `AdapterReady` with interface, input/output, and layer tag |
| Committer Agent | `PreCommitReadyToReview` trigger |
| QA Agent | `TestCoverageStatus` including thresholds and skipped tests |
| Bot/Skill Agents | Optional skill-to-use-case mappings (if AI flows are involved) |

> All inputs use the ConnectSoft agent event schema (`.event.json`, `.map.yaml`, or `.trace.json` formats)
### Example Input: `input-ports.json`

```json
[
  {
    "name": "PlaceOrder",
    "interface": "IPlaceOrderHandler",
    "expectedType": "PlaceOrderInput",
    "module": "ApplicationModel"
  }
]
```
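To make the contract concrete, here is a minimal sketch of how a declared port could be checked against generated handler source. This is a plain-text scan in Python for illustration only; per the spec, the actual agent performs Roslyn-grade semantic analysis via MCP.

```python
import json
import re

def validate_port_implementation(ports_json: str, handler_source: str) -> list[str]:
    """Return one violation per declared port whose interface the handler
    source does not appear to implement (illustrative text scan)."""
    violations = []
    for port in json.loads(ports_json):
        iface = port["interface"]
        # A conforming handler declares the interface in its class signature,
        # e.g. "public class PlaceOrderHandler : IPlaceOrderHandler".
        pattern = rf"class\s+\w+\s*:\s*[^{{]*\b{re.escape(iface)}\b"
        if not re.search(pattern, handler_source):
            violations.append(f"{port['name']}: {iface} not implemented")
    return violations
```

Run against the example above, a handler missing `: IPlaceOrderHandler` in its class signature would yield a violation, while a conforming handler yields an empty list.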
### Example Input: `test-matrix.json`
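The source omits the matrix body at this point. A plausible shape, consistent with its description above as "coverage by handler, service, and port" (all field names here are illustrative assumptions, not the actual schema):

```json
[
  {
    "handler": "PlaceOrderHandler",
    "service": "OrderService",
    "port": "PlaceOrder",
    "unit_tests": 4,
    "integration_tests": 1,
    "coverage": 0.87
  }
]
```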
### Example Agent Trigger

```json
{
  "event": "FeatureGeneratedEvent",
  "source": "MicroserviceGeneratorAgent",
  "artifact": "UseCases/PlaceOrderHandler.cs",
  "trace_id": "order-2025-00013",
  "blueprint_id": "usecase-3872"
}
```

- Used to begin the validation flow
- Connects the code artifact to its blueprint and trace ID
### Inputs from GitOps / PR Context

| Type | Description |
|---|---|
| `diff-preview.md` | Changes from branch vs. main |
| `pr-manifest.yaml` | Feature ID, use cases, modules affected |
| `ci-metadata.json` | Build ID, timestamp, source commit, test run references |

> Used to enrich the validation report and generate PR annotations
### AI-Aware Inputs

- All inputs are machine- and agent-readable
- Semantic identifiers (`blueprint_id`, `trace_id`, `port_name`, `adapter_type`) are normalized
- Traces are enriched across services (e.g., handler → adapter → DTO → trace)
## Summary

The Tech Lead Agent receives inputs from:

- Blueprint-based design documents
- Test coverage metrics and QA artifacts
- Adapter and generator outputs
- The Git/PR integration layer
- Semantic metadata and trace flows
All of which form the basis for technical validation, traceability enforcement, and PR governance.
## Outputs

The Tech Lead Agent produces a diverse and structured set of validation outputs, enforcement reports, and collaboration events that drive downstream decisions in:

- Pull request validation
- CI/CD gating
- Agent orchestration
- Developer feedback loops
- Human escalation workflows
Each output is machine-readable, traceable, and linked back to the blueprint, trace ID, and context that triggered it.
### Output Types

| Type | Purpose |
|---|---|
| Reports | Markdown and YAML artifacts for agent, human, and pipeline consumption |
| Event Messages | JSON-based triggers, approvals, or rejection explanations |
| Trace Metadata | Structured output linking source → port → DTO → handler |
| Validation Artifacts | Explicit pass/fail metadata per rule |
| Escalation Signals | Human-intervention triggers with embedded reasoning |
| Regeneration Requests | Signals to Generator or Adapter agents for corrective action |
### Report Outputs

#### `validation-report.md`

A structured report with status summaries, component breakdowns, and links to affected files:

```markdown
# Tech Lead Validation Report

## Affected Component: PlaceOrderHandler.cs

- ✅ Input port implemented correctly
- ✅ DTO matches declared type
- ❌ No `[Authorize]` attribute found
- ❌ Missing integration test

## Recommendation:

Block PR and trigger GeneratorAgent retry.

Trace ID: `order-2025-00341`
Blueprint ID: `usecase-3872`
```
#### `pull-request-review.yaml`

Used by the PR Agent to annotate and approve/reject:

```yaml
status: rejected
reason: security_policy_missing
affected_files:
  - Controllers/OrderController.cs
  - UseCases/PlaceOrderHandler.cs
actions:
  - suggest_regeneration
  - escalate_to_human
trace_id: order-2025-00341
```
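A sketch of how such a review file might be rendered. The YAML is written by hand to stay dependency-free; the field names mirror the example above, while the function itself is illustrative rather than the agent's actual emitter.

```python
def render_pr_review(status: str, reason: str, affected_files: list[str],
                     actions: list[str], trace_id: str) -> str:
    """Render a pull-request-review.yaml body from validation results."""
    lines = [f"status: {status}", f"reason: {reason}", "affected_files:"]
    lines += [f"  - {f}" for f in affected_files]
    lines.append("actions:")
    lines += [f"  - {a}" for a in actions]
    # The trace_id ties the review back to the originating blueprint feature.
    lines.append(f"trace_id: {trace_id}")
    return "\n".join(lines)
```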
### Event Outputs

| Event | Description |
|---|---|
| `PullRequestValidated` | Sent to PR Agent on pass |
| `PullRequestRejectedDueToMissingTrace` | Triggered when `trace_id` is absent |
| `TestCoverageViolationDetected` | Indicates a test matrix threshold breach |
| `ArchitectureViolationDetected` | Ports, layering, or DTO misuse |
| `HumanEscalationSuggested` | Notifies that a human reviewer is needed |
| `CustomGenerationRequired` | Generator Agent instructed to retry the component |
| `EnforcementPassed` | Marks all structural and trace rules satisfied |

> These events are sent as `.event.json` and are MCP-compatible
### `trace-validation-log.json`

Captures trace linkage:

```json
{
  "trace_id": "order-2025-00341",
  "aggregate": "Order",
  "handler": "PlaceOrderHandler",
  "adapter": "HttpAdapter",
  "ports": ["PlaceOrder"],
  "dto": "PlaceOrderInput",
  "layer": "ApplicationModel",
  "status": "complete"
}
```

> Used by the Observability and Generator agents
### Corrective Signals to Agents
| Recipient | Trigger |
|---|---|
| Microservice Generator Agent | If handler is invalid or missing |
| Adapter Generator Agent | If adapter references domain layer |
| Backend Developer Agent | If manual edits break validation |
| Code Committer Agent | If PR is unsafe to merge |
| Human Architect | If multi-agent conflict requires manual intervention |
### Validation Result Tags (Included in All Outputs)

| Tag | Purpose |
|---|---|
| `status:` | `passed`, `failed`, `escalate`, `retry` |
| `trace_id:` | Uniquely links the validation to a blueprint feature |
| `reason:` | Short code or phrase (e.g., `missing_test`, `invalid_layer`) |
| `affected_files:` | List of changed files |
| `actions:` | Recommended next steps (`retry`, `escalate`, `fix`, `approve`) |
## Summary

The Tech Lead Agent produces structured, actionable outputs that:

- Drive PR approval and CI/CD policy gates
- Power downstream regeneration or fix workflows
- Capture traceability and architectural alignment
- Alert humans when needed, with full context

All outputs are multi-agent compatible, Git-integrated, and semantically traceable across the ConnectSoft Factory.
## Knowledge Base

The Tech Lead Agent operates with an extensive preloaded knowledge base, combining ConnectSoft's architectural contracts, platform standards, and multi-agent collaboration rules.
This foundational knowledge enables the agent to autonomously:

- Validate architectural boundaries
- Enforce layering and cross-cutting concerns
- Interpret trace metadata and feature blueprints
- Cross-link agent artifacts using semantic tags
- Apply Clean Architecture and DDD design policies
### Core Knowledge Domains

| Domain | Description |
|---|---|
| Clean Architecture | Layering rules, allowed dependencies, boundary contracts |
| Domain-Driven Design | Entity/value object structure, aggregate roots, use case patterns |
| Microservice Standards | DTOs, service models, handler conventions, port mappings |
| Testing Strategy | Unit, integration, BDD, architecture test requirements |
| Observability Requirements | Trace ID, tenant ID, correlation ID propagation |
| Security Policy | Role, scope, and tenant guards for service endpoints |
| Traceability Contracts | From blueprint ID → handler → commit → pull request |
### Knowledge Artifacts (Preloaded)

| Artifact | Description |
|---|---|
| `clean-architecture.md` | ConnectSoft layering rules and violations |
| `domain-driven-design.md` | Aggregate, entity, and event structure documentation |
| `microservice-template.md` | Standard solution structure and agent expectations |
| `testing-strategy.md` | Required test coverage and output formats |
| `traceability-spec.md` | Required tags and headers in messages and handlers |
| `service-structure.yaml` | Mapping of typical input/output ports and modules |
| `agent-collaboration-patterns.md` | Handshake patterns and data contracts between engineering agents |
| `agent-microservice-standard-blueprint.md` | Canonical rules for how blueprints map to code artifacts |

> These documents are part of its immutable system prompt or are loaded at runtime from version-controlled knowledge repositories.
### Semantic Knowledge Graph

The Tech Lead Agent internally models a graph linking blueprints, use cases, ports, handlers, commits, and traces.
This allows it to:

- Detect orphaned handlers
- Spot missing ports
- Map commit → trace → blueprint
- Validate PR-to-domain alignment
### Static Rules Embedded

| Rule ID | Description |
|---|---|
| `ARC001` | DomainModel must not be referenced in the Adapter layer |
| `TEST003` | Each use case handler must include unit and integration tests |
| `SEC005` | Secure endpoints must have `[Authorize]` or an equivalent guard |
| `OBS007` | All outgoing messages must propagate `trace_id` and `correlation_id` |
| `DTO002` | Domain entities must not be returned directly via service models |

> These rules are checked via a traceable rule engine built into the Semantic Kernel planner
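As an illustration of how such rules might be evaluated, here is a toy rule engine in Python. The rule IDs come from the table above, but the path conventions and matching logic are assumptions for the sketch, not the actual Semantic Kernel planner.

```python
import re

# Illustrative rule definitions: "applies_to" selects files by path,
# "forbidden"/"required" are patterns checked against file contents.
RULES = {
    "ARC001": {
        "applies_to": r"Adapters/",
        # Adapter-layer files must not import the domain model namespace.
        "forbidden": r"using\s+\w+\.DomainModel",
    },
    "OBS007": {
        "applies_to": r"Messaging/",
        # Outgoing messages must carry a trace_id.
        "required": r"trace_id",
    },
}

def check_rules(path: str, source: str) -> list[str]:
    """Return the IDs of the rules the given file violates."""
    violations = []
    for rule_id, rule in RULES.items():
        if not re.search(rule["applies_to"], path):
            continue
        if "forbidden" in rule and re.search(rule["forbidden"], source):
            violations.append(rule_id)
        if "required" in rule and not re.search(rule["required"], source):
            violations.append(rule_id)
    return violations
```

An adapter file that imports `DomainModel` would trip `ARC001`; a messaging publisher with no `trace_id` would trip `OBS007`.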
### Learning from History

The agent may also consume:

| Source | Learning |
|---|---|
| `previous-validation-reports/` | Example failures and resolution history |
| `agent-output-archive/` | Patterns of regeneration or escalation |
| `commit-lint.json` | Common anti-patterns across services |
| `human-review-feedback/` | Escalation outcomes and acceptance decisions |
These form part of long-term memory to improve retry strategies and reduce false positives.
### Ontologies and Schema Understanding

The agent supports:

- JSON Schema for ports, adapters, trace validation
- YAML config structures for deployment awareness
- Markdown and inline code extraction for architecture validation
It also understands C# Roslyn AST structures for semantic analysis of code input (via MCP-backed parsing).
## Summary
The Tech Lead Agent operates with a rich, structured, and versioned knowledge base that includes:
- Clean Architecture rules
- DDD principles
- Agent interaction contracts
- Observability, test, and trace enforcement logic
All of which form the basis for autonomous, explainable validation and agent orchestration.
## Process Flow

The Tech Lead Agent follows a deterministic, multi-step validation and orchestration pipeline. This internal flow allows it to:

- Receive new engineering artifacts
- Validate alignment to architecture, test, and trace rules
- Enforce blocking policies or retry via other agents
- Collaborate with PR and commit agents for approvals or rejections
### End-to-End Flow Diagram

```mermaid
flowchart TD
    Start([Receive Artifact Bundle or Agent Trigger])
    ParseInputs[[Parse inputs: ports, trace, tests, code]]
    ValidateArchitecture[[Enforce layering, DDD, Clean Architecture]]
    ValidateTraceability[[Check trace_id, correlation_id, blueprint linkage]]
    ValidateTests[[Check for unit, integration, BDD test presence]]
    CheckSecurity[[Ensure auth policies and guards]]
    ValidateOutput[[Aggregate result into report]]
    Decision{Pass or Fail?}
    OnPass[[Emit pull-request-review.yaml + validation-report.md]]
    OnFail[[Emit rejection event + suggest fix]]
    TriggerNext[[Notify PR Agent or Generator Agent]]
    Escalate{Escalate to human?}
    Start --> ParseInputs
    ParseInputs --> ValidateArchitecture
    ValidateArchitecture --> ValidateTraceability
    ValidateTraceability --> ValidateTests
    ValidateTests --> CheckSecurity
    CheckSecurity --> ValidateOutput
    ValidateOutput --> Decision
    Decision -->|Pass| OnPass
    Decision -->|Fail| OnFail
    OnFail --> Escalate
    OnPass --> TriggerNext
    Escalate --> TriggerNext
```
### Internal Stages
#### 1. Input Parsing

- Load `input-ports.json`, `domain-model.cs`, `adapters-manifest.json`
- Normalize metadata from file paths, diff previews, blueprint IDs
- Attach semantic context (e.g., `trace_id`, `tenant_id`)
#### 2. Architecture Validation

- Apply rules from `clean-architecture.md`
- Enforce boundaries:
  - No adapter → domain references
  - No service model → repository coupling
- Match handler to port definition
- Validate use case input/output type signatures
#### 3. Traceability Checks

- Ensure presence and propagation of `trace_id`, `correlation_id`, and `tenant_id`
- Match source code to blueprint trace
- Validate observability decorators for gRPC/REST/messaging endpoints
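The traceability check above can be sketched as a simple presence test over a message envelope. This is a minimal illustration assuming a `headers` dict; real enforcement would also inspect HTTP headers and actor message envelopes.

```python
# Fields required by the traceability contract in this spec.
REQUIRED_TRACE_FIELDS = ("trace_id", "correlation_id", "tenant_id")

def verify_trace_propagation(message: dict) -> list[str]:
    """Return the trace fields missing or empty in a message's headers."""
    headers = message.get("headers", {})
    return [field for field in REQUIRED_TRACE_FIELDS if not headers.get(field)]
```

An empty return value means the message satisfies the contract; any returned field names become the basis for a `TraceIdMissingViolation`-style rejection.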
#### 4. Test Compliance Validation

- Load `test-matrix.json` or a recent test report
- Ensure unit and integration tests exist for each new handler
- Optional: ensure a BDD `.feature` file matches the use case
- Detect untested adapters or DTO-to-entity mappings
#### 5. Security Enforcement

- Scan for `[Authorize]`, `Roles`, and custom `IAuthorizationPolicyProvider` usage
- Check SecurePort flags and tenant-aware guards
- Validate that Bot and SK skills don't bypass auth (if present)
#### 6. Output Aggregation

- Compile pass/fail state
- Annotate reasons (rule IDs, file locations, context)
- Build:
  - `validation-report.md`
  - `pull-request-review.yaml`
  - Optional `trace-validation-log.json`
#### 7. Routing and Orchestration

- If passed:
  - Send to the Pull Request Agent for the merge gate
- If failed:
  - Generate `rejection-event.json`
  - Optionally send a retry signal to the Generator or Adapter Agent
- If ambiguous:
  - Emit `HumanEscalationSuggested` with context
### Retry Loop Path

If code can be regenerated:

- Emit `CustomGenerationRequired`
- Specify which part: handler, adapter, DTO, or test
- Include the reference trace ID and rule violations

> This enables a self-healing microservice development loop
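A sketch of how the retry signal might be assembled. The event name and the `trace_id`/`agent_origin` fields come from this spec; `target_part` and the overall payload shape are illustrative assumptions.

```python
import json
import uuid
from datetime import datetime, timezone

def emit_correction_event(target: str, trace_id: str, violations: list[str]) -> str:
    """Build a CustomGenerationRequired event payload as JSON."""
    event = {
        "event": "CustomGenerationRequired",
        "agent_origin": "tech-lead-agent",
        "target_part": target,  # handler | adapter | dto | test
        "trace_id": trace_id,
        "violations": violations,  # rule IDs that triggered the retry
        "execution_id": str(uuid.uuid4()),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(event, indent=2)
```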
### Context Propagation

During flow, the agent injects and maintains:

| Field | Source |
|---|---|
| `trace_id` | Blueprint or vision source |
| `execution_id` | Internal validation instance |
| `agent_origin` | `tech-lead-agent` |
| `decision_scope` | `validation`, `pr-approval`, `regeneration` |
| `related_agents` | Participating upstream agents (for blame/routing) |
## Summary
The Tech Lead Agent executes a precise validation pipeline, enforcing rules across:
- Architecture
- Observability
- Testing
- Security
- Agent traceability
Every validation run is traceable, explainable, and recoverable, with optional self-correction or escalation paths.
## Skills and Kernel Functions

The Tech Lead Agent is built on Semantic Kernel (SK) and integrates multiple domain-specific skills, validation plugins, and custom planning logic.
These capabilities allow it to analyze, enforce, trace, and correct engineering artifacts autonomously and at scale.
### Key Categories of Skills

| Category | Purpose |
|---|---|
| Validation Skills | Run semantic and architectural checks |
| Traceability Skills | Verify linkages across `trace_id`, `blueprint_id`, ports |
| Testing Skills | Ensure test coverage and test types |
| Security Skills | Validate presence of guards and secure patterns |
| Corrective Planning | Suggest fixes or auto-trigger regeneration |
| Meta-Coordination | Interact with other agents and orchestrators |
### Kernel Functions

The following functions are registered in the agent's Semantic Kernel configuration:

#### `validation.*`

| Function | Description |
|---|---|
| `validateArchitectureCompliance()` | Parses solution structure and detects layering violations |
| `validatePortImplementation()` | Matches input/output ports to handlers and adapters |
| `validateDTOIsolation()` | Ensures DTOs don't directly expose domain objects |
#### `trace.*`

| Function | Description |
|---|---|
| `verifyTracePropagation()` | Confirms presence of `trace_id`, `correlation_id`, `tenant_id` |
| `traceBlueprintToHandler()` | Traces the path from `blueprint_id` to handler, test, and commit |
| `checkHandlerTraceMatch()` | Ensures the handler's trace metadata matches the expected blueprint |
#### `test.*`

| Function | Description |
|---|---|
| `checkTestCoverage()` | Validates test types per use case or adapter |
| `assertTestThreshold(minCoverage)` | Blocks if the minimum test coverage is not met |
| `mapTestToHandler()` | Links test files to handlers or adapters via naming and trace hints |
#### `security.*`

| Function | Description |
|---|---|
| `verifyAuthorization()` | Checks for `[Authorize]` or equivalent in secure routes |
| `ensureRoleGuarding()` | Ensures roles are explicitly declared for each endpoint |
| `checkTenantIsolation()` | Validates that tenant checks are enforced in handlers and adapters |
#### `correct.*`

| Function | Description |
|---|---|
| `suggestHandlerFix()` | Determines the regeneration candidate for invalid use case handlers |
| `recommendAdapterRewrite()` | Suggests an Adapter Agent retry for incorrect adapter implementations |
| `emitCorrectionEvent(target)` | Generates `CustomGenerationRequired` for another agent |
#### `agent.*`

| Function | Description |
|---|---|
| `routeToAgent(agentType)` | Delegates to another agent or planner |
| `emitEscalationToHuman()` | Triggers an escalation event for ambiguous or blocked validations |
| `logValidationSummary()` | Logs structured metrics and sends an event summary to the trace store |
### Plug-in Integration (Optional)

The Tech Lead Agent can load:

- Git structure plugins (e.g., file classifiers, layer matchers)
- Code parsing (via MCP servers with Roslyn AST or SourceKit)
- Unit test analyzers (for xUnit, MSTest, SpecFlow)
- Doc parsers (for matching Markdown to YAML or code)
- Static analyzers (optional rules from Roslyn/NetArchTest)
### Sample Skill Chain: Handler Validation

```yaml
plan:
  - validatePortImplementation
  - validateDTOIsolation
  - verifyTracePropagation
  - checkTestCoverage
  - verifyAuthorization
  - emitCorrectionEvent  # only if a check fails
```
Can be attached to:
- PR creation
- Service generation
- Agent retry loop
- Manual override triggers
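A minimal sketch of how such a chain might execute, assuming each skill takes an artifact context and returns a list of violations. The runner and the stand-in skills are illustrative, not SK's actual planner.

```python
def run_skill_chain(artifact: dict, skills: list) -> dict:
    """Execute validation skills in order, stopping at the first skill
    that reports violations so a correction event can be emitted."""
    for skill in skills:
        violations = skill(artifact)
        if violations:
            return {
                "status": "failed",
                "failed_skill": skill.__name__,
                "violations": violations,
            }
    return {"status": "passed"}

# Illustrative stand-ins for two of the kernel functions named in the plan.
def checkTestCoverage(artifact):
    return [] if artifact.get("has_tests") else ["TEST003"]

def verifyAuthorization(artifact):
    return [] if artifact.get("has_authorize_attribute") else ["SEC005"]
```

The short-circuit matters: once a rule fails, the remaining skills are skipped and the failure is routed into the correction or escalation path described above.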
## Summary
The Tech Lead Agent is equipped with a rich set of Semantic Kernel functions and ConnectSoft-native skills to:
- Analyze and enforce Clean Architecture
- Trace blueprint β code β tests
- Enforce quality and test discipline
- Trigger agent correction or human review
All functions are modular, pluggable, and observable, ensuring transparent, autonomous governance.
## Technology Stack Overview

The Tech Lead Agent is a cloud-native, agentic AI validator, built on top of the ConnectSoft platform using a combination of:

- AI orchestration (Semantic Kernel, OpenAI/Azure OpenAI)
- Cloud-native infrastructure (Azure Functions, Key Vault, App Config)
- Agent interoperability protocols (MCP servers, YAML/JSON events)
- .NET Core ecosystem (code parsing, test validation, architectural analysis)
Its runtime is optimized for high-frequency validation across thousands of services, with secure, observable, and testable execution.
### Core Technologies
| Technology | Role |
|---|---|
| Semantic Kernel | Planner, skill registry, function chaining, context orchestration |
| Azure OpenAI | LLM-backed completions, reasoning over structure and history |
| Azure Functions / Containers | Stateless, scalable execution of validation workflows |
| .NET Core (8.0) | Code analysis, test validation, static rules |
| MCP Servers | Parsing, semantic linking, AST generation |
| GitHub / Azure DevOps APIs | PR annotations, commit hooks, trace linking |
| Pulumi (optional) | For provisioning secrets, app configs, or key vault |
| Redis or Azure Cache | Short-term caching of blueprint → trace → PR links |
| Vector DB (e.g., Qdrant, Pinecone) | Long-term context for prior decisions and escalation outcomes |
| Serilog + OpenTelemetry | Logging, metrics, and distributed trace propagation |
### Model Support

| Model | Purpose |
|---|---|
| `gpt-4` / `gpt-4o` | Complex decision reasoning (architecture, trace matching) |
| `gpt-3.5-turbo` | Lightweight planning and skill invocation |
| Azure OpenAI | Enterprise-secure model access (preferred) |
| Custom plugins | Code parsing, repo summarization, validation enrichment |

Models are selected based on prompt size, skill depth, and urgency level.
### Model Context Protocol (MCP)
The agent uses MCP servers for:
| Action | MCP Server |
|---|---|
| C# AST parsing | roslyn-server |
| Code diff analysis | diff-server |
| Event tracing | trace-router-server |
| Diagram generation | diagram-server (for visual validation reports) |
> All MCP responses are structured for SK plugin invocation
### Security and Secrets
| Component | Tech |
|---|---|
| Authentication | Managed Identity or Service Principal |
| Secrets | Azure Key Vault or .env via pipeline |
| Role enforcement | App Config or Azure AD scopes |
| Agent signature | JWT with trace_id + hash to identify agent output authenticity |
### Observability Stack
| Type | Tool |
|---|---|
| Logs | Serilog to file + console |
| Metrics | Prometheus-exported or Azure Monitor counters |
| Traces | OpenTelemetry spans for each validation pass |
| Dashboards | Grafana or Azure Dashboards (agent lifecycle views) |
> Every validation step is logged, traced, and mapped to a `trace_id`
### Multi-Agent Mesh Compatibility

- All outputs conform to the ConnectSoft Agent Event Contract
- Interoperable with:
  - Generator Agent
  - Adapter Agent
  - Application Architect
  - Pull Request Agent
  - Human Feedback Agent

Format: `.event.json`, `.trace.yaml`, `.review.md`
## Summary
The Tech Lead Agent runs on a modern, cloud-native, AI-augmented platform using:
- Semantic Kernel as its brain
- OpenAI/Azure OpenAI as its reasoning core
- .NET Core + MCP Servers for deterministic rule enforcement
- Azure and GitHub for infrastructure, secret, and deployment automation
- OpenTelemetry for full observability and traceability
## System Prompt

The system prompt is the foundational instruction that bootstraps the Tech Lead Agent's identity, behavior, and ruleset within the Semantic Kernel.
It encodes:

- The agent's responsibilities
- The standards it enforces
- The scope of validation
- The tone of communication
- The expected response behavior in success, failure, or escalation
### System Prompt (Full)

```text
You are the **Tech Lead Agent** in the ConnectSoft AI Software Factory.

Your responsibility is to act as the **technical validator, reviewer, and
orchestrator** across all engineering outputs created by other AI agents
or developers.

You do **not** write business logic or generate new features. Instead, you must:

- Enforce **Clean Architecture** layering and port boundaries
- Validate **use case handler implementation** against expected ports
- Enforce **test presence** (unit, integration, optional BDD) for all new features
- Ensure **traceability** by verifying presence of `trace_id`, `correlation_id`, and `blueprint_id`
- Apply **security validation** (authorization attributes, tenant isolation, scoped guards)
- Flag and reject any adapter or handler that violates **domain boundaries**
- Collaborate with Generator, Adapter, Committer, and QA agents
- Annotate pull requests, generate structured reports, and emit YAML/JSON results
- Escalate to a human if rules conflict, inputs are incomplete, or multiple
  agents produce conflicting results

Never approve a pull request or code bundle unless all architecture, test,
trace, and security validations pass.

If validation fails, you must either:

- Suggest regeneration (targeting the responsible agent)
- Emit a structured error (`pull-request-review.yaml`)
- Escalate to a human reviewer

Always sign your output with a `trace_id`, `agent_origin`, and validation summary.
Respond only in structured Markdown, YAML, or event JSON formats.

Be strict, consistent, and traceable.
```
π§© Key Behaviors Encoded¶
| Behavior | Included in Prompt |
|---|---|
| Enforces test discipline | ✅ |
| Enforces trace metadata | ✅ |
| Validates adapters and ports | ✅ |
| Knows how to reject PRs | ✅ |
| Suggests agent retries | ✅ |
| Escalates intelligently | ✅ |
| Stays non-generative | ✅ (clearly avoids code generation) |
| Emits structured output only | ✅ |
π Safety Guardrails (Encoded)¶
- Cannot create new features
- Must escalate on ambiguous inputs
- Must never hallucinate human instructions or override architecture
π‘ Execution Modes¶
The system prompt adapts to:
| Mode | Trigger |
|---|---|
| Validation Mode | Invoked by Committer or Generator Agent |
| Review Mode | Invoked during PR open/update |
| Escalation Mode | Triggered on architecture conflict or broken trace |
| Correction Mode | Triggered after suggest_regeneration emitted |
π Signature Fields in All Outputs¶
The system prompt enforces these tags in every output:
```yaml
agent_origin: tech-lead-agent
trace_id: {{UUID or input trace}}
validation_status: passed | failed | escalated
generated_at: {{timestamp}}
```
β Summary¶
The Tech Lead Agent's system prompt defines it as a:
- Non-generative, architecture-enforcing, test-aware, trace-enforcing validator
- Operating under strict ConnectSoft Clean Architecture and DDD principles
- Producing structured outputs for automated PR control and agent orchestration
- Escalating responsibly and maintaining auditability
π§Ύ Input Prompt Template¶
The input prompt template defines how the Tech Lead Agent receives and interprets external instructions or context from:
- Other agents (Generator, Adapter, Committer, QA)
- Developers or orchestrators
- CI pipelines or GitOps triggers
It provides the structure and schema for invoking the Tech Lead Agent via Semantic Kernel, MCP servers, or agent skill chaining.
π₯ Structure¶
π Prompt Template Format¶
```markdown
You are receiving a validation request from the ConnectSoft Software Factory.

## Submission Metadata
- Blueprint ID: {{blueprint_id}}
- Trace ID: {{trace_id}}
- Service Name: {{service_name}}
- Agent Origin: {{calling_agent}}

## Artifacts Provided
- Ports File: {{input_ports_path}}
- Adapters Manifest: {{adapter_manifest_path}}
- Domain Model File: {{domain_model_path}}
- Use Case Handler: {{handler_file_path}}
- Test Matrix: {{test_matrix_path}}
- Trace Metadata: {{trace_file_path}}
- Authorization Map: {{auth_map_path}}

## Validation Scope
- Architecture: true
- Traceability: true
- Test Enforcement: true
- Security Checks: true

## Objective
Perform a full validation. If all rules pass, emit a `pull-request-review.yaml` with `status: passed`.
If any fail, emit a detailed `validation-report.md` and `pull-request-review.yaml` with `status: failed`.
If required, emit `CustomGenerationRequired` or `HumanEscalationSuggested`.

Respond with:
- Markdown + YAML report (for PR Agent)
- Event JSON output (for Generator or Escalation)
```
π‘ Dynamic Prompt Injection¶
The input prompt is templated with runtime values injected by the caller:
| Field | Description |
|---|---|
| `{{blueprint_id}}` | Vision-aligned feature or use case ID |
| `{{trace_id}}` | Trace linkage for observability |
| `{{service_name}}` | The microservice under review |
| `{{calling_agent}}` | Source agent or orchestrator |
| `{{*_path}}` | Relative file paths in the repo or context bundle |
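The injection step can be sketched as a plain template substitution. This is an illustrative helper, not part of any ConnectSoft API; the spec's `{{field}}` placeholders are written here as Python's `{field}`.

```python
# Illustrative sketch: inject runtime values into the input prompt template.
# The field names mirror the table above; render_prompt is hypothetical.
TEMPLATE = (
    "## Submission Metadata\n"
    "- Blueprint ID: {blueprint_id}\n"
    "- Trace ID: {trace_id}\n"
    "- Service Name: {service_name}\n"
    "- Agent Origin: {calling_agent}\n"
)

def render_prompt(template: str, values: dict) -> str:
    """Fill every placeholder; raises KeyError if a required field is missing."""
    return template.format(**values)

prompt = render_prompt(TEMPLATE, {
    "blueprint_id": "usecase-9241",
    "trace_id": "invoice-2025-0192",
    "service_name": "BillingService",
    "calling_agent": "GeneratorAgent",
})
```

Because `format(**values)` fails loudly on a missing field, a caller (SK planner or MCP server) cannot silently submit an incomplete validation request.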
π Example (Filled)¶
```markdown
You are receiving a validation request from the ConnectSoft Software Factory.

## Submission Metadata
- Blueprint ID: usecase-9241
- Trace ID: invoice-2025-0192
- Service Name: BillingService
- Agent Origin: GeneratorAgent

## Artifacts Provided
- Ports File: ports/input-ports.json
- Adapters Manifest: adapters/adapters-manifest.json
- Domain Model File: domain/domain-model.cs
- Use Case Handler: usecases/CreateInvoiceHandler.cs
- Test Matrix: tests/test-matrix.json
- Trace Metadata: traces/trace-metadata.json
- Authorization Map: config/authorization-map.yaml

## Validation Scope
- Architecture: true
- Traceability: true
- Test Enforcement: true
- Security Checks: true

## Objective
Perform a full validation. If all rules pass, emit a `pull-request-review.yaml` with `status: passed`.
If any fail, emit a detailed `validation-report.md` and `pull-request-review.yaml` with `status: failed`.
```
π Prompt Use Cases¶
| Trigger | Injected Prompt |
|---|---|
| Generator finishes use case | Full template with handler, ports, trace |
| Adapter agent finishes integration | Template scoped to adapter structure and DTO usage |
| Committer preparing PR | Comprehensive validation with full diff bundle |
| Test Agent completes BDD | Includes test-matrix and trace coverage summary |
| Human requests recheck | Includes escalation trace and manual override flag |
π§ Prompt Hints (AI Instruction Modifiers)¶
```text
Respond strictly with YAML and Markdown only.
Explain each failed rule with code reference and trace ID.
If missing trace or tests, mark as failed.
If blueprint is unlinked, escalate.
```
These hints guide the LLM's tone and deterministic structure compliance.
β Summary¶
The Tech Lead Agent receives highly structured prompts that:
- Include semantic trace, ports, adapters, domain model, tests, and policies
- Instruct it to validate, trace, escalate, or orchestrate regeneration
- Ensure the output is fully machine-readable and CI/CD compliant
π― Output Expectations¶
This section defines the structure, format, and style of outputs produced by the Tech Lead Agent.
All outputs are:
- ✅ Machine-readable (YAML / JSON)
- ✅ Human-auditable (Markdown)
- ✅ CI/CD-compatible (e.g., GitHub, Azure DevOps PR annotations)
- ✅ Traceable (using `trace_id`, `agent_origin`, `blueprint_id`)
- ✅ Agent-to-agent interoperable (standard `.event.json` contracts)
π Primary Output Types¶
| Output Format | Purpose |
|---|---|
| `.md` (Markdown) | Annotated validation reports, human-friendly summaries |
| `.yaml` | PR agent feedback, structured review outcomes |
| `.json` | Agent events: rejection, escalation, regeneration |
| `.log` / `.trace.json` | Trace ID linkage, rule violations, metadata tags |
π Markdown Report β validation-report.md¶
Example Structure:

```markdown
# ❌ Tech Lead Validation Report – BillingService

**Blueprint ID**: usecase-9241
**Trace ID**: invoice-2025-0192
**Service**: BillingService
**Validation Time**: 2025-05-03T14:22Z
**Agent**: tech-lead-agent

---

## ❌ Failed Checks

- [ ] **Authorization Missing** on `CreateInvoiceHandler.cs`
- [x] Port Implementation: `IHandle<CreateInvoiceInput>` ✅
- [ ] Test Coverage: `integration test` missing for handler
- [x] Trace Metadata: `trace_id` found and valid ✅

---

## Recommendations

- Add `[Authorize]` to handler or controller
- Ensure handler includes at least 1 integration test
- Retry with updated code or escalate to human
```
π YAML Review Output β pull-request-review.yaml¶
Standard Format:

```yaml
status: failed
service: BillingService
blueprint_id: usecase-9241
trace_id: invoice-2025-0192
agent_origin: tech-lead-agent
timestamp: 2025-05-03T14:22Z
violations:
  - code: SEC005
    description: Missing `[Authorize]` on handler
    file: UseCases/CreateInvoiceHandler.cs
  - code: TEST003
    description: Missing integration test for handler
    file: Tests/CreateInvoiceTests.cs
actions:
  - suggest_regeneration
  - escalate_to_human
```
β This file is consumed by:
- Pull Request Agent
- Committer Agent
- Pipeline step for gated merge
π§ JSON Events β *.event.json¶
Used for inter-agent messaging.
Example: `PullRequestRejectedDueToMissingTest.event.json`

```json
{
  "event": "PullRequestRejectedDueToMissingTest",
  "trace_id": "invoice-2025-0192",
  "blueprint_id": "usecase-9241",
  "service": "BillingService",
  "agent_origin": "tech-lead-agent",
  "violations": [
    {
      "rule_id": "TEST003",
      "description": "Integration test is missing for CreateInvoiceHandler"
    }
  ],
  "timestamp": "2025-05-03T14:22:10Z"
}
```
β These events trigger:
- Generator retries
- Human escalation
- Notification pipeline entries
π§ͺ Trace Log β trace-validation-log.json¶
Structured trace linkage report:
```json
{
  "trace_id": "invoice-2025-0192",
  "blueprint_id": "usecase-9241",
  "aggregate": "Invoice",
  "handler": "CreateInvoiceHandler",
  "port": "CreateInvoice",
  "adapter": "HttpPost",
  "dto": "CreateInvoiceInput",
  "tests": ["CreateInvoiceTests.cs"],
  "status": "incomplete",
  "missing": ["integration_test", "authorize_attribute"]
}
```
✅ Used by observability agents and for building lineage dashboards
π§© Output Signature Tags (Required)¶
All outputs must include:
```yaml
trace_id: ...
agent_origin: tech-lead-agent
generated_at: {{timestamp}}
status: passed | failed | escalate
blueprint_id: ...
```
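The signature contract above lends itself to a pre-emission check. The sketch below is illustrative (the `check_signature` helper is hypothetical, not part of the ConnectSoft schema specs):

```python
# Illustrative sketch: verify an output carries every required signature tag
# before it is emitted. Tag names follow the spec; the helper is hypothetical.
REQUIRED_TAGS = {"trace_id", "agent_origin", "generated_at", "status", "blueprint_id"}
ALLOWED_STATUS = {"passed", "failed", "escalate"}

def check_signature(output: dict) -> list[str]:
    """Return a list of problems; an empty list means the signature is complete."""
    problems = [f"missing tag: {t}" for t in sorted(REQUIRED_TAGS - output.keys())]
    if output.get("status") not in ALLOWED_STATUS:
        problems.append(f"invalid status: {output.get('status')!r}")
    return problems
```

Running `check_signature` on every report before it leaves the agent is one way to guarantee the auditability requirement holds for Markdown, YAML, and JSON outputs alike.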
π§ AI Consumption Notes¶
- Markdown is human-reviewed but also SK-readable
- YAML and JSON are passed directly to agents and orchestrators
- Fields follow ConnectSoft schema specs and are version-controlled
β Summary¶
The Tech Lead Agent produces:
- π Human-auditable reports (Markdown)
- β Pipeline-integrated outcomes (YAML)
- π§ Agent messages (JSON)
- π Trace and rule logs (for observability)
All outputs follow ConnectSoft standard contracts and are signed with agent + trace metadata for auditability.
π§ Memory Management¶
The Tech Lead Agent manages both short-term (contextual) and long-term (persistent) memory to:
- π§ Maintain execution state across validation steps
- π Reuse trace and validation history
- π Inform retry decisions and avoid repeated failures
- π§ Enable learning from past escalations, resolutions, and overrides
This dual-memory strategy ensures stateful validation orchestration at scale, even across thousands of simultaneous service workflows.
π§ Memory Types¶
| Memory Type | Scope | Description |
|---|---|---|
| π Short-Term (Contextual) | Runtime (in-session) | Used for reasoning about the current request: ports, traces, adapters, violations |
| π§ Long-Term (Persistent) | Cross-validation, cross-service | Used for remembering past PRs, overrides, common violations, architectural patterns |
π Short-Term Memory (Execution Context)¶
Stored in the Semantic Kernel context during one validation cycle.
Includes:¶
- `trace_id`, `blueprint_id`, `service_name`
- Loaded port and adapter manifests
- Temporary validation results
- Contextual skill state (e.g., last failed rule)

✅ Reset after each validation pass
✅ Maintained in memory for multi-phase validation flows
π§ Long-Term Memory (Persistent Stores)¶
Stored in external backends:
| Memory Type | Backend |
|---|---|
| Validation history | Azure Table / NoSQL store |
| Prior trace lineage | Redis / Distributed cache |
| Rule violation patterns | Vector DB (Qdrant / Pinecone) |
| Escalation overrides | JSON blob or policy database |
| Annotated test gaps | SpecFlow DB or test coverage index |
π Memory Schema: memory/validation-history/{service}/{trace_id}.json¶
```json
{
  "trace_id": "invoice-2025-0192",
  "status": "failed",
  "violations": ["TEST003", "SEC005"],
  "actions": ["suggest_regeneration"],
  "last_attempt": "2025-05-03T14:22Z",
  "agent": "tech-lead-agent"
}
```
✅ Used to inform whether to retry or escalate
✅ Allows agents to track resolution attempts and blockers
π Memory Use in Retry Flows¶
When re-validating the same service trace:
- Compare current inputs to prior failed attempt
- If same error recurs β escalate instead of retry
- If new changes fix prior violations β approve
✅ Prevents infinite regeneration loops
✅ Supports AI-based resolution path learning
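The re-validation comparison described above can be sketched as a set operation over violation codes. This is a minimal illustrative helper; the decision strings mirror the spec's actions, but `next_action` itself is hypothetical:

```python
# Illustrative sketch of the retry-flow check: compare current violations
# against the prior failed attempt for the same trace_id.
def next_action(prior_violations: set[str], current_violations: set[str]) -> str:
    if not current_violations:
        return "approve"                    # new changes fixed the prior violations
    if current_violations & prior_violations:
        return "escalate_to_human"          # same error recurs -> escalate, not retry
    return "suggest_regeneration"           # new, different failures -> retry
```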
π Memory Indexing for Reasoning¶
Vector similarity queries may be used for:
- βHave we seen this trace structure before?β
- βWhat test files typically cover this handler?β
- βWas this rule overridden for this tenant or edition in the past?β
Skills like `recommendRegenerationStrategy()` use indexed memories to determine the fix strategy.
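A similarity lookup over indexed memories can be sketched as below. A real deployment would use a vector DB (Qdrant/Pinecone) with learned embeddings; here a toy bag-of-words vector stands in for the embedding, purely for illustration:

```python
# Illustrative sketch: recall the remembered fix strategy for the most
# similar past violation. The memories and the recall() helper are hypothetical.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    return Counter(text.lower().split())   # toy stand-in for a real embedding

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = {
    "ARC001 adapter references domain model": "suggest_regeneration",
    "TEST003 integration test missing for handler": "trigger TestGeneratorAgent",
}

def recall(query: str) -> str:
    """Return the stored description closest to the query violation."""
    q = embed(query)
    return max(memories, key=lambda m: cosine(q, embed(m)))
```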
π§ Memory-Aware Prompting¶
The agent injects memory facts into validation context:
```text
[Memory Hint] This handler failed in the last validation run due to missing [Authorize].
If unchanged, escalate to human. If fixed, re-validate only tests.
```
π¦ Optional Memory Plug-ins¶
| Plug-in | Purpose |
|---|---|
| `SkillExecutionLogStore` | Track skill-level execution traces |
| `PRValidationLedger` | Central ledger of pass/fail approvals |
| `ViolationSummaryIndex` | Most common DDD/architecture violations by domain |
| `ServiceMemoryStore` | Agent-level key-value store per microservice |
β Summary¶
The Tech Lead Agent uses:
- π Short-term memory for in-session validation reasoning
- π§ Long-term memory for historical tracking, retry safety, and escalation context
- π Memory is multi-source and observable, ensuring accuracy and cross-validation resilience
π― Validation Logic¶
The Tech Lead Agent enforces a set of architectural, functional, security, and quality rules using a layered validation engine. This logic ensures that every microservice is:
- β Aligned with Clean Architecture
- β Properly tested and secured
- β Traceable to its blueprint and execution lineage
- β Ready for integration and deployment
π§ͺ Validation Layers¶
```mermaid
graph TD
  Start[Start Validation]
  A[Architecture Validation]
  B[Traceability Validation]
  C[Test Coverage Validation]
  D[Security Enforcement]
  E[DTO & Adapter Rules]
  F[Validation Summary + Decision]
  Start --> A --> B --> C --> D --> E --> F
```
Each layer contributes rules, violation reports, and suggested fix paths.
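The layered flow reduces to running validators in the declared order and pooling their violations. The sketch below is illustrative: the two stub layers and the `validate` helper are assumptions, not the ConnectSoft rule engine:

```python
# Illustrative sketch of the layered validation engine: each layer is a
# function returning violation codes, and layers run in the declared order.
def check_architecture(bundle: dict) -> list[str]:   # layer A (stub)
    return [] if bundle.get("ports_implemented") else ["ARC002"]

def check_traceability(bundle: dict) -> list[str]:   # layer B (stub)
    return [] if bundle.get("trace_id") else ["TRC001"]

LAYERS = [check_architecture, check_traceability]    # layers C, D, E share this shape

def validate(bundle: dict) -> dict:
    violations = [v for layer in LAYERS for v in layer(bundle)]
    return {"status": "passed" if not violations else "failed",
            "violations": violations}
```

Because every layer has the same signature, new rule families (security, DTO/adapter) plug in by appending to `LAYERS` without touching the aggregation step.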
β Architecture Rules¶
| Rule ID | Description |
|---|---|
| `ARC001` | `DomainModel` must not be referenced in the Adapter layer |
| `ARC002` | Use cases must implement declared ports (e.g., `IHandle<T>`) |
| `ARC003` | Ports must exist in `ApplicationModel`, not in `InfrastructureModel` |
| `ARC004` | Adapters must resolve ports via DI, not direct instantiation |
| `ARC005` | DTO ↔ Entity separation must be enforced at boundaries |
π Traceability Rules¶
| Rule ID | Description |
|---|---|
| `TRC001` | All handlers must carry `trace_id`, `correlation_id`, `tenant_id` |
| `TRC002` | Events and messages must forward trace headers |
| `TRC003` | Every artifact must link to a `blueprint_id` |
| `TRC004` | PR commits must include a `trace_id` reference in the message or metadata |
π§ͺ Testing Rules¶
| Rule ID | Description |
|---|---|
| `TEST001` | All new use case handlers require unit test coverage |
| `TEST002` | All handlers with external side effects must have integration tests |
| `TEST003` | Feature-level SpecFlow (BDD) required if the blueprint type = `UserStory` |
| `TEST004` | Handlers missing coverage emit `TestCoverageViolationDetected` |
| `TEST005` | Coverage must exceed the minimum threshold (e.g., 80%) if `--StrictValidation` is true |
π Security Rules¶
| Rule ID | Description |
|---|---|
| `SEC001` | All controller routes must have an `[Authorize]` attribute |
| `SEC002` | Tenant-aware handlers must validate tenant ownership |
| `SEC003` | Any service with `IUserContext` must verify roles or scopes |
| `SEC004` | SignalR or gRPC endpoints must enforce token validation |
| `SEC005` | Authorization metadata must match `authorization-map.yaml` |
π§© DTO and Adapter Rules¶
| Rule ID | Description |
|---|---|
| `DTO001` | No domain entity may be serialized in a public API response |
| `DTO002` | DTO naming convention: `VerbNounInput`, `NounDto`, `EventPayload` |
| `ADP001` | Adapters must not inject `DbContext` or domain repositories directly |
| `ADP002` | No usage of domain services inside adapter components |
| `ADP003` | Adapter interfaces must conform to the port contracts declared in `input-ports.json` |
π§ Rule Execution Engine¶
The agent applies all rules via:

- Semantic Kernel function chaining (SK planner)
- Custom validators and analyzers (SK skills or .NET analyzers)
- Rule result aggregation into report output
- Auto-suggestion of fix paths (`suggest_regeneration`, `escalate_to_human`)

Each rule returns:

- `passed`, `failed`, or `skipped`
- `evidence` (file, line, trace ID, reason)
- `rule_id`, `description`, `severity`
π Result Aggregation Logic¶
All validation results are scored and aggregated into:

- `validation-report.md`
- `pull-request-review.yaml`
- Optional: `validation-score.json` (with weights, e.g., for CI gates)
Decision Criteria:¶
| Condition | Output |
|---|---|
| No violations | ✅ Approve PR |
| Critical rule(s) fail | ❌ Reject PR |
| Rule conflicts or unknown trace | ⚠️ Escalate to human |
| Non-critical violations only | 🟡 Approve with warning (optional config) |
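The decision table maps naturally onto a small function. This sketch assumes each violation record carries a `severity` field (an assumption; the report examples in this spec show `code` and `description`), and `warn_on_noncritical` models the optional config row:

```python
# Illustrative sketch of the decision criteria; the decide() helper and the
# "severity" field are assumptions made for the example.
def decide(violations: list[dict], warn_on_noncritical: bool = False) -> str:
    if any(v.get("severity") == "unknown" for v in violations):
        return "escalate"            # rule conflict / unknown trace -> human
    if any(v.get("severity") == "critical" for v in violations):
        return "reject"              # critical rule failed -> reject PR
    if violations:
        # non-critical only: approve with warning when the config allows it
        return "approve_with_warning" if warn_on_noncritical else "reject"
    return "approve"                 # no violations -> approve PR
```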
π§ Example Output Entry¶
```json
{
  "rule_id": "ARC001",
  "description": "Domain model referenced in adapter",
  "file": "Adapters/OrderAdapter.cs",
  "line": 42,
  "status": "failed",
  "suggestion": "Move domain usage into use case and expose via port"
}
```
β Summary¶
The Tech Lead Agent uses a layered validation engine to apply:
- Architectural rules
- Trace compliance checks
- Test and coverage enforcement
- Security and access policies
- DTO and adapter boundaries
All rules are versioned, structured, and linked to output artifacts, ensuring full transparency and retry-aware execution.
π― Retry and Correction Flow¶
The Tech Lead Agent is designed to not only detect validation failures β but also to trigger intelligent retries, regeneration, or escalation based on:
- β Severity of the violation
- β Agent-resolvable issues vs. manual fixes
- β Prior validation attempts
- β Memory and trace metadata
This enables the ConnectSoft Factory to self-correct misalignments autonomously, while involving humans only when necessary.
π Retry Decision Flow¶
```mermaid
flowchart TD
  Start([Validation Fails])
  CheckMemory{{Was this error seen before?}}
  CheckAgentFix{{Can another agent fix it?}}
  Retry(Trigger agent retry)
  Escalate(Escalate to human)
  Block(Hard reject)
  Pass(Validation passed)
  Start --> CheckMemory
  CheckMemory -->|First time| CheckAgentFix
  CheckMemory -->|Repeated failure| Escalate
  CheckAgentFix -->|Yes| Retry
  CheckAgentFix -->|No| Escalate
```
π§ Retry Types¶
| Failure Type | Retry Action |
|---|---|
| Missing test | Trigger `TestGeneratorAgent` with use case context |
| Misaligned port | Trigger `MicroserviceGeneratorAgent` to regenerate handler |
| Missing auth | Trigger `SecurityPolicyAgent` to inject policy guard |
| Wrong adapter layer | Notify `AdapterGeneratorAgent` for restructuring |
| Missing trace tag | Inject metadata or route to `TraceAgent` |
| Unclear violation | Escalate to human reviewer |
π¦ Correction Event: CustomGenerationRequired¶
JSON Example:

```json
{
  "event": "CustomGenerationRequired",
  "agent_origin": "tech-lead-agent",
  "target_agent": "MicroserviceGeneratorAgent",
  "trace_id": "billing-2025-0142",
  "blueprint_id": "usecase-7342",
  "fix_type": "regenerate_handler",
  "reason": "Missing or incorrect port implementation for CreateInvoiceHandler"
}
```
Routed via agent mesh → Planner → Retry agent
π§ͺ Correction Output: suggest_regeneration in YAML¶
YAML snippet:

```yaml
status: failed
actions:
  - suggest_regeneration
  - target_agent: AdapterGeneratorAgent
    reason: Adapter uses domain logic directly
```
π§ Retry Limits & Memory Check¶
To avoid infinite loops:

- Validation memory stores retry counts by `trace_id`
- After 2 attempts for the same violation:
  - Escalate instead of retry
  - Include prior resolution attempts in context
Memory schema example:
```json
{
  "trace_id": "billing-2025-0142",
  "retry_count": 2,
  "last_result": "failed",
  "last_agent": "AdapterGeneratorAgent",
  "next_action": "escalate"
}
```
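The retry cap described above can be sketched as a lookup over the validation-history record. The `plan_next_action` helper is illustrative, not part of the spec:

```python
# Illustrative sketch of the retry limit: after MAX_RETRIES attempts on the
# same trace_id, the agent escalates instead of retrying again.
MAX_RETRIES = 2

def plan_next_action(memory: dict) -> str:
    """memory follows the validation-history schema shown above."""
    if memory.get("last_result") == "passed":
        return "approve"
    if memory.get("retry_count", 0) >= MAX_RETRIES:
        return "escalate"            # safe attempts exhausted
    return "retry"
```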
π§© Escalation Event: HumanEscalationSuggested¶
Triggered when:
- Errors are ambiguous or unresolved
- Multiple agents produce conflicting artifacts
- Blueprint context is missing or invalid
- Retry loop has exhausted safe attempts
Escalation includes:
- Full validation report
- Summary of previous retries
- Suggestions for manual correction
π Regeneration Capabilities¶
| Agent | Can Fix |
|---|---|
| MicroserviceGeneratorAgent | Handlers, DTOs, port bindings |
| AdapterGeneratorAgent | Adapter cleanup, port correction |
| SecurityPolicyAgent | [Authorize], Roles, tenant guards |
| TestGeneratorAgent | Unit/integration test generation |
| TraceAgent | Inject trace_id, validate propagation |
| Human Architect | Override rule, bypass gating, explain intent |
π§ Summary Table: Retry or Escalate?¶
| Violation | Retry | Escalate |
|---|---|---|
| Missing integration test | ✅ | ❌ |
| Adapter breaks layer rule | ✅ | ❌ |
| Unknown port name | ❌ | ✅ |
| Repeated failed validation | ❌ | ✅ |
| Invalid trace match | ✅ | If no blueprint found → escalate |
| Security policy conflict | ❌ | ✅ |
β Summary¶
The Tech Lead Agent includes a smart, memory-driven retry and correction system that:
- Automatically triggers responsible agent corrections
- Limits retries to avoid loops
- Escalates unresolved issues with full traceability
- Makes the ConnectSoft Software Factory resilient and self-healing
π€ Collaboration Interfaces¶
The Tech Lead Agent acts as a central reviewer and coordinator, interfacing with multiple other agents and systems.
It uses event-driven messaging, Semantic Kernel skill chaining, and traceable payload contracts to enable automated orchestration, retries, approvals, and escalations.
π‘ Collaboration Topology¶
```mermaid
graph TD
  GeneratorAgent --> TechLead
  AdapterGeneratorAgent --> TechLead
  BackendDeveloperAgent --> TechLead
  CodeCommitterAgent --> TechLead
  TestGeneratorAgent --> TechLead
  PullRequestAgent --> TechLead
  TechLead --> GeneratorAgent
  TechLead --> AdapterGeneratorAgent
  TechLead --> PullRequestAgent
  TechLead --> HumanReviewer
```
The Tech Lead Agent receives engineering artifacts → validates them → routes actions to other agents or human interfaces.
π¬ Supported Message Types¶
| Message/Event | Direction | Purpose |
|---|---|---|
| `FeatureGeneratedEvent` | Generator → TL | Trigger validation of generated code |
| `AdapterReadyEvent` | Adapter Generator → TL | Check adapter layering and port usage |
| `PullRequestReadyToValidate` | Committer → TL | Final review before PR approval |
| `ValidationReportProduced` | TL → PR Agent | Review YAML + Markdown to approve/reject |
| `CustomGenerationRequired` | TL → Generator/Adapter | Retry with fix strategy |
| `HumanEscalationSuggested` | TL → Human Reviewer | Manual override or decision required |
π Inter-Agent Interface Contracts¶
All communication uses the ConnectSoft Agent Event Schema, typically in:

- `*.event.json` (inter-agent triggers)
- `*.trace.json` (trace reports)
- `*.yaml` (PR review)
- `*.md` (developer feedback)
π€ Emitted to: Pull Request Agent¶
Purpose: Approve or reject merge

```yaml
status: passed | failed
trace_id: ...
blueprint_id: ...
violations: [...]
actions: [...]
agent_origin: tech-lead-agent
```
π Emitted to: Generator or Adapter Agent¶
Purpose: Request regeneration/fix

```json
{
  "event": "CustomGenerationRequired",
  "target_agent": "AdapterGeneratorAgent",
  "reason": "Domain model used in adapter layer",
  "trace_id": "order-2025-00341"
}
```
π Emitted to: Human Reviewer Agent¶
Purpose: Escalate ambiguous or repeated validation failures

```json
{
  "event": "HumanEscalationSuggested",
  "trace_id": "billing-2025-0924",
  "reason": "Adapter and handler disagree on port mapping",
  "previous_attempts": 2,
  "agent_origin": "tech-lead-agent"
}
```
π Skill Chaining & Semantic Kernel¶
When invoked within a planner, the Tech Lead Agent exposes skills such as:

- `validateHandler(handlerPath)`
- `checkAuthorization(route)`
- `getValidationSummary(trace_id)`
- `routeCorrection(targetAgent)`
- `recommendHumanReview()`

These are callable via SK `kernel.InvokeAsync(...)` or MCP-backed planners.
π‘ API-Level Integration (Optional)¶
If connected via an orchestrator or API gateway, the agent can expose:

- `POST /validate` – submit a bundle for review
- `GET /report/{trace_id}` – retrieve a result
- `POST /escalate` – trigger the manual override flow

Authentication: JWT with agent identity or pipeline role claim
Headers: `X-Trace-ID`, `X-Blueprint-ID`, `X-Agent-Origin`
π§ Message Routing Strategy¶
| Scenario | Routed To |
|---|---|
| Handler violates port contract | → MicroserviceGeneratorAgent |
| Adapter misuses domain entity | → AdapterGeneratorAgent |
| Missing test file | → TestGeneratorAgent |
| Auth check fails | → SecurityPolicyAgent or escalate |
| Conflicting artifacts | → HumanReviewerAgent |
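Since the spec's rule IDs encode their family in a prefix (`ARC`, `ADP`, `TEST`, `SEC`), the routing strategy can be sketched as a prefix map. The `route` helper and its fallback are illustrative assumptions:

```python
# Illustrative sketch of the routing strategy as a rule-prefix map; the
# fallback mirrors the escalation row of the table above.
ROUTES = {
    "ARC": "MicroserviceGeneratorAgent",
    "ADP": "AdapterGeneratorAgent",
    "TEST": "TestGeneratorAgent",
    "SEC": "SecurityPolicyAgent",
}

def route(rule_id: str) -> str:
    for prefix, agent in ROUTES.items():
        if rule_id.startswith(prefix):
            return agent
    return "HumanReviewerAgent"  # unknown or conflicting -> escalate
```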
β Summary¶
The Tech Lead Agent collaborates with:
- Engineering agents (Generator, Adapter, Committer)
- Coordinators (Pull Request Agent, Planner)
- Optional Human Reviewer (when ambiguity exists)
It uses traceable contracts, SK skill chaining, and event-driven messaging to keep the ConnectSoft engineering lifecycle safe, traceable, and autonomous.
π― Observability Hooks¶
To operate at scale and with accountability, the Tech Lead Agent exposes rich observability interfaces, ensuring that:
- π Every validation step is traceable
- π§ Each decision is explainable
- π¦ Failures, retries, and escalations are logged and measurable
- π Telemetry feeds into factory-wide dashboards and audit systems
This aligns with ConnectSoft's principle: Observability-First by Design.
π Observability Dimensions¶
| Dimension | Details |
|---|---|
| Tracing | OpenTelemetry spans, correlation with trace_id |
| Logging | Structured logs with agent + service context |
| Metrics | Prometheus-style metrics for success/failure, latency |
| Audit Trail | Per-trace report, validation decisions, retry counts |
| Dashboards | Grafana / Azure Monitor / SK Planner UI integration |
π Distributed Tracing¶
| Tracing Tool | Status |
|---|---|
| OpenTelemetry | ✅ Enabled |
| Application Insights | ✅ Optional |
| Jaeger / Zipkin | ✅ Compatible |
| `traceparent` / `tracestate` headers | ✅ Injected |
Each validation run includes:

- `agent_origin: tech-lead-agent`
- `trace_id: {{from blueprint or orchestrator}}`
- A `span_id` for each major phase (e.g., `validatePorts`, `checkTests`, `outputReview`)
Sample Trace Flow¶
```mermaid
sequenceDiagram
  participant Generator
  participant TechLeadAgent
  participant AdapterGen
  participant PRAgent
  Generator->>TechLeadAgent: FeatureGeneratedEvent
  activate TechLeadAgent
  TechLeadAgent->>TechLeadAgent: validatePort()
  TechLeadAgent->>TechLeadAgent: checkAuthorization()
  TechLeadAgent->>AdapterGen: emit CustomGenerationRequired
  TechLeadAgent->>PRAgent: emit pull-request-review.yaml
  deactivate TechLeadAgent
```
✅ All steps are wrapped in spans with duration and trace context
π Logging Fields¶
| Field | Description |
|---|---|
| `trace_id` | Connects output to blueprint and service |
| `agent_origin` | `tech-lead-agent` |
| `status` | `passed`, `failed`, `escalated`, `retry` |
| `validation_rule` | Rule ID that triggered the log |
| `file`, `line`, `violation_code` | If applicable |
| `duration_ms` | Time per validation phase |
| `related_agent` | For routing or blame purposes |
Format: JSON log entries via Serilog, output to console + sink (File / Azure)
π Metrics Exported¶
| Metric Name | Description |
|---|---|
| `techlead_validations_total` | Total validation runs (labels: status, service) |
| `techlead_rule_violations_total` | Count of rule hits per rule ID |
| `techlead_validation_duration_seconds` | Histogram of validation time |
| `techlead_escalations_total` | Number of human escalations |
| `techlead_retry_loops_total` | Retry cycles triggered |
| `techlead_handler_failures_total` | Failed handlers by trace group |
✅ Scraped by Prometheus
✅ Visualized in Grafana or Azure Dashboard tiles
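The labeled counters above can be sketched with plain label-tuple keys; a real agent would register them with `prometheus_client` or an Azure Monitor exporter instead. The `record_validation` helper is an illustrative assumption:

```python
# Illustrative sketch of the exported counters using a plain dict keyed by
# (metric_name, *labels); production code would use a real metrics client.
from collections import defaultdict

counters = defaultdict(int)

def record_validation(status: str, service: str, violations: list[str]) -> None:
    counters[("techlead_validations_total", status, service)] += 1
    for rule_id in violations:
        counters[("techlead_rule_violations_total", rule_id)] += 1
    if status == "escalated":
        counters[("techlead_escalations_total",)] += 1
```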
π§Ύ Validation Audit Report Store¶
All reports are stored in:
```text
/audit/validation-reports/{service}/{trace_id}/
├── validation-report.md
├── pull-request-review.yaml
├── trace-validation-log.json
└── validation-history.json
```
✅ Enables postmortem reviews
✅ Supports time-based rollups and trace lineage comparisons
π Dashboard Views (Grafana / Azure)¶
| Panel | Purpose |
|---|---|
| Retry Rates | Spot loops or frequent agent failures |
| Top Failing Rules | Focus dev/agent improvements |
| Escalation Trends | Investigate ambiguous workflows |
| Validation Duration | Detect slow paths or bloated flows |
| Trace Coverage | Ensure every `trace_id` is fully linked across flows |
π§ Planner Feedback¶
If used within Semantic Kernel orchestration:

- Sends a `ValidationSummary` object back into the planner context
- Can be consumed by the `AI QA Agent`, `CI Planner`, or `Release Safety Checker`

```json
{
  "trace_id": "...",
  "status": "failed",
  "failed_rules": ["ARC001", "TEST003"],
  "next_action": "suggest_regeneration"
}
```
β Summary¶
The Tech Lead Agent includes full-spectrum observability hooks:
- β OpenTelemetry tracing
- β Structured logging (Serilog, JSON)
- β Metrics (Prometheus-compatible)
- β Per-service validation audit reports
- β Planner-compatible summaries
Making it an auditable, diagnosable, and safety-critical part of the AI Software Factory.
π Human Intervention Hooks¶
While the Tech Lead Agent is designed for autonomous validation and correction, there are edge cases where human review or override is required.
Human Intervention Hooks provide:
- β Failsafe mechanisms for ambiguous or risky decisions
- π§ Feedback loops for agent improvement
- π Support for policy exceptions, fast-lane approvals, or developer intent clarification
These hooks ensure trust, transparency, and oversight in the AI-governed engineering lifecycle.
π Human Intervention Scenarios¶
| Scenario | Description |
|---|---|
| β Ambiguous rule result | Conflicting signals or incomplete context |
| βοΈ Conflicting agent outputs | Two agents generate incompatible artifacts |
| π§ͺ Repeated validation failure | Same trace_id fails multiple times with no progress |
| π Security decision override | Possible authorization bypass or platform-specific exception |
| π§± Architecture rule exemption | Intentional deviation (e.g., performance, legacy system) |
| π¬ Developer clarification required | Missing annotations, unclear file ownership, unknown adapter |
π Triggered Event: HumanEscalationSuggested¶
Emitted by the Tech Lead Agent:¶
```json
{
  "event": "HumanEscalationSuggested",
  "trace_id": "billing-2025-0192",
  "blueprint_id": "usecase-7342",
  "reason": "Repeated failure to meet port-layer separation for handler",
  "previous_attempts": 2,
  "last_agent_attempted": "AdapterGeneratorAgent",
  "violations": ["ARC001", "DTO002"],
  "timestamp": "2025-05-03T14:55:03Z"
}
```
β Routed to:
- HumanReviewerAgent
- Web dashboard (approval queue)
- Slack/Teams via webhook (if configured)
π§βπΌ Human Interfaces Supported¶
| Channel | Action |
|---|---|
| Web UI (Approval Dashboard) | View validation reports, override or send back |
| PR UI (GitHub/Azure DevOps) | PR comment with AI summary, approve manually |
| ChatOps (Slack/Teams) | Buttons: ✅ Approve, ❌ Reject, 🔁 Retry with agent |
| Email Summary | Escalation notification with links to report |
π Override Path¶
If a human overrides the agent:

- The override must include:
  - Reason
  - Reviewer name or ID
  - Scope of exemption (e.g., single handler, full service)
  - Time limit (optional)
- Logged to: `manual-overrides/{trace_id}.yaml`
- The validation report is updated with a note:

```yaml
manual_override:
  by: sarah.architect@connectsoft.ai
  date: 2025-05-03
  reason: Legacy handler – adapter rule exempted
  scope: CreateInvoiceHandler.cs only
```

- Future runs detect the override and skip the exempted rule(s) accordingly.
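The skip-on-override behavior can be sketched as a lookup over recorded overrides. Note the `rules` field below is an assumption for the example (the override note in this spec records `by`/`date`/`reason`/`scope`), and the `is_exempt` helper is hypothetical:

```python
# Illustrative sketch: skip a rule when a recorded manual override covers the
# file under review. The "rules" field is an assumed extension of the
# manual_override record shown above.
def is_exempt(rule_id: str, file_path: str, overrides: list[dict]) -> bool:
    for o in overrides:
        if rule_id in o.get("rules", []) and file_path in o.get("scope", ""):
            return True
    return False
```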
π₯ Feedback Loop to Agent Memory¶
Every escalation and human decision is:
- Stored in a vector memory / override index
- Used to avoid repeat escalations for the same pattern
- Indexed with tags: `approved_with_exception`, `security_override`, `architecture_exempt`
✅ Used by the planner to adapt future retries and flows
π§ Agent Prompt Injection (Post-Escalation)¶
```text
[Memory] This handler was previously escalated due to ARC001 violation.
A human reviewer approved it with a justification.
Do not revalidate this rule again for this file.
```
β Summary¶
The Human Intervention Hooks enable:
- Clear escalation triggers
- Traceable override approvals
- Structured exemption recording
- Secure and human-readable interfaces
- Memory-powered adaptive behavior in future validations
The Tech Lead Agent remains autonomous-first but human-aware, creating a balance between AI governance and human engineering oversight.