
🧠 Tech Lead Agent Specification

The Tech Lead Agent is the technical reviewer and orchestrator for everything that flows through the engineering execution phase. It does not generate code; it audits, aligns, and validates everything generated by engineering agents such as Backend Developers, Code Committers, Generator Agents, Adapter Creators, and Infra Builders.

At scale, where thousands of microservices are autonomously created, the Tech Lead Agent is the AI-native guardian of quality, traceability, and architecture compliance.

πŸ›‘οΈ Think of this agent as the AI-powered technical lead on every feature, every PR, every service.


🎯 Purpose

The Tech Lead Agent enforces architectural, domain, security, and quality standards across the engineering lifecycle.

Its primary goals:

  • ✅ Ensure all code, adapters, handlers, and APIs adhere to Clean Architecture and DDD boundaries
  • ✅ Validate the presence of required tests, trace metadata, ports, and layering contracts
  • ✅ Confirm alignment between declared service architecture and actual engineering output
  • ✅ Enforce traceability between Vision → Model → Design → Code → Commit
  • ✅ Provide pre-merge agent-driven pull request reviews, rejections, or approvals

🧠 The Tech Lead Agent is not optional; it is critical for enforcing production-grade quality in an autonomous software factory.


🧠 Role Summary

| Role | Description |
| --- | --- |
| 🛠️ Engineering Enforcer | Reviews all engineering agent outputs for contract and architecture compliance |
| 🔍 Semantic Validator | Ensures correctness of adapter usage, DTO mappings, ports, and pipelines |
| 🔁 Flow Orchestrator | Coordinates handoff and integration between multiple agents |
| 📦 Traceability Guardian | Injects and verifies `trace_id`, `context_id`, and `tenant_id` metadata |
| 📤 Final Reviewer | Emits PR approvals, rejections, or suggestions to the Pull Request Agent |

🧩 Why This Agent Exists

In traditional teams, a human tech lead must:

  • Review dozens of services, features, and PRs
  • Understand Clean Architecture, DDD, layering, adapter boundaries, traceability
  • Maintain test discipline, cross-team consistency, and security

💡 In the ConnectSoft Factory, this would require hundreds of human reviewers. Instead, the Tech Lead Agent performs these reviews in seconds, across entire stacks, for every feature.


🧬 Example Responsibilities

| Trigger | Tech Lead Agent Action |
| --- | --- |
| ✅ Use case handler created | Verify it implements the correct input port and aligns with the domain |
| 🧪 New feature with no test | Reject PR and emit `MissingTestCoverageEvent` |
| 📭 Event published but not traced | Raise `TraceIdMissingViolation` |
| 🔐 Controller lacks `[Authorize]` | Inject security policy or escalate to human architect |
| 🧩 Handler bypasses application model | Reject and regenerate handler via Generator Agent |
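For illustration, a rejection such as `MissingTestCoverageEvent` might be emitted in the factory's `.event.json` format. The field names below mirror the event examples later in this spec; the exact schema is owned by the ConnectSoft agent event contract:

```json
{
  "event": "MissingTestCoverageEvent",
  "source": "TechLeadAgent",
  "artifact": "UseCases/PlaceOrderHandler.cs",
  "trace_id": "order-2025-00013",
  "blueprint_id": "usecase-3872",
  "reason": "missing_test",
  "actions": ["suggest_regeneration"]
}
```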

🧠 High-Level Collaboration Map

```mermaid
flowchart LR
    VisionAgent --> ArchAgent
    ArchAgent --> AppArch
    AppArch --> MicroserviceGenerator
    MicroserviceGenerator --> BackendDev
    BackendDev --> TechLeadAgent
    AdapterGen --> TechLeadAgent
    CodeCommitter --> TechLeadAgent
    TechLeadAgent --> PullRequestAgent
```

✅ Positioned at the final checkpoint of engineering
✅ Blocks low-quality or misaligned output
✅ Coordinates approval or rework across agents


🧾 Outputs This Agent Emits

| Output | Purpose |
| --- | --- |
| `validation-report.md` | Markdown document with annotated pass/fail results |
| `pull-request-review.yaml` | YAML summary of compliance checks for PR integration |
| `trace-validation-log.json` | Metadata trace across handler, adapter, DTO, and ports |
| `rejection-event.json` | Explanation of failure and instruction for auto-regeneration |
| `HumanEscalationSuggested` | Trigger for human tech lead override or intervention |

πŸ” Example Flow (Visualized)

```mermaid
sequenceDiagram
    participant Generator
    participant BackendDev
    participant TechLead
    participant CodeCommitter
    participant PRReviewer

    Generator->>BackendDev: Scaffold use case
    BackendDev->>TechLead: Submit feature for validation
    TechLead->>TechLead: Validate ports, adapters, tests, trace
    TechLead-->>CodeCommitter: Submit Approved Artifact
    TechLead-->>PRReviewer: Annotated Pull Request
```

✅ Summary

  • The Tech Lead Agent is the final validator of engineering output in the software factory
  • It enforces:
    • Clean Architecture
    • Traceability
    • Test discipline
    • Port-to-adapter integrity
  • It collaborates across:
    • Generator, Backend, Adapter, Infra, and Committer Agents
  • It acts as the AI-native tech governance layer at scale

📋 Core Responsibilities

The Tech Lead Agent is responsible for enforcing technical correctness, architectural integrity, traceability, and code quality across all engineering outputs before integration, commit, or release.

Below is a breakdown of its structured responsibilities, organized by category:


🔧 1. Architecture & Layering Compliance

| Responsibility | Description |
| --- | --- |
| 🔍 Validate Clean Architecture layering | Ensure domain logic is not accessed directly from infrastructure or the API layer |
| 🔍 Enforce port-driven design | Verify that use case handlers implement declared input/output ports |
| 🧩 Match adapters to their designated layer | Confirm that adapter components are isolated and follow the correct dependency direction |
| 📤 Ensure DTO and entity segregation | Disallow direct exposure of domain entities through service models |

🧪 2. Test Discipline & Quality Gates

| Responsibility | Description |
| --- | --- |
| ✅ Enforce test coverage threshold | Require all features to include unit and/or integration tests |
| 📊 Validate test types | Ensure domain, adapter, application, and messaging tests exist |
| 📄 Attach test summary to PR output | Generate a test matrix and highlight gaps or missing coverage |
| ❌ Reject untested features | Fail validation for handlers, endpoints, or services with missing tests |

📭 3. Traceability Enforcement

| Responsibility | Description |
| --- | --- |
| 📌 Validate presence of `trace_id`, `correlation_id`, and `tenant_id` | Required in all controllers, handlers, adapters, and messages |
| 🔍 Match service metadata to vision flow | Ensure correct service ↔ use case ↔ entity ↔ vision ID alignment |
| 🧠 Check trace headers across HTTP, message bus, and actor messages | Confirm propagation of observability context |
| 📂 Link PR and commit to blueprint | Ensure commits link back to the declared use case or blueprint item |

πŸ” 4. Security & Policy Compliance

Responsibility Description
πŸ›‘ Check [Authorize] on sensitive endpoints Validate role and scope enforcement
πŸ— Validate authentication injection into use cases Ensure user identity flows into service layer
πŸ“› Detect missing tenant guards Prevent cross-tenant data leakage via policy checks
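As a sketch of what this check looks for, a compliant endpoint would carry both an authorization attribute and a tenant guard. The controller, handler, and claim names below are illustrative, not part of the spec:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/orders")]
public class OrderController : ControllerBase
{
    private readonly IPlaceOrderHandler _handler; // hypothetical input port

    public OrderController(IPlaceOrderHandler handler) => _handler = handler;

    // SEC005: secure endpoints must carry [Authorize] with explicit role/scope
    [Authorize(Roles = "OrderManager")]
    [HttpPost]
    public async Task<IActionResult> PlaceOrder(PlaceOrderInput input)
    {
        // Tenant guard: reject requests whose tenant does not match the caller's claim
        if (input.TenantId != User.FindFirst("tenant_id")?.Value)
            return Forbid();

        return Ok(await _handler.HandleAsync(input));
    }
}
```

An endpoint missing either the attribute or the tenant check would be flagged under this category.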

πŸ” 5. Output Integration and Handoff

Responsibility Description
πŸ“¦ Emit validation artifacts Markdown, YAML, and JSON output describing validation result
🀝 Send results to Pull Request Agent Approve or reject PRs with attached evidence
πŸ” Trigger retries with Generator Agent If handler, adapter, or port is misaligned
πŸ™‹ Escalate to human reviewer If blocking error is ambiguous or policy exception needed

🧠 6. Inter-Agent Orchestration

| Actor | Interaction |
| --- | --- |
| Microservice Generator | Checks that all input/output ports are implemented |
| Adapter Generator | Validates adapter/component wiring integrity |
| Backend Developer Agent | Reviews generated/edited code before approval |
| Code Committer Agent | Provides final merge clearance or rejection |
| Application Architect Agent | Alerts if architectural violations are found |

🧩 Responsibility Matrix

```mermaid
graph TD
  A[Port & Layer Validation] --> TL(Tech Lead)
  B[Test Coverage Review] --> TL
  C[Security Policy Enforcement] --> TL
  D[Trace & Context Enforcement] --> TL
  E[Pull Request Gatekeeping] --> TL
  F[Failure Recovery Orchestration] --> TL
```

✅ Every responsibility is traceable, observable, and connected to a policy rule


πŸ” Examples of Responsibilities in Action

| Scenario | Action Taken |
| --- | --- |
| A use case has no test | Tech Lead blocks the commit with `MissingTestError` |
| An adapter references domain entities directly | Tech Lead triggers the generator to fix adapter layering |
| Trace metadata is missing from a message consumer | Tech Lead injects the trace ID and flags an observability failure |
| Handler is not linked to a declared port | Tech Lead regenerates the handler or blocks the PR |

✅ Summary

The Tech Lead Agent acts as the governing layer of software engineering, with responsibilities covering:

  • Architectural correctness
  • Code quality and test enforcement
  • Security policy adherence
  • Observability compliance
  • Agent integration governance
  • Developer and committer coordination

📥 Inputs

The Tech Lead Agent operates on a rich set of declarative, semantic, and structured inputs provided by both upstream agents and the ConnectSoft platform.

These inputs allow the agent to validate, enforce, orchestrate, and correct engineering outputs based on architecture contracts, code artifacts, and pipeline metadata.


🔧 Input Categories

| Category | Description |
| --- | --- |
| 📄 Design Contracts | Port definitions, service model specs, domain models |
| 🧪 Test Artifacts | Test result manifests and test coverage metadata |
| 🔀 Trace Metadata | Trace ID, correlation ID, tenant ID tags |
| 🧱 Code and Layer Output | Handlers, controllers, adapters, DTOs |
| 🧠 Agent Events | Messages or files from other agents |
| 🔎 Pull Request Context | Metadata and diff previews from the PR Creator Agent |

📂 Structured Input Files

| File | Source Agent | Description |
| --- | --- | --- |
| `input-ports.json` | Application Architect Agent | Defines expected input ports and interface names |
| `output-ports.json` | App Architect Agent | Expected output adapters and handlers |
| `domain-model.cs` | Domain Modeler Agent | Source of entities, aggregates, value objects |
| `use-case-map.yaml` | Microservice Generator | Blueprint-to-use-case mapping |
| `test-matrix.json` | Backend Developer / QA Agent | Coverage by handler, service, and port |
| `adapters-manifest.json` | Adapter Generator Agent | All generated or planned adapter bindings |
| `authorization-map.yaml` | Security Policy Agent | Expected auth decorators per route or use case |
| `trace-metadata.json` | Observability Agent / runtime output | Traceability records for all service components |
| `pull-request-diff.md` | PR Creator Agent | Annotated code delta and feature traceability info |

🧠 Semantic Metadata from Agents

| Source | Metadata Received |
| --- | --- |
| Generator Agent | `FeatureGeneratedEvent` with file paths and identifiers |
| Adapter Agent | `AdapterReady` with interface, input/output, and layer tag |
| Committer Agent | `PreCommitReadyToReview` trigger |
| QA Agent | `TestCoverageStatus` including thresholds and skipped tests |
| Bot/Skill Agents | Optional skill-to-use-case mappings (if AI flows are involved) |

✅ All inputs use the ConnectSoft agent event schema (`.event.json`, `.map.yaml`, or `.trace.json` formats)


🧭 Example Input: input-ports.json

```json
[
  {
    "name": "PlaceOrder",
    "interface": "IPlaceOrderHandler",
    "expectedType": "PlaceOrderInput",
    "module": "ApplicationModel"
  }
]
```

🧪 Example Input: test-matrix.json

```json
{
  "PlaceOrderHandler": {
    "unit": true,
    "integration": true,
    "bdd": false,
    "coverage": 0.78
  }
}
```

📤 Example Agent Trigger

```json
{
  "event": "FeatureGeneratedEvent",
  "source": "MicroserviceGeneratorAgent",
  "artifact": "UseCases/PlaceOrderHandler.cs",
  "trace_id": "order-2025-00013",
  "blueprint_id": "usecase-3872"
}
```

✅ Used to begin the validation flow
✅ Connects the code artifact to its blueprint and trace ID


🔎 Inputs from GitOps / PR Context

| Type | Description |
| --- | --- |
| `diff-preview.md` | Changes from branch vs. main |
| `pr-manifest.yaml` | Feature ID, use cases, modules affected |
| `ci-metadata.json` | Build ID, timestamp, source commit, test run references |

✅ Used to enrich the validation report and generate PR annotations
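A minimal `pr-manifest.yaml` might look like the following. The keys are illustrative, modeled on the field descriptions above; the actual schema is owned by the PR Creator Agent:

```yaml
feature_id: usecase-3872
use_cases:
  - PlaceOrder
modules_affected:
  - ApplicationModel
  - Adapters.Http
trace_id: order-2025-00341
```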


🧠 AI-Aware Inputs

  • All inputs are machine- and agent-readable
  • Semantic identifiers (blueprint_id, trace_id, port_name, adapter_type) are normalized
  • Traces are enriched across services (e.g., handler → adapter → DTO → trace)

✅ Summary

The Tech Lead Agent receives inputs from:

  • 🧱 Blueprint-based design documents
  • 🧪 Test coverage metrics and QA artifacts
  • 🔌 Adapter and generator outputs
  • 📦 Git/PR integration layer
  • 🧠 Semantic metadata and trace flows

All of which form the basis for technical validation, traceability enforcement, and PR governance.


📤 Outputs

The Tech Lead Agent produces a diverse and structured set of validation outputs, enforcement reports, and collaboration events that drive downstream decisions in:

  • ✅ Pull Request validation
  • ✅ CI/CD gating
  • ✅ Agent orchestration
  • ✅ Developer feedback loops
  • ✅ Human escalation workflows

Each output is machine-readable, traceable, and linked back to the blueprint, trace ID, and context that triggered it.


📦 Output Types

| Type | Purpose |
| --- | --- |
| 📄 Reports | Markdown and YAML artifacts for agent, human, and pipeline consumption |
| 🧠 Event Messages | JSON-based triggers, approvals, or rejection explanations |
| 📎 Trace Metadata | Structured output linking source ➝ port ➝ DTO ➝ handler |
| 🧪 Validation Artifacts | Explicit pass/fail metadata per rule |
| 👥 Escalation Signals | Human-intervention triggers with embedded reasoning |
| 🔁 Regeneration Requests | Signals to Generator or Adapter agents for corrective action |

📄 Report Outputs

✅ `validation-report.md`

A structured report with status summaries, component breakdowns, and links to affected files:

```markdown
# ✅ Tech Lead Validation Report

## 📂 Affected Component: PlaceOrderHandler.cs
- ✅ Input port implemented correctly
- ✅ DTO matches declared type
- ❌ No `[Authorize]` attribute found
- ❌ Missing integration test

## Recommendation:
Block PR and trigger GeneratorAgent retry.

Trace ID: `order-2025-00341`
Blueprint ID: `usecase-3872`
```

📑 `pull-request-review.yaml`

Used by the PR Agent to annotate and approve/reject:

```yaml
status: rejected
reason: security_policy_missing
affected_files:
  - Controllers/OrderController.cs
  - UseCases/PlaceOrderHandler.cs
actions:
  - suggest_regeneration
  - escalate_to_human
trace_id: order-2025-00341
```

🧠 Event Outputs

| Event | Description |
| --- | --- |
| `PullRequestValidated` | Sent to the PR Agent on pass |
| `PullRequestRejectedDueToMissingTrace` | Triggered when `trace_id` is absent |
| `TestCoverageViolationDetected` | Indicates a test matrix threshold breach |
| `ArchitectureViolationDetected` | Ports, layering, or DTO misuse |
| `HumanEscalationSuggested` | Notifies that a human reviewer is needed |
| `CustomGenerationRequired` | Instructs the Generator Agent to retry a component |
| `EnforcementPassed` | Marks all structural and trace rules satisfied |

✅ These events are sent as `.event.json` and are MCP-compatible
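For example, an `ArchitectureViolationDetected` event might look like this. The payload shape is illustrative, modeled on the other event examples in this spec, and the rule ID references the static rules listed in the Knowledge Base section:

```json
{
  "event": "ArchitectureViolationDetected",
  "source": "TechLeadAgent",
  "rule": "ARC001",
  "artifact": "Adapters/OrderHttpAdapter.cs",
  "reason": "invalid_layer",
  "trace_id": "order-2025-00341",
  "actions": ["suggest_regeneration"]
}
```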


🔄 `trace-validation-log.json`

Captures trace linkage:

```json
{
  "trace_id": "order-2025-00341",
  "aggregate": "Order",
  "handler": "PlaceOrderHandler",
  "adapter": "HttpAdapter",
  "ports": ["PlaceOrder"],
  "dto": "PlaceOrderInput",
  "layer": "ApplicationModel",
  "status": "complete"
}
```

✅ Used by Observability and Generator agents


πŸ” Corrective Signals to Agents

Recipient Trigger
Microservice Generator Agent If handler is invalid or missing
Adapter Generator Agent If adapter references domain layer
Backend Developer Agent If manual edits break validation
Code Committer Agent If PR is unsafe to merge
Human Architect If multi-agent conflict requires manual intervention

📊 Validation Result Tags (Included in all outputs)

| Tag | Purpose |
| --- | --- |
| `status` | `passed`, `failed`, `escalate`, `retry` |
| `trace_id` | Uniquely links the validation to a blueprint feature |
| `reason` | Short code or phrase (e.g., `missing_test`, `invalid_layer`) |
| `affected_files` | List of changed files |
| `actions` | Recommended next steps (`retry`, `escalate`, `fix`, `approve`) |
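Taken together, the tag block embedded in each output might read as follows (values illustrative, reusing the trace ID from the report examples above):

```yaml
status: failed
trace_id: order-2025-00341
reason: missing_test
affected_files:
  - UseCases/PlaceOrderHandler.cs
actions:
  - retry
```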

✅ Summary

The Tech Lead Agent produces structured, actionable outputs that:

  • ✅ Drive PR approval and CI/CD policy gates
  • ✅ Power downstream regeneration or fix workflows
  • ✅ Capture traceability and architectural alignment
  • ✅ Alert humans when needed, with full context

All outputs are multi-agent compatible, Git-integrated, and semantically traceable across the ConnectSoft Factory.


🧠 Knowledge Base

The Tech Lead Agent operates with an extensive preloaded knowledge base, combining ConnectSoft's architectural contracts, platform standards, and multi-agent collaboration rules.

This foundational knowledge enables the agent to autonomously:

  • ✅ Validate architectural boundaries
  • ✅ Enforce layering and cross-cutting concerns
  • ✅ Interpret trace metadata and feature blueprints
  • ✅ Cross-link agent artifacts using semantic tags
  • ✅ Apply Clean Architecture and DDD design policies

📚 Core Knowledge Domains

| Domain | Description |
| --- | --- |
| Clean Architecture | Layering rules, allowed dependencies, boundary contracts |
| Domain-Driven Design | Entity/value object structure, aggregate roots, use case patterns |
| Microservice Standards | DTOs, service models, handler conventions, port mappings |
| Testing Strategy | Unit, integration, BDD, architecture test requirements |
| Observability Requirements | Trace ID, tenant ID, correlation ID propagation |
| Security Policy | Role, scope, and tenant guards for service endpoints |
| Traceability Contracts | From blueprint ID ➝ handler ➝ commit ➝ pull request |

📦 Knowledge Artifacts (Preloaded)

| Artifact | Description |
| --- | --- |
| `clean-architecture.md` | ConnectSoft layering rules and violations |
| `domain-driven-design.md` | Aggregate, entity, and event structure documentation |
| `microservice-template.md` | Standard solution structure and agent expectations |
| `testing-strategy.md` | Required test coverage and output formats |
| `traceability-spec.md` | Required tags and headers in messages and handlers |
| `service-structure.yaml` | Mapping of typical input/output ports and modules |
| `agent-collaboration-patterns.md` | Handshake patterns and data contracts between engineering agents |
| `agent-microservice-standard-blueprint.md` | Canonical rules for how blueprints map to code artifacts |
✅ These documents are part of its immutable system prompt or loaded at runtime from version-controlled knowledge repositories.


πŸ” Semantic Knowledge Graph

The Tech Lead Agent internally models a graph of:

Blueprint β†’ Use Case β†’ Port β†’ Handler β†’ Test β†’ Trace β†’ Commit

This allows it to:

  • Detect orphaned handlers
  • Spot missing ports
  • Map commit β†’ trace β†’ blueprint
  • Validate PR-to-domain alignment

🧩 Static Rules Embedded

| Rule ID | Description |
| --- | --- |
| ARC001 | DomainModel must not be referenced in the Adapter layer |
| TEST003 | Each use case handler must include unit and integration tests |
| SEC005 | Secure endpoints must have `[Authorize]` or an equivalent guard |
| OBS007 | All outgoing messages must propagate `trace_id` and `correlation_id` |
| DTO002 | Domain entities must not be returned directly via service models |

✅ These rules are checked via a traceable rule engine built into the Semantic Kernel planner
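A rule like ARC001 can also be expressed with a static architecture-test library. The sketch below uses NetArchTest (mentioned later in this spec) as one possible backend; the assembly marker type and namespace names are illustrative, not defined by the spec:

```csharp
using NetArchTest.Rules;
using Xunit;

public class Arc001Tests
{
    // ARC001: the Adapter layer must not reference the DomainModel namespace.
    [Fact]
    public void Adapters_DoNotDependOn_DomainModel()
    {
        var result = Types
            .InAssembly(typeof(AdapterAssemblyMarker).Assembly) // hypothetical marker type
            .That().ResideInNamespace("ConnectSoft.Adapters")
            .ShouldNot().HaveDependencyOn("ConnectSoft.DomainModel")
            .GetResult();

        Assert.True(result.IsSuccessful);
    }
}
```

Failures from such a test map directly onto the rule IDs above, which keeps every violation traceable to a named policy.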


📚 Learning from History

The agent may also consume:

| Source | Learning |
| --- | --- |
| `previous-validation-reports/` | Example failures and resolution history |
| `agent-output-archive/` | Patterns of regeneration or escalation |
| `commit-lint.json` | Common anti-patterns across services |
| `human-review-feedback/` | Escalation outcomes and acceptance decisions |

These form part of long-term memory to improve retry strategies and reduce false positives.


🧠 Ontologies and Schema Understanding

The agent supports:

  • ✅ JSON Schema for ports, adapters, trace validation
  • ✅ YAML config structures for deployment awareness
  • ✅ Markdown and inline code extraction for architecture validation

It also understands C# Roslyn AST structures for semantic analysis of code input (via MCP-backed parsing).


✅ Summary

The Tech Lead Agent operates with a rich, structured, and versioned knowledge base that includes:

  • Clean Architecture rules
  • DDD principles
  • Agent interaction contracts
  • Observability, test, and trace enforcement logic

All of which form the basis for autonomous, explainable validation and agent orchestration.


🔄 Process Flow

The Tech Lead Agent follows a deterministic, multi-step validation and orchestration pipeline. This internal flow allows it to:

  • ✅ Receive new engineering artifacts
  • ✅ Validate alignment with architecture, test, and trace rules
  • ✅ Enforce blocking policies or retry via other agents
  • ✅ Collaborate with PR and commit agents for approvals or rejections

🧬 End-to-End Flow Diagram

```mermaid
flowchart TD
  Start([Receive Artifact Bundle or Agent Trigger])
  ParseInputs[[Parse inputs: ports, trace, tests, code]]
  ValidateArchitecture[[Enforce layering, DDD, Clean Architecture]]
  ValidateTraceability[[Check trace_id, correlation_id, blueprint linkage]]
  ValidateTests[[Check for unit, integration, BDD test presence]]
  CheckSecurity[[Ensure auth policies and guards]]
  ValidateOutput[[Aggregate result into report]]
  Decision{Pass or Fail?}
  OnPass[[Emit pull-request-review.yaml + validation-report.md]]
  OnFail[[Emit rejection event + suggest fix]]
  TriggerNext[[Notify PR Agent or Generator Agent]]
  Escalate{Escalate to human?}

  Start --> ParseInputs
  ParseInputs --> ValidateArchitecture
  ValidateArchitecture --> ValidateTraceability
  ValidateTraceability --> ValidateTests
  ValidateTests --> CheckSecurity
  CheckSecurity --> ValidateOutput
  ValidateOutput --> Decision
  Decision -->|✅ Pass| OnPass
  Decision -->|❌ Fail| OnFail
  OnFail --> Escalate
  OnPass --> TriggerNext
  Escalate --> TriggerNext
```

🧠 Internal Stages

1. Input Parsing

  • Load input-ports.json, domain-model.cs, adapters-manifest.json
  • Normalize metadata from file paths, diff previews, blueprint IDs
  • Attach semantic context (e.g., trace_id, tenant_id)

2. Architecture Validation

  • Apply rules from clean-architecture.md
  • Enforce boundaries:
    • No adapter → domain references
    • No service model → repository coupling
  • Match handler to port definition
  • Validate use case input/output type signatures

3. Traceability Checks

  • Ensure presence and propagation of:
    • trace_id
    • correlation_id
    • tenant_id
  • Match source code to blueprint trace
  • Validate observability decorators for gRPC/REST/messaging endpoints
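Concretely, the propagation check might expect metadata like the following on an inbound HTTP call. The header names reuse the metadata fields above; the exact header casing and transport mapping are assumptions, not defined by this spec:

```
POST /api/orders HTTP/1.1
Content-Type: application/json
trace_id: order-2025-00341
correlation_id: corr-7781
tenant_id: tenant-acme
```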

4. Test Compliance Validation

  • Load test-matrix.json or recent test report
  • Ensure unit + integration test exist for each new handler
  • Optional: ensure BDD .feature file matches use case
  • Detect untested adapters or DTO-to-entity mappings

5. Security Enforcement

  • Scan for [Authorize], Roles, and custom IAuthorizationPolicyProvider usage
  • Check SecurePort flags and tenant-aware guards
  • Validate that Bot and SK skills don't bypass auth (if present)

6. Output Aggregation

  • Compile pass/fail state
  • Annotate reasons (rule IDs, file locations, context)
  • Build:
    • validation-report.md
    • pull-request-review.yaml
    • Optional trace-validation-log.json

7. Routing and Orchestration

  • If passed:
    • Send to Pull Request Agent for merge gate
  • If failed:
    • Generate rejection-event.json
    • Optionally send retry signal to Generator or Adapter Agent
  • If ambiguous:
    • Emit HumanEscalationSuggested with context

πŸ” Retry Loop Path

If code can be regenerated:

  • Emit CustomGenerationRequired
  • Specify which part: handler, adapter, DTO, or test
  • Include reference trace ID and rule violations
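A regeneration request of this kind might be shaped as follows. The fields mirror the spec's other event examples; the `target` and `component` values are illustrative:

```json
{
  "event": "CustomGenerationRequired",
  "source": "TechLeadAgent",
  "target": "MicroserviceGeneratorAgent",
  "component": "handler",
  "artifact": "UseCases/PlaceOrderHandler.cs",
  "violations": ["TEST003"],
  "trace_id": "order-2025-00341"
}
```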

✅ This enables a self-healing microservice development loop


🧠 Context Propagation

During flow, the agent injects and maintains:

| Field | Source |
| --- | --- |
| `trace_id` | Blueprint or vision source |
| `execution_id` | Internal validation instance |
| `agent_origin` | `tech-lead-agent` |
| `decision_scope` | `validation`, `pr-approval`, `regeneration` |
| `related_agents` | Participating upstream agents (for blame/routing) |

✅ Summary

The Tech Lead Agent executes a precise validation pipeline, enforcing rules across:

  • Architecture
  • Observability
  • Testing
  • Security
  • Agent traceability

Every validation run is traceable, explainable, and recoverable, with optional self-correction or escalation paths.


🧠 Skills and Kernel Functions

The Tech Lead Agent is built on Semantic Kernel (SK) and integrates multiple domain-specific skills, validation plugins, and custom planning logic.

These capabilities allow it to analyze, enforce, trace, and correct engineering artifacts, autonomously and at scale.


βš™οΈ Key Categories of Skills

Category Purpose
πŸ“¦ Validation Skills Run semantic and architectural checks
πŸ“­ Traceability Skills Verify linkages across trace_id, blueprint_id, ports
πŸ§ͺ Testing Skills Ensure test coverage and test types
πŸ” Security Skills Validate presence of guards and secure patterns
πŸ” Corrective Planning Suggest fixes or auto-trigger regeneration
🧠 Meta-Coordination Interact with other agents and orchestrators

📚 Kernel Functions

The following functions are registered in the agent's Semantic Kernel configuration:

📦 `validation.*`

| Function | Description |
| --- | --- |
| `validateArchitectureCompliance()` | Parses solution structure and detects layering violations |
| `validatePortImplementation()` | Matches input/output ports to handlers and adapters |
| `validateDTOIsolation()` | Ensures DTOs don't directly expose domain objects |

📭 `trace.*`

| Function | Description |
| --- | --- |
| `verifyTracePropagation()` | Confirms presence of `trace_id`, `correlation_id`, `tenant_id` |
| `traceBlueprintToHandler()` | Traces the path from `blueprint_id` to handler, test, and commit |
| `checkHandlerTraceMatch()` | Ensures the handler's trace metadata matches the expected blueprint |

🧪 `test.*`

| Function | Description |
| --- | --- |
| `checkTestCoverage()` | Validates test types per use case or adapter |
| `assertTestThreshold(minCoverage)` | Blocks if the minimum test coverage is not met |
| `mapTestToHandler()` | Links test files to a handler or adapter via naming and trace hints |

πŸ” security.*

Function Description
verifyAuthorization() Checks for [Authorize] or equivalent in secure routes
ensureRoleGuarding() Ensures roles are explicitly declared for each endpoint
checkTenantIsolation() Validates tenant checks are enforced in handlers and adapters

πŸ” correct.*

Function Description
suggestHandlerFix() Determines regeneration candidate for invalid use case handlers
recommendAdapterRewrite() Suggests Adapter Agent retry for incorrect adapter implementation
emitCorrectionEvent(target) Generates CustomGenerationRequired for another agent

🤖 `agent.*`

| Function | Description |
| --- | --- |
| `routeToAgent(agentType)` | Delegates to another agent or planner |
| `emitEscalationToHuman()` | Triggers an escalation event for ambiguous or blocked validations |
| `logValidationSummary()` | Logs structured metrics and sends an event summary to the trace store |

🧩 Plug-in Integration (Optional)

The Tech Lead Agent can load:

  • πŸ“ Git structure plugins (e.g., file classifiers, layer matchers)
  • 🧠 Code parsing (via MCP servers with Roslyn AST or SourceKit)
  • πŸ§ͺ Unit test analyzers (for xUnit, MSTest, SpecFlow)
  • πŸ“„ Doc parsers (for matching Markdown to YAML or code)
  • πŸ” Static analyzers (optional rules from Roslyn/NetArchTest)

🧠 Sample Skill Chain: Handler Validation

```yaml
plan:
  - validatePortImplementation
  - validateDTOIsolation
  - verifyTracePropagation
  - checkTestCoverage
  - verifyAuthorization
  - emitCorrectionEvent if error
```

✅ Can be attached to:

  • PR creation
  • Service generation
  • Agent retry loop
  • Manual override triggers

✅ Summary

The Tech Lead Agent is equipped with a rich set of Semantic Kernel functions and ConnectSoft-native skills to:

  • Analyze and enforce Clean Architecture
  • Trace blueprint ➝ code ➝ tests
  • Enforce quality and test discipline
  • Trigger agent correction or human review

All functions are modular, pluggable, and observable, ensuring transparent, autonomous governance.


🧰 Technology Stack Overview

The Tech Lead Agent is a cloud-native, agentic AI validator, built on top of the ConnectSoft platform using a combination of:

  • 🧠 AI orchestration (Semantic Kernel, OpenAI/Azure OpenAI)
  • ☁️ Cloud-native infrastructure (Azure Functions, Key Vault, App Config)
  • 🧬 Agent interoperability protocols (MCP servers, YAML/JSON events)
  • 📦 .NET Core ecosystem (code parsing, test validation, architectural analysis)

Its runtime is optimized for high-frequency validation across thousands of services, with secure, observable, and testable execution.


🔧 Core Technologies

| Technology | Role |
| --- | --- |
| Semantic Kernel | Planner, skill registry, function chaining, context orchestration |
| Azure OpenAI | LLM-backed completions, reasoning over structure and history |
| Azure Functions / Containers | Stateless, scalable execution of validation workflows |
| .NET Core (8.0) | Code analysis, test validation, static rules |
| MCP Servers | Parsing, semantic linking, AST generation |
| GitHub / Azure DevOps APIs | PR annotations, commit hooks, trace linking |
| Pulumi (optional) | Provisioning secrets, app configs, or Key Vault |
| Redis or Azure Cache | Short-term caching of blueprint ➝ trace ➝ PR links |
| Vector DB (e.g., Qdrant, Pinecone) | Long-term context for prior decisions and escalation outcomes |
| Serilog + OpenTelemetry | Logging, metrics, and distributed trace propagation |

🧬 Model Support

| Model | Purpose |
| --- | --- |
| gpt-4 / gpt-4o | Complex decision reasoning (architecture, trace matching) |
| gpt-3.5-turbo | Lightweight planning and skill invocation |
| Azure OpenAI | Enterprise-secure model access (preferred) |
| Custom Plugins | Code parsing, repo summarization, validation enrichment |
Models are selected based on prompt size, skill depth, and urgency level.


🌐 Model Context Protocol (MCP)

The agent uses MCP servers for:

| Action | MCP Server |
| --- | --- |
| C# AST parsing | `roslyn-server` |
| Code diff analysis | `diff-server` |
| Event tracing | `trace-router-server` |
| Diagram generation | `diagram-server` (for visual validation reports) |

✅ All MCP responses are structured for SK plugin invocation


πŸ” Security and Secrets

Component Tech
Authentication Managed Identity or Service Principal
Secrets Azure Key Vault or .env via pipeline
Role enforcement App Config or Azure AD scopes
Agent signature JWT with trace_id + hash to identify agent output authenticity
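As an illustration of the agent-signature idea, a signed output token's payload might carry claims like these. The claim names and hash field are hypothetical, not mandated by this spec:

```json
{
  "iss": "tech-lead-agent",
  "trace_id": "order-2025-00341",
  "artifact_hash": "sha256:<hash-of-emitted-artifact>",
  "iat": 1735689600
}
```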

📊 Observability Stack

| Type | Tool |
| --- | --- |
| Logs | Serilog to file + console |
| Metrics | Prometheus-exported or Azure Monitor counters |
| Traces | OpenTelemetry spans for each validation pass |
| Dashboards | Grafana or Azure Dashboards (agent lifecycle views) |

✅ Every validation step is logged, traced, and mapped to a `trace_id`


🤖 Multi-Agent Mesh Compatibility

  • All outputs conform to ConnectSoft Agent Event Contract
  • Interoperable with:
    • Generator Agent
    • Adapter Agent
    • Application Architect
    • Pull Request Agent
    • Human Feedback Agent

Formats: `.event.json`, `.trace.yaml`, `.review.md`


✅ Summary

The Tech Lead Agent runs on a modern, cloud-native, AI-augmented platform using:

  • Semantic Kernel as its brain
  • OpenAI/Azure OpenAI as its reasoning core
  • .NET Core + MCP Servers for deterministic rule enforcement
  • Azure and GitHub for infrastructure, secret, and deployment automation
  • OpenTelemetry for full observability and traceability

🧠 System Prompt

The system prompt is the foundational instruction that bootstraps the Tech Lead Agent's identity, behavior, and ruleset within the Semantic Kernel.

It encodes:

  • The agent's responsibilities
  • The standards it enforces
  • The scope of validation
  • The tone of communication
  • The expected response behavior in success, failure, or escalation

📜 System Prompt (Full)

```markdown
You are the **Tech Lead Agent** in the ConnectSoft AI Software Factory.

Your responsibility is to act as the **technical validator, reviewer, and orchestrator** across all engineering outputs created by other AI agents or developers.

You do **not** write business logic or generate new features. Instead, you must:

✅ Enforce **Clean Architecture** layering and port boundaries
✅ Validate **use case handler implementation** against expected ports
✅ Enforce **test presence** (unit, integration, optional BDD) for all new features
✅ Ensure **traceability** by verifying presence of `trace_id`, `correlation_id`, and `blueprint_id`
✅ Apply **security validation** (authorization attributes, tenant isolation, scoped guards)
✅ Flag and reject any adapter or handler that violates **domain boundaries**
✅ Collaborate with Generator, Adapter, Committer, and QA agents
✅ Annotate pull requests, generate structured reports, and emit YAML/JSON results
✅ Escalate to a human if rules conflict, inputs are incomplete, or multiple agents produce conflicting results

Never approve a pull request or code bundle unless all architecture, test, trace, and security validations pass.

If validation fails, you must either:
- Suggest regeneration (targeting the responsible agent)
- Emit a structured error (`pull-request-review.yaml`)
- Escalate to a human reviewer

Always sign your output with a `trace_id`, `agent_origin`, and validation summary.
Respond only in structured Markdown, YAML, or event JSON formats.

Be strict, consistent, and traceable.
```

🧩 Key Behaviors Encoded

| Behavior | Included in Prompt |
| --- | --- |
| Enforces test discipline | βœ… |
| Enforces trace metadata | βœ… |
| Validates adapters and ports | βœ… |
| Knows how to reject PRs | βœ… |
| Suggests agent retries | βœ… |
| Escalates intelligently | βœ… |
| Stays non-generative | βœ… (clearly avoids code generation) |
| Emits structured output only | βœ… |

πŸ” Safety Guardrails (Encoded)

  • πŸ›‘ Cannot create new features
  • πŸ” Must escalate on ambiguous inputs
  • πŸ’¬ Must never hallucinate human instructions or override architecture

πŸ’‘ Execution Modes

The system prompt adapts to:

| Mode | Trigger |
| --- | --- |
| Validation Mode | Invoked by Committer or Generator Agent |
| Review Mode | Invoked during PR open/update |
| Escalation Mode | Triggered on architecture conflict or broken trace |
| Correction Mode | Triggered after `suggest_regeneration` is emitted |

πŸ”Ž Signature Fields in All Outputs

The system prompt enforces these tags in every output:

```yaml
agent_origin: tech-lead-agent
trace_id: {{UUID or input trace}}
validation_status: passed | failed | escalated
generated_at: {{timestamp}}
```

βœ… Summary

The Tech Lead Agent’s system prompt defines it as a:

  • Non-generative, architecture-enforcing, test-aware, trace-enforcing validator
  • Operating under strict ConnectSoft Clean Architecture and DDD principles
  • Producing structured outputs for automated PR control and agent orchestration
  • Escalating responsibly and maintaining auditability

🧾 Input Prompt Template

The input prompt template defines how the Tech Lead Agent receives and interprets external instructions or context from:

  • Other agents (Generator, Adapter, Committer, QA)
  • Developers or orchestrators
  • CI pipelines or GitOps triggers

It provides the structure and schema for invoking the Tech Lead Agent via Semantic Kernel, MCP servers, or agent skill chaining.


πŸ“₯ Structure

πŸ“„ Prompt Template Format

```markdown
You are receiving a validation request from the ConnectSoft Software Factory.

## πŸ“¦ Submission Metadata
- Blueprint ID: {{blueprint_id}}
- Trace ID: {{trace_id}}
- Service Name: {{service_name}}
- Agent Origin: {{calling_agent}}

## πŸ“ Artifacts Provided
- Ports File: {{input_ports_path}}
- Adapters Manifest: {{adapter_manifest_path}}
- Domain Model File: {{domain_model_path}}
- Use Case Handler: {{handler_file_path}}
- Test Matrix: {{test_matrix_path}}
- Trace Metadata: {{trace_file_path}}
- Authorization Map: {{auth_map_path}}

## πŸ§ͺ Validation Scope
- Architecture: true
- Traceability: true
- Test Enforcement: true
- Security Checks: true

## 🎯 Objective
Perform a full validation. If all rules pass, emit a `pull-request-review.yaml` with `status: passed`.
If any fail, emit a detailed `validation-report.md` and `pull-request-review.yaml` with `status: failed`.
If required, emit `CustomGenerationRequired` or `HumanEscalationSuggested`.

Respond with:
- Markdown + YAML report (for PR Agent)
- Event JSON output (for Generator or Escalation)
```

πŸ’‘ Dynamic Prompt Injection

The input prompt is templated with runtime values injected by the caller:

| Field | Description |
| --- | --- |
| `{{blueprint_id}}` | Vision-aligned feature or use case ID |
| `{{trace_id}}` | Trace linkage for observability |
| `{{service_name}}` | The microservice under review |
| `{{calling_agent}}` | Source agent or orchestrator |
| `{{*_path}}` | Relative file paths in the repo or context bundle |
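As an illustrative sketch of how a caller might inject these runtime values (the `render_prompt` helper and its handling of unknown fields are assumptions, not part of any ConnectSoft API):

```python
import re

def render_prompt(template: str, values: dict) -> str:
    # Replace each {{field}} placeholder with the caller-supplied runtime value.
    # Unknown placeholders are left intact so downstream validation can flag them.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(values.get(m.group(1), m.group(0))),
        template,
    )

filled = render_prompt(
    "- Blueprint ID: {{blueprint_id}}\n- Trace ID: {{trace_id}}",
    {"blueprint_id": "usecase-9241", "trace_id": "invoice-2025-0192"},
)
```

Leaving unresolved placeholders in place (rather than substituting an empty string) makes missing caller context easy to detect before validation begins.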

πŸ“Ž Example (Filled)

```markdown
You are receiving a validation request from the ConnectSoft Software Factory.

## πŸ“¦ Submission Metadata
- Blueprint ID: usecase-9241
- Trace ID: invoice-2025-0192
- Service Name: BillingService
- Agent Origin: GeneratorAgent

## πŸ“ Artifacts Provided
- Ports File: ports/input-ports.json
- Adapters Manifest: adapters/adapters-manifest.json
- Domain Model File: domain/domain-model.cs
- Use Case Handler: usecases/CreateInvoiceHandler.cs
- Test Matrix: tests/test-matrix.json
- Trace Metadata: traces/trace-metadata.json
- Authorization Map: config/authorization-map.yaml

## πŸ§ͺ Validation Scope
- Architecture: true
- Traceability: true
- Test Enforcement: true
- Security Checks: true

## 🎯 Objective
Perform a full validation. If all rules pass, emit a `pull-request-review.yaml` with `status: passed`.
If any fail, emit a detailed `validation-report.md` and `pull-request-review.yaml` with `status: failed`.
```

πŸ” Prompt Use Cases

| Trigger | Injected Prompt |
| --- | --- |
| Generator finishes use case | Full template with handler, ports, trace |
| Adapter agent finishes integration | Template scoped to adapter structure and DTO usage |
| Committer preparing PR | Comprehensive validation with full diff bundle |
| Test Agent completes BDD | Includes test matrix and trace coverage summary |
| Human requests recheck | Includes escalation trace and manual override flag |

🧠 Prompt Hints (AI Instruction Modifiers)

  • Respond strictly with YAML and Markdown only.
  • Explain each failed rule with code reference and trace ID.
  • If missing trace or tests, mark as failed.
  • If blueprint is unlinked, escalate.

These hints steer the LLM's tone and enforce deterministic, structure-compliant output.


βœ… Summary

The Tech Lead Agent receives highly structured prompts that:

  • Include semantic trace, ports, adapters, domain model, tests, and policies
  • Instruct it to validate, trace, escalate, or orchestrate regeneration
  • Ensure the output is fully machine-readable and CI/CD compliant

🎯 Output Expectations

This section defines the structure, format, and style of outputs produced by the Tech Lead Agent.

All outputs are:

  • βœ… Machine-readable (YAML / JSON)
  • βœ… Human-auditable (Markdown)
  • βœ… CI/CD-compatible (e.g., GitHub, Azure DevOps PR annotations)
  • βœ… Traceable (using trace_id, agent_origin, blueprint_id)
  • βœ… Agent-to-agent interoperable (standard .event.json contracts)

πŸ“‚ Primary Output Types

| Output Format | Purpose |
| --- | --- |
| `.md` (Markdown) | Annotated validation reports, human-friendly summaries |
| `.yaml` | PR agent feedback, structured review outcomes |
| `.json` | Agent events: rejection, escalation, regeneration |
| `.log` / `.trace.json` | Trace ID linkage, rule violations, metadata tags |

πŸ“„ Markdown Report β€” validation-report.md

Example Structure:

```markdown
# ❌ Tech Lead Validation Report β€” BillingService

**Blueprint ID**: usecase-9241
**Trace ID**: invoice-2025-0192
**Service**: BillingService
**Validation Time**: 2025-05-03T14:22Z
**Agent**: tech-lead-agent

---

## ❗ Failed Checks

- [ ] **Authorization Missing** on `CreateInvoiceHandler.cs`
- [x] Port Implementation: `IHandle<CreateInvoiceInput>` β†’ βœ…
- [ ] Test Coverage: `integration test` missing for handler
- [x] Trace Metadata: `trace_id` found and valid β†’ βœ…

---

## πŸ”§ Recommendations

- Add `[Authorize]` to handler or controller
- Ensure handler includes at least 1 integration test
- Retry with updated code or escalate to human
```

πŸ“‘ YAML Review Output β€” pull-request-review.yaml

Standard Format:

```yaml
status: failed
service: BillingService
blueprint_id: usecase-9241
trace_id: invoice-2025-0192
agent_origin: tech-lead-agent
timestamp: 2025-05-03T14:22Z

violations:
  - code: SEC005
    description: Missing `[Authorize]` on handler
    file: UseCases/CreateInvoiceHandler.cs
  - code: TEST003
    description: Missing integration test for handler
    file: Tests/CreateInvoiceTests.cs

actions:
  - suggest_regeneration
  - escalate_to_human
```

βœ… This file is consumed by:

  • Pull Request Agent
  • Committer Agent
  • Pipeline step for gated merge

🧠 JSON Events β€” *.event.json

Used for inter-agent messaging.

Example: PullRequestRejectedDueToMissingTest.event.json

```json
{
  "event": "PullRequestRejectedDueToMissingTest",
  "trace_id": "invoice-2025-0192",
  "blueprint_id": "usecase-9241",
  "service": "BillingService",
  "agent_origin": "tech-lead-agent",
  "violations": [
    {
      "rule_id": "TEST003",
      "description": "Integration test is missing for CreateInvoiceHandler"
    }
  ],
  "timestamp": "2025-05-03T14:22:10Z"
}
```

βœ… These events trigger:

  • Generator retries
  • Human escalation
  • Notification pipeline entries

πŸ§ͺ Trace Log β€” trace-validation-log.json

Structured trace linkage report:

```json
{
  "trace_id": "invoice-2025-0192",
  "blueprint_id": "usecase-9241",
  "aggregate": "Invoice",
  "handler": "CreateInvoiceHandler",
  "port": "CreateInvoice",
  "adapter": "HttpPost",
  "dto": "CreateInvoiceInput",
  "tests": ["CreateInvoiceTests.cs"],
  "status": "incomplete",
  "missing": ["integration_test", "authorize_attribute"]
}
```

βœ… Consumed by observability agents and used to build lineage dashboards


🧩 Output Signature Tags (Required)

All outputs must include:

```yaml
trace_id: ...
agent_origin: tech-lead-agent
generated_at: {{timestamp}}
status: passed | failed | escalate
blueprint_id: ...
```
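A minimal pre-flight check for these signature tags might look like the following sketch (the helper name and return shape are illustrative, not defined by this spec; the field and status lists come from this section):

```python
REQUIRED_FIELDS = {"trace_id", "agent_origin", "generated_at", "status", "blueprint_id"}
VALID_STATUSES = {"passed", "failed", "escalate"}

def check_signature(output: dict) -> list:
    # Returns a list of problems; an empty list means the output is properly signed.
    problems = ["missing field: " + f for f in sorted(REQUIRED_FIELDS - output.keys())]
    if output.get("status") not in VALID_STATUSES:
        problems.append("invalid status: " + repr(output.get("status")))
    return problems
```

A gate like this can run before any report is emitted, so unsigned or mis-tagged outputs never reach downstream agents.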

🧠 AI Consumption Notes

  • Markdown is human-reviewed but also SK-readable
  • YAML and JSON are passed directly to agents and orchestrators
  • Fields follow ConnectSoft schema specs and are version-controlled

βœ… Summary

The Tech Lead Agent produces:

  • πŸ“ Human-auditable reports (Markdown)
  • βœ… Pipeline-integrated outcomes (YAML)
  • 🧠 Agent messages (JSON)
  • πŸ“Š Trace and rule logs (for observability)

All outputs follow ConnectSoft standard contracts and are signed with agent + trace metadata for auditability.


🧠 Memory Management

The Tech Lead Agent manages both short-term (contextual) and long-term (persistent) memory to:

  • 🧭 Maintain execution state across validation steps
  • πŸ” Reuse trace and validation history
  • πŸ“Š Inform retry decisions and avoid repeated failures
  • 🧠 Enable learning from past escalations, resolutions, and overrides

This dual-memory strategy ensures stateful validation orchestration at scale, even across thousands of simultaneous service workflows.


🧠 Memory Types

| Memory Type | Scope | Description |
| --- | --- | --- |
| πŸ”„ Short-Term (Contextual) | Runtime (in-session) | Used for reasoning about the current request: ports, traces, adapters, violations |
| 🧠 Long-Term (Persistent) | Cross-validation, cross-service | Used for remembering past PRs, overrides, common violations, architectural patterns |

πŸ”„ Short-Term Memory (Execution Context)

Stored in the Semantic Kernel context during one validation cycle.

Includes:

  • trace_id, blueprint_id, service_name
  • Loaded port and adapter manifests
  • Temporary validation results
  • Contextual skill state (e.g., last failed rule)

βœ… Reset after each validation pass
βœ… Maintained in memory for multi-phase validation flows


🧠 Long-Term Memory (Persistent Stores)

Stored in external backends:

| Memory Type | Backend |
| --- | --- |
| Validation history | Azure Table / NoSQL store |
| Prior trace lineage | Redis / Distributed cache |
| Rule violation patterns | Vector DB (Qdrant / Pinecone) |
| Escalation overrides | JSON blob or policy database |
| Annotated test gaps | SpecFlow DB or test coverage index |

πŸ—‚ Memory Schema: memory/validation-history/{service}/{trace_id}.json

```json
{
  "trace_id": "invoice-2025-0192",
  "status": "failed",
  "violations": ["TEST003", "SEC005"],
  "actions": ["suggest_regeneration"],
  "last_attempt": "2025-05-03T14:22Z",
  "agent": "tech-lead-agent"
}
```

βœ… Used to inform whether to retry or escalate
βœ… Allows agents to track resolution attempts and blockers


πŸ” Memory Use in Retry Flows

When re-validating the same service trace:

  • Compare current inputs to prior failed attempt
  • If same error recurs β†’ escalate instead of retry
  • If new changes fix prior violations β†’ approve

βœ… Prevents infinite regeneration loops
βœ… Supports AI-based resolution path learning
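The comparison steps above can be sketched as follows (illustrative Python; the function name and record shape mirror the validation-history schema earlier in this section, but are not a defined API):

```python
def decide_retry(current_violations, prior_attempt):
    # prior_attempt mirrors a memory/validation-history record,
    # e.g. {"status": "failed", "violations": ["TEST003", "SEC005"]};
    # it is None when this trace has never failed before.
    if not current_violations:
        return "approve"    # new changes fixed the prior violations
    if prior_attempt is None:
        return "retry"      # first failure: let the responsible agent fix it
    if set(current_violations) & set(prior_attempt.get("violations", [])):
        return "escalate"   # same error recurs: stop looping, involve a human
    return "retry"
```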


πŸ” Memory Indexing for Reasoning

Vector similarity queries may be used for:

  • β€œHave we seen this trace structure before?”
  • β€œWhat test files typically cover this handler?”
  • β€œWas this rule overridden for this tenant or edition in the past?”

Skills like recommendRegenerationStrategy() use indexed memories to determine fix strategy.


🧠 Memory-Aware Prompting

The agent injects memory facts into validation context:

```text
[Memory Hint] This handler failed in the last validation run due to missing [Authorize].
If unchanged, escalate to human. If fixed, re-validate only tests.
```

πŸ“¦ Optional Memory Plug-ins

| Plug-in | Purpose |
| --- | --- |
| `SkillExecutionLogStore` | Track skill-level execution traces |
| `PRValidationLedger` | Central ledger of pass/fail approvals |
| `ViolationSummaryIndex` | Most common DDD/architecture violations by domain |
| `ServiceMemoryStore` | Agent-level key-value store per microservice |

βœ… Summary

The Tech Lead Agent uses:

  • πŸ”„ Short-term memory for in-session validation reasoning
  • 🧠 Long-term memory for historical tracking, retry safety, and escalation context
  • πŸ“Š Multi-source, observable memory stores, ensuring accuracy and cross-validation resilience

🎯 Validation Logic

The Tech Lead Agent enforces a set of architectural, functional, security, and quality rules using a layered validation engine. This logic ensures that every microservice is:

  • βœ… Aligned with Clean Architecture
  • βœ… Properly tested and secured
  • βœ… Traceable to its blueprint and execution lineage
  • βœ… Ready for integration and deployment

πŸ§ͺ Validation Layers

```mermaid
graph TD
    Start[Start Validation]
    A[Architecture Validation]
    B[Traceability Validation]
    C[Test Coverage Validation]
    D[Security Enforcement]
    E[DTO & Adapter Rules]
    F[Validation Summary + Decision]

    Start --> A --> B --> C --> D --> E --> F
```

Each layer contributes rules, violation reports, and suggested fix paths.
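The layered flow above can be sketched as a simple ordered pipeline (illustrative Python; the validator callables and bundle shape are assumptions, not part of the spec):

```python
# Layer names mirror the validation diagram, in execution order.
LAYERS = ["architecture", "traceability", "test_coverage", "security", "dto_adapter"]

def run_validation(bundle, validators):
    # Run every layer in diagram order and collect ALL violations,
    # rather than stopping at the first failure, so the final report
    # and decision step see the complete picture.
    violations = []
    for layer in LAYERS:
        violations.extend(validators[layer](bundle))
    return violations
```

Collecting every violation in one pass is what lets a single report list all failed checks, instead of forcing repeated round-trips for each one.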


βœ… Architecture Rules

| Rule ID | Description |
| --- | --- |
| ARC001 | DomainModel must not be referenced in the Adapter layer |
| ARC002 | Use cases must implement declared ports (e.g., `IHandle<T>`) |
| ARC003 | Ports must exist in ApplicationModel, not in InfrastructureModel |
| ARC004 | Adapters must resolve ports via DI, not direct instantiation |
| ARC005 | DTO ↔ Entity separation must be enforced at boundaries |

πŸ“­ Traceability Rules

| Rule ID | Description |
| --- | --- |
| TRC001 | All handlers must carry `trace_id`, `correlation_id`, `tenant_id` |
| TRC002 | Events and messages must forward trace headers |
| TRC003 | Every artifact must link to `blueprint_id` |
| TRC004 | PR commits must include a `trace_id` reference in the message or metadata |

πŸ§ͺ Testing Rules

| Rule ID | Description |
| --- | --- |
| TEST001 | All new use case handlers require unit test coverage |
| TEST002 | All handlers with external side effects must have integration tests |
| TEST003 | Feature-level SpecFlow (BDD) required if the blueprint type = UserStory |
| TEST004 | Handlers missing coverage emit `TestCoverageViolationDetected` |
| TEST005 | Coverage must exceed a minimum threshold (e.g., 80%) if `--StrictValidation` is true |

πŸ” Security Rules

| Rule ID | Description |
| --- | --- |
| SEC001 | All controller routes must have the `[Authorize]` attribute |
| SEC002 | Tenant-aware handlers must validate tenant ownership |
| SEC003 | Any service with `IUserContext` must verify roles or scopes |
| SEC004 | SignalR or gRPC endpoints must enforce token validation |
| SEC005 | Authorization metadata must match `authorization-map.yaml` |

🧩 DTO and Adapter Rules

| Rule ID | Description |
| --- | --- |
| DTO001 | No domain entity may be serialized in a public API response |
| DTO002 | DTO naming convention: `VerbNounInput`, `NounDto`, `EventPayload` |
| ADP001 | Adapters must not inject `DbContext` or domain repositories directly |
| ADP002 | No usage of domain services inside adapter components |
| ADP003 | Adapter interfaces must conform to port contracts declared in `input-ports.json` |

🧠 Rule Execution Engine

The agent applies all rules via:

  • Semantic Kernel function chaining (SK planner)
  • Custom validators and analyzers (SK skills or .NET analyzers)
  • Rule result aggregation into report output
  • Auto-suggestion for fix paths (suggest_regeneration, escalate_to_human)

Each rule returns:

  • passed, failed, or skipped
  • evidence (file, line, trace ID, reason)
  • rule_id, description, severity

πŸ“Š Result Aggregation Logic

All validation results are scored and aggregated into:

  • validation-report.md
  • pull-request-review.yaml
  • Optional: validation-score.json (with weights, e.g., for CI gates)

Decision Criteria:

| Condition | Output |
| --- | --- |
| No violations | βœ… Approve PR |
| Critical rule(s) fail | ❌ Reject PR |
| Rule conflicts or unknown trace | ⚠️ Escalate to human |
| Non-critical violations only | 🟑 Approve with warning (optional config) |
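The decision criteria above can be sketched as a small aggregation step (illustrative Python; the `severity` labels are assumptions this spec does not define, and the result strings map one-to-one onto the table rows):

```python
def decide(rule_results):
    # Each result mirrors the rule engine output: rule_id, status, severity.
    failures = [r for r in rule_results if r["status"] == "failed"]
    if any(r.get("severity") == "conflict" for r in failures):
        return "escalate_to_human"      # rule conflicts or unknown trace
    if any(r.get("severity") == "critical" for r in failures):
        return "reject_pr"              # critical rule(s) fail
    if failures:
        return "approve_with_warning"   # non-critical violations only
    return "approve_pr"                 # no violations
```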

🧠 Example Output Entry

```json
{
  "rule_id": "ARC001",
  "description": "Domain model referenced in adapter",
  "file": "Adapters/OrderAdapter.cs",
  "line": 42,
  "status": "failed",
  "suggestion": "Move domain usage into use case and expose via port"
}
```

βœ… Summary

The Tech Lead Agent uses a layered validation engine to apply:

  • Architectural rules
  • Trace compliance checks
  • Test and coverage enforcement
  • Security and access policies
  • DTO and adapter boundaries

All rules are versioned, structured, and linked to output artifacts, ensuring full transparency and retry-aware execution.


🎯 Retry and Correction Flow

The Tech Lead Agent is designed not only to detect validation failures, but also to trigger intelligent retries, regeneration, or escalation based on:

  • βœ… Severity of the violation
  • βœ… Agent-resolvable issues vs. manual fixes
  • βœ… Prior validation attempts
  • βœ… Memory and trace metadata

This enables the ConnectSoft Factory to self-correct misalignments autonomously, while involving humans only when necessary.


πŸ” Retry Decision Flow

```mermaid
flowchart TD
    Start([Validation Fails])
    CheckMemory{{Was this error seen before?}}
    CheckAgentFix{{Can another agent fix it?}}
    Retry(Trigger agent retry)
    Escalate(Escalate to human)
    Block(Hard reject)
    Pass(Validation passed)

    Start --> CheckMemory
    CheckMemory -->|First time| CheckAgentFix
    CheckMemory -->|Repeated failure| Escalate
    CheckAgentFix -->|Yes| Retry
    CheckAgentFix -->|No| Escalate
```

🧠 Retry Types

| Failure Type | Retry Action |
| --- | --- |
| πŸ§ͺ Missing test | Trigger TestGeneratorAgent with use case context |
| πŸ“­ Misaligned port | Trigger MicroserviceGeneratorAgent to regenerate handler |
| πŸ” Missing auth | Trigger SecurityPolicyAgent to inject policy guard |
| 🧱 Wrong adapter layer | Notify AdapterGeneratorAgent for restructuring |
| πŸ” Missing trace tag | Inject metadata or route to TraceAgent |
| πŸ€·β€β™‚οΈ Unclear violation | Escalate to human reviewer |

πŸ“¦ Correction Event: CustomGenerationRequired

JSON Example:

```json
{
  "event": "CustomGenerationRequired",
  "agent_origin": "tech-lead-agent",
  "target_agent": "MicroserviceGeneratorAgent",
  "trace_id": "billing-2025-0142",
  "blueprint_id": "usecase-7342",
  "fix_type": "regenerate_handler",
  "reason": "Missing or incorrect port implementation for CreateInvoiceHandler"
}
```

βœ… Routed via agent mesh β†’ Planner β†’ Retry agent


πŸ§ͺ Correction Output: suggest_regeneration in YAML

YAML snippet:

```yaml
status: failed
actions:
  - suggest_regeneration
  - target_agent: AdapterGeneratorAgent
    reason: Adapter uses domain logic directly
```

🧠 Retry Limits & Memory Check

To avoid infinite loops:

  • Validation memory stores retry counts by trace_id
  • After 2 attempts for the same violation:
    • β›” Escalate instead of retry
    • πŸ”Ž Include prior resolution attempts in context

Memory schema example:

```json
{
  "trace_id": "billing-2025-0142",
  "retry_count": 2,
  "last_result": "failed",
  "last_agent": "AdapterGeneratorAgent",
  "next_action": "escalate"
}
```
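The retry cap can be sketched directly against records of this shape (illustrative Python; the function name is an assumption, the two-attempt limit comes from this section):

```python
MAX_RETRIES = 2  # after 2 attempts for the same violation, escalate instead of retry

def next_action(history):
    # history mirrors the memory schema above, e.g.
    # {"trace_id": "...", "retry_count": 2, "last_result": "failed"}.
    if history.get("last_result") != "failed":
        return "none"       # nothing to correct
    if history.get("retry_count", 0) >= MAX_RETRIES:
        return "escalate"   # safe attempts exhausted: hand off to a human
    return "retry"
```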

🧩 Escalation Event: HumanEscalationSuggested

Triggered when:

  • Errors are ambiguous or unresolved
  • Multiple agents produce conflicting artifacts
  • Blueprint context is missing or invalid
  • Retry loop has exhausted safe attempts

Escalation includes:

  • Full validation report
  • Summary of previous retries
  • Suggestions for manual correction

πŸ”„ Regeneration Capabilities

| Agent | Can Fix |
| --- | --- |
| MicroserviceGeneratorAgent | Handlers, DTOs, port bindings |
| AdapterGeneratorAgent | Adapter cleanup, port correction |
| SecurityPolicyAgent | `[Authorize]`, roles, tenant guards |
| TestGeneratorAgent | Unit/integration test generation |
| TraceAgent | Inject `trace_id`, validate propagation |
| Human Architect | Override rule, bypass gating, explain intent |

🧠 Summary Table: Retry or Escalate?

| Violation | Retry | Escalate |
| --- | --- | --- |
| ❌ Missing integration test | βœ… | β€” |
| 🧱 Adapter breaks layer rule | βœ… | β€” |
| ❓ Unknown port name | β€” | βœ… |
| πŸ” Repeated failed validation | β€” | βœ… |
| πŸ§ͺ Invalid trace match | βœ… | If no blueprint found β†’ escalate |
| πŸ” Security policy conflict | β€” | βœ… |

βœ… Summary

The Tech Lead Agent includes a smart, memory-driven retry and correction system that:

  • Automatically triggers responsible agent corrections
  • Limits retries to avoid loops
  • Escalates unresolved issues with full traceability
  • Makes the ConnectSoft Software Factory resilient and self-healing

🀝 Collaboration Interfaces

The Tech Lead Agent acts as a central reviewer and coordinator, interfacing with multiple other agents and systems.

It uses event-driven messaging, Semantic Kernel skill chaining, and traceable payload contracts to enable automated orchestration, retries, approvals, and escalations.


πŸ“‘ Collaboration Topology

```mermaid
graph TD
  GeneratorAgent --> TechLead
  AdapterGeneratorAgent --> TechLead
  BackendDeveloperAgent --> TechLead
  CodeCommitterAgent --> TechLead
  TestGeneratorAgent --> TechLead
  PullRequestAgent --> TechLead
  TechLead --> GeneratorAgent
  TechLead --> AdapterGeneratorAgent
  TechLead --> PullRequestAgent
  TechLead --> HumanReviewer
```

βœ… The Tech Lead Agent receives engineering artifacts
βœ… Validates them
βœ… Routes actions to other agents or human interfaces


πŸ“¬ Supported Message Types

| Message/Event | Direction | Purpose |
| --- | --- | --- |
| FeatureGeneratedEvent | Generator β†’ TL | Trigger validation of generated code |
| AdapterReadyEvent | Adapter Generator β†’ TL | Check adapter layering and port usage |
| PullRequestReadyToValidate | Committer β†’ TL | Final review before PR approval |
| ValidationReportProduced | TL β†’ PR Agent | Review YAML + Markdown to approve/reject |
| CustomGenerationRequired | TL β†’ Generator/Adapter | Retry with fix strategy |
| HumanEscalationSuggested | TL β†’ Human Reviewer | Manual override or decision required |

πŸ”Œ Inter-Agent Interface Contracts

All communication uses ConnectSoft Agent Event Schema, typically in:

  • *.event.json (inter-agent triggers)
  • *.trace.json (trace reports)
  • *.yaml (PR review)
  • *.md (developer feedback)

πŸ“€ Emitted to: Pull Request Agent

Purpose: Approve or reject merge

```yaml
status: passed | failed
trace_id: ...
blueprint_id: ...
violations: [...]
actions: [...]
agent_origin: tech-lead-agent
```

πŸ” Emitted to: Generator or Adapter Agent

Purpose: Request regeneration/fix

```json
{
  "event": "CustomGenerationRequired",
  "target_agent": "AdapterGeneratorAgent",
  "reason": "Domain model used in adapter layer",
  "trace_id": "order-2025-00341"
}
```

πŸ™‹ Emitted to: Human Reviewer Agent

Purpose: Escalate ambiguous or repeated validation failures

```json
{
  "event": "HumanEscalationSuggested",
  "trace_id": "billing-2025-0924",
  "reason": "Adapter and handler disagree on port mapping",
  "previous_attempts": 2,
  "agent_origin": "tech-lead-agent"
}
```

πŸ” Skill Chaining & Semantic Kernel

When invoked within a planner, the Tech Lead Agent exposes skills such as:

  • validateHandler(handlerPath)
  • checkAuthorization(route)
  • getValidationSummary(trace_id)
  • routeCorrection(targetAgent)
  • recommendHumanReview()

These are callable via SK kernel.InvokeAsync(...) or MCP-backed planners.


πŸ“‘ API-Level Integration (Optional)

If connected via orchestrator or API gateway, the agent can expose:

  • POST /validate β€” submit bundle for review
  • GET /report/{trace_id} β€” retrieve result
  • POST /escalate β€” trigger manual override flow

Authentication: JWT with agent identity or pipeline role claim
Headers: `X-Trace-ID`, `X-Blueprint-ID`, `X-Agent-Origin`


🧠 Message Routing Strategy

| Scenario | Routed To |
| --- | --- |
| Handler violates port contract | MicroserviceGeneratorAgent |
| Adapter misuses domain entity | AdapterGeneratorAgent |
| Missing test file | TestGeneratorAgent |
| Auth check fails | SecurityPolicyAgent or escalate |
| Conflicting artifacts | HumanReviewerAgent |
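The routing table above maps naturally onto rule-ID prefixes (illustrative Python sketch; the prefix convention follows the rule IDs defined in the Validation Logic section, and the fallback to a human reviewer is this document's own default for unmapped cases):

```python
# Illustrative routing table: rule-ID prefix -> responsible agent.
ROUTES = {
    "ARC": "MicroserviceGeneratorAgent",  # handler violates port contract
    "ADP": "AdapterGeneratorAgent",       # adapter misuses domain entity
    "TEST": "TestGeneratorAgent",         # missing test file
    "SEC": "SecurityPolicyAgent",         # auth check fails
}

def route(rule_id):
    # Strip the numeric suffix (e.g. "ARC001" -> "ARC") and look up the agent;
    # anything unmapped or conflicting goes to a human reviewer.
    prefix = rule_id.rstrip("0123456789")
    return ROUTES.get(prefix, "HumanReviewerAgent")
```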

βœ… Summary

The Tech Lead Agent collaborates with:

  • Engineering agents (Generator, Adapter, Committer)
  • Coordinators (Pull Request Agent, Planner)
  • Optional Human Reviewer (when ambiguity exists)

It uses traceable contracts, SK skill chaining, and event-driven messaging to keep the ConnectSoft engineering lifecycle safe, traceable, and autonomous.


🎯 Observability Hooks

To operate at scale and with accountability, the Tech Lead Agent exposes rich observability interfaces, ensuring that:

  • πŸ“Š Every validation step is traceable
  • 🧠 Each decision is explainable
  • 🚦 Failures, retries, and escalations are logged and measurable
  • πŸ“ˆ Telemetry feeds into factory-wide dashboards and audit systems

This aligns with ConnectSoft’s principle: Observability-First by Design.


πŸ“Š Observability Dimensions

| Dimension | Details |
| --- | --- |
| Tracing | OpenTelemetry spans, correlation with `trace_id` |
| Logging | Structured logs with agent + service context |
| Metrics | Prometheus-style metrics for success/failure, latency |
| Audit Trail | Per-trace report, validation decisions, retry counts |
| Dashboards | Grafana / Azure Monitor / SK Planner UI integration |

πŸ” Distributed Tracing

| Tracing Tool | Status |
| --- | --- |
| OpenTelemetry | βœ… Enabled |
| Application Insights | βœ… Optional |
| Jaeger / Zipkin | βœ… Compatible |
| `traceparent` / `tracestate` headers | βœ… Injected |

Each validation run includes:

  • agent_origin: tech-lead-agent
  • trace_id: {{from blueprint or orchestrator}}
  • span_id for each major phase (e.g., validatePorts, checkTests, outputReview)

Sample Trace Flow

```mermaid
sequenceDiagram
    participant Generator
    participant TechLeadAgent
    participant AdapterGen
    participant PRAgent

    Generator->>TechLeadAgent: FeatureGeneratedEvent
    activate TechLeadAgent
    TechLeadAgent->>TechLeadAgent: validatePort()
    TechLeadAgent->>TechLeadAgent: checkAuthorization()
    TechLeadAgent->>AdapterGen: emit CustomGenerationRequired
    TechLeadAgent->>PRAgent: emit pull-request-review.yaml
    deactivate TechLeadAgent
```

βœ… All steps wrapped in spans with duration and trace context


πŸ“‹ Logging Fields

| Field | Description |
| --- | --- |
| `trace_id` | Connects output to blueprint and service |
| `agent_origin` | `tech-lead-agent` |
| `status` | `passed`, `failed`, `escalated`, `retry` |
| `validation_rule` | Rule ID that triggered the log |
| `file`, `line`, `violation_code` | If applicable |
| `duration_ms` | Time per validation phase |
| `related_agent` | For routing or blame purposes |

Format: JSON log entries via Serilog, output to console + sink (File / Azure)


πŸ“ˆ Metrics Exported

| Metric Name | Description |
| --- | --- |
| `techlead_validations_total` | Total validation runs (labels: status, service) |
| `techlead_rule_violations_total` | Count of rule hits per rule ID |
| `techlead_validation_duration_seconds` | Histogram of validation time |
| `techlead_escalations_total` | Number of human escalations |
| `techlead_retry_loops_total` | Retry cycles triggered |
| `techlead_handler_failures_total` | Failed handlers by trace group |

βœ… Scraped by Prometheus
βœ… Visualized in Grafana or Azure Dashboard tiles
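As a sketch of how these counters are labeled and incremented (a minimal in-process stand-in, not the actual Prometheus client library; metric names come from the table above, the `Metrics` class is an assumption):

```python
from collections import Counter

class Metrics:
    # Minimal in-process stand-in for the labeled counters listed above.
    def __init__(self):
        self._counters = Counter()

    def inc(self, name, **labels):
        # Labels are sorted so (status=..., service=...) and
        # (service=..., status=...) address the same time series.
        self._counters[(name, tuple(sorted(labels.items())))] += 1

    def value(self, name, **labels):
        return self._counters[(name, tuple(sorted(labels.items())))]

metrics = Metrics()
metrics.inc("techlead_validations_total", status="failed", service="BillingService")
metrics.inc("techlead_rule_violations_total", rule_id="TEST003")
```

In production these would be real Prometheus counters with the same names and labels, scraped from a `/metrics` endpoint.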


🧾 Validation Audit Report Store

All reports are stored in:

```text
/audit/validation-reports/{service}/{trace_id}/
  β”œβ”€β”€ validation-report.md
  β”œβ”€β”€ pull-request-review.yaml
  β”œβ”€β”€ trace-validation-log.json
  └── validation-history.json
```

βœ… Enables postmortem reviews
βœ… Supports time-based rollups and trace lineage comparisons


πŸ“Š Dashboard Views (Grafana / Azure)

| Panel | Purpose |
| --- | --- |
| πŸ” Retry Rates | Spot loops or frequent agent failures |
| ❗ Top Failing Rules | Focus dev/agent improvements |
| 🟑 Escalation Trends | Investigate ambiguous workflows |
| ⏱ Validation Duration | Detect slow paths or bloated flows |
| πŸ“ˆ Trace Coverage | Ensure every `trace_id` is fully linked across flows |

🧠 Planner Feedback

If used within Semantic Kernel orchestration:

  • Sends ValidationSummary object back into planner context
  • Can be consumed by AI QA Agent, CI Planner, or Release Safety Checker

```json
{
  "trace_id": "...",
  "status": "failed",
  "failed_rules": ["ARC001", "TEST003"],
  "next_action": "suggest_regeneration"
}
```

βœ… Summary

The Tech Lead Agent includes full-spectrum observability hooks:

  • βœ… OpenTelemetry tracing
  • βœ… Structured logging (Serilog, JSON)
  • βœ… Metrics (Prometheus-compatible)
  • βœ… Per-service validation audit reports
  • βœ… Planner-compatible summaries

Making it an auditable, diagnosable, and safety-critical part of the AI Software Factory.


πŸ™‹ Human Intervention Hooks

While the Tech Lead Agent is designed for autonomous validation and correction, there are edge cases where human review or override is required.

Human Intervention Hooks provide:

  • β›” Failsafe mechanisms for ambiguous or risky decisions
  • 🧠 Feedback loops for agent improvement
  • πŸ”„ Support for policy exceptions, fast-lane approvals, or developer intent clarification

These hooks ensure trust, transparency, and oversight in the AI-governed engineering lifecycle.


πŸ” Human Intervention Scenarios

| Scenario | Description |
| --- | --- |
| ❓ Ambiguous rule result | Conflicting signals or incomplete context |
| βš”οΈ Conflicting agent outputs | Two agents generate incompatible artifacts |
| πŸ§ͺ Repeated validation failure | Same `trace_id` fails multiple times with no progress |
| πŸ” Security decision override | Possible authorization bypass or platform-specific exception |
| 🧱 Architecture rule exemption | Intentional deviation (e.g., performance, legacy system) |
| πŸ’¬ Developer clarification required | Missing annotations, unclear file ownership, unknown adapter |

πŸ”” Triggered Event: HumanEscalationSuggested

Emitted by the Tech Lead Agent:

```json
{
  "event": "HumanEscalationSuggested",
  "trace_id": "billing-2025-0192",
  "blueprint_id": "usecase-7342",
  "reason": "Repeated failure to meet port-layer separation for handler",
  "previous_attempts": 2,
  "last_agent_attempted": "AdapterGeneratorAgent",
  "violations": ["ARC001", "DTO002"],
  "timestamp": "2025-05-03T14:55:03Z"
}
```

βœ… Routed to:

  • HumanReviewerAgent
  • Web dashboard (approval queue)
  • Slack/Teams via webhook (if configured)

πŸ§‘β€πŸ’Ό Human Interfaces Supported

| Channel | Action |
| --- | --- |
| Web UI (Approval Dashboard) | View validation reports, override or send back |
| PR UI (GitHub/Azure DevOps) | PR comment with AI summary, approve manually |
| ChatOps (Slack/Teams) | Buttons: βœ… Approve, ❌ Reject, πŸ” Retry with agent |
| Email Summary | Escalation notification with links to report |

πŸ”“ Override Path

If a human overrides the agent:

  1. Override must include:
    • Reason
    • Reviewer name or ID
    • Scope of exemption (e.g., single handler, full service)
    • Time limit (optional)
  2. Logged to: `manual-overrides/{trace_id}.yaml`
  3. Validation report is updated with a note:

   ```yaml
   manual_override:
     by: sarah.architect@connectsoft.ai
     date: 2025-05-03
     reason: Legacy handler β€” adapter rule exempted
     scope: CreateInvoiceHandler.cs only
   ```

  4. Future runs detect the override and skip the exempted rule(s) accordingly.
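The override-aware skip can be sketched as follows (illustrative Python; the `rule_ids` field is an assumption beyond the spec's example record, which lists only by/date/reason/scope, and the substring-based scope check is a simplification):

```python
def rules_to_skip(override, target_file):
    # override mirrors a manual-overrides/{trace_id}.yaml record.
    # The `rule_ids` field is an illustrative assumption; the scope check
    # treats the scope string as covering any file it names.
    scope = override.get("scope", "")
    if scope and target_file not in scope:
        return set()  # exemption does not cover this file
    return set(override.get("rule_ids", []))
```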

πŸ“₯ Feedback Loop to Agent Memory

Every escalation and human decision is:

  • Stored in vector memory / override index
  • Used to avoid repeat escalations for same pattern
  • Indexed with tags: approved_with_exception, security_override, architecture_exempt

βœ… Used by planner to adapt future retries and flows


🧠 Agent Prompt Injection (Post-Escalation)

```text
[Memory] This handler was previously escalated due to ARC001 violation.
A human reviewer approved it with a justification.
Do not revalidate this rule again for this file.
```

βœ… Summary

The Human Intervention Hooks enable:

  • Clear escalation triggers
  • Traceable override approvals
  • Structured exemption recording
  • Secure and human-readable interfaces
  • Memory-powered adaptive behavior in future validations

The Tech Lead Agent remains autonomous-first but human-aware, creating a balance between AI governance and human engineering oversight.