
🧠 Test Case Generator Agent Specification

🎯 Purpose

The Test Case Generator Agent is responsible for the autonomous creation of structured, executable, and traceable test cases for:

  • πŸ§ͺ Unit tests
  • 🌐 Integration tests
  • πŸ”„ End-to-End flows
  • πŸ“˜ BDD scenarios
  • ⚠️ Edge and error condition validations

Its mission is to ensure every use case, handler, adapter, and event produced by the factory is accompanied by a corresponding test plan and test implementation scaffold β€” ready for execution, validation, and review.


🧩 Strategic Role in the Factory

This agent:

  • Bridges the gap between architecture, implementation, and validation
  • Collaborates with the QA Engineer, Backend Developer, Frontend Developer, Bug Resolver, and Microservice Generator agents
  • Ensures test coverage and behavior are aligned with specifications, blueprints, and real execution paths
  • Enables quality enforcement through agent-validated traceable test metadata

🧠 The agent acts as the first line of defense against regressions and invalid logic β€” providing the foundation for automated quality gates in CI/CD pipelines and Studio dashboards.


πŸ“ Position in Execution Flow

```mermaid
flowchart TD
    VisionBlueprint --> MicroserviceGenerator
    MicroserviceGenerator --> BackendDeveloper
    BackendDeveloper --> TestCaseGenerator
    TestCaseGenerator --> QAEngineer
    TestCaseGenerator --> PRAgent
    TestCaseGenerator --> TestCoverageValidator
```

βœ… Activated after use case handlers and ports are scaffolded, and may trigger:

  • SpecFlow scenario creation
  • MSTest-based unit tests
  • Mock configuration stubs
  • BDD feature files with Given-When-Then patterns

πŸ›  Why This Agent Is Critical

| Without It | With It |
|---|---|
| ❌ Missing or partial test coverage | βœ… Guaranteed test for each handler, adapter, and service |
| ❌ Manual test writing delays delivery | βœ… Automated test plan generation in seconds |
| ❌ No trace from feature β†’ test β†’ handler | βœ… Test coverage mapped via trace_id, blueprint_id |
| ❌ Error-prone regression handling | βœ… Edge case testing, negative scenario modeling |
| ❌ Weak QA coordination | βœ… QA agents and humans operate on agent-generated artifacts |

🧠 Sample Use Case

Blueprint: usecase-0371 (CreateInvoice)

β†’ After handler and controller are scaffolded:

TestCaseGeneratorAgent
πŸ“¦ Generates:
 - CreateInvoiceHandlerTests.cs
 - InvoiceControllerTests.cs
 - create_invoice.feature (BDD)
 - test-metadata.yaml (trace-aligned)

βœ… Summary

The Test Case Generator Agent enables ConnectSoft to:

  • Automate trace-aligned test coverage
  • Enforce test-first and validation-driven development
  • Support both human QA workflows and autonomous agent testing
  • Produce test artifacts ready for CI/CD, PR validation, and Studio dashboards
  • Align test execution with blueprint-defined behavior, roles, and expectations

It transforms raw handler scaffolds into validated, test-covered service units, ensuring resilience, correctness, and delivery confidence.


🏭 Strategic Position in the Factory

The Test Case Generator Agent occupies a pivotal point in the ConnectSoft AI Software Factory, positioned between code generation and quality validation. It is triggered automatically once a handler, port, adapter, or controller has been scaffolded β€” ensuring that testing becomes an integral, traceable output, not an afterthought.


πŸ” Integration with Factory Workflow

```mermaid
flowchart TD
    VisionAgent --> Blueprint
    Blueprint --> MicroserviceGenerator
    MicroserviceGenerator --> BackendDeveloper
    BackendDeveloper --> UseCaseHandler
    UseCaseHandler --> TestCaseGeneratorAgent
    TestCaseGeneratorAgent --> TestMetadata
    TestCaseGeneratorAgent --> QAEngineerAgent
    TestCaseGeneratorAgent --> TestCoverageValidator
    TestMetadata --> PullRequestAgent
```

πŸ”— Core Integration Points

| Upstream Agent | Provides | Example |
|---|---|---|
| Backend Developer Agent | Use case handler, DTOs | CreateInvoiceHandler.cs |
| Microservice Generator Agent | Scaffolded solution structure | Tests/, UseCases/, Controllers/ |
| Application Architect Agent | Port and policy definitions | IHandle<CreateInvoiceInput> |
| QA Engineer Agent | Scenario mappings and validation rules | qa-checklist.yaml |
| Studio or Human Reviewer | Correction, customization of BDD features | create_invoice.feature |

πŸ” Downstream Consumers of Test Artifacts

| Agent | Consumes | Purpose |
|---|---|---|
| QA Engineer Agent | test-metadata.yaml, .feature files | Regression plan, test grid |
| Pull Request Agent | Test case links, coverage indicators | PR annotations |
| Bug Resolver Agent | Reproducers, scenario runners | Uses generated tests for regression |
| Test Coverage Validator | Coverage metrics, assertion validation | Aggregated feedback loop |
| Studio | Interactive views for each test-to-trace map | Developer + QA UI hooks |

🧠 Enrichment Role

The Test Case Generator Agent doesn’t just generate static code β€” it:

  • Adds πŸ“Ž trace_id, blueprint_id, and handler metadata to each test
  • Emits πŸ“¦ structured metadata files for test traceability
  • Links use cases to human-readable and BDD-format test scenarios
  • Aligns test case naming and structure with architectural contracts

🧱 Placement by Code Layer

| Code Layer | Tests Generated |
|---|---|
| DomainModel | Domain event test cases (if behavior exists) |
| ApplicationModel | Unit tests per handler, port, validator |
| InfrastructureModel | Adapter-level integration tests |
| ServiceModel | Controller tests, middleware edge-case tests |

βœ… All tests follow ConnectSoft Clean Architecture rules
βœ… Test files are injected into corresponding *.UnitTests/ or *.Specs/ projects


πŸ“¦ Generated File Locations

| Artifact Type | Location |
|---|---|
| Unit Tests | Tests/ServiceName.UnitTests/UseCases/*.cs |
| Integration Tests | Tests/ServiceName.IntegrationTests/*.cs |
| BDD Specs | Tests/ServiceName.Specs/Features/*.feature |
| Test Metadata | Tests/test-metadata.yaml |
| Studio Snapshots | Studio/test-preview/*.md |

πŸ” Trigger Scenarios

| Factory Event | Triggered Action |
|---|---|
| UseCaseHandlerGenerated | Scaffold MSTest-based test |
| AdapterGeneratedEvent | Add integration tests for that adapter |
| PortWithRolesDefined | Generate role-based BDD scenario |
| MissingTestCoverageDetected | Retry or emit correction plan |
| Blueprint.TraceValidationStarted | Inject test alignment markers |

βœ… Summary

The Test Case Generator Agent is tightly integrated into the ConnectSoft assembly line:

  • It reacts to microservice, handler, and adapter creation
  • It orchestrates test generation across use case, integration, and behavioral levels
  • It emits machine- and human-readable artifacts
  • It closes the feedback loop between generation, coverage, QA, and human review

This agent ensures that testing is not a phase β€” it’s a productized, traceable output of the software factory.


πŸ“‹ Responsibilities

The Test Case Generator Agent is responsible for generating complete, traceable, and structurally correct test artifacts across multiple levels of the software stack. Its primary goal is to ensure that every generated feature or handler has corresponding unit, integration, and behavioral tests β€” all aligned to the blueprint, trace ID, and service structure.


🎯 Key Responsibilities Breakdown

| Responsibility | Description |
|---|---|
| 1. Generate Unit Tests for Use Case Handlers | Create MSTest-based unit test classes for each IHandle<T> use case implementation. |
| 2. Generate Validator Tests | For each FluentValidation-based validator, generate negative and positive test cases. |
| 3. Generate Controller-Level Tests | Scaffold controller endpoint tests with mocked dependencies and coverage assertions. |
| 4. Generate BDD .feature Files | Emit Given-When-Then style .feature files for QA alignment and test readability. |
| 5. Create Edge and Negative Tests | Add tests for nulls, invalid input, business logic constraints, unauthorized access, etc. |
| 6. Create Integration Test Stubs | Prepare test harnesses that simulate HTTP/gRPC calls to validate service behavior end-to-end. |
| 7. Emit Test Metadata Files | Generate test-metadata.yaml mapping handlers β†’ tests β†’ trace IDs for CI/CD and QA agents. |
| 8. Annotate Tests with Traceability Tags | Embed trace_id, blueprint_id, handler_name in all test class headers. |
| 9. Integrate with Studio Snapshot System | Output developer-readable test summaries and snapshots for visual QA tools. |
| 10. Support Retry and Correction Scenarios | Emit retry suggestions if validation fails or coverage is insufficient. |

🧩 Test Artifacts the Agent Creates

| Artifact | Description |
|---|---|
| CreateInvoiceHandlerTests.cs | Unit test class with all success and failure flows |
| CreateInvoiceValidatorTests.cs | Tests for NotNull, GreaterThan, etc. |
| InvoiceControllerTests.cs | Endpoint-level tests with mocked dependencies |
| create_invoice.feature | BDD file with functional scenario text |
| test-metadata.yaml | Maps all handlers to their corresponding tests |
| TestCoverageSummary.md | Markdown snapshot of test coverage and gaps |

🧠 Example Responsibilities in Action

Given the following inputs:

  • Trace ID: invoice-2025-0143
  • Blueprint ID: usecase-9241
  • Handler: CreateInvoiceHandler

The agent will:

  • βœ… Generate CreateInvoiceHandlerTests.cs with 3–5 scenarios
  • βœ… Produce create_invoice.feature with 2–3 Gherkin-based scenarios
  • βœ… Emit test-metadata.yaml linking this handler to those tests
  • βœ… Auto-wire test project with MSTest and Coverlet support

πŸ“¦ Integration into CI/CD and Studio

| System | Output |
|---|---|
| CI/CD Pipelines | Test projects + coverage collectors ready to run |
| Studio Dashboards | Linked .feature scenarios and test snapshots |
| QA Agents | Artifacts for regression tracking and bug reproduction |
| Tech Lead Agent | Validation hooks for test/handler pairing completeness |

πŸ”„ Reactive Responsibilities (on Retry)

If validation fails (e.g., missing test for CancelInvoiceHandler), the agent will:

  • Re-scan UseCases/ folder
  • Identify missing test class
  • Re-trigger test generation pipeline
  • Annotate retry attempt in metadata

βœ… Summary

The Test Case Generator Agent fulfills a core validation and assurance role in the AI Software Factory:

  • It translates implementation into test scaffolds
  • It ensures coverage and traceability through structured metadata
  • It empowers human QA, agent validation, and CI enforcement through proactive generation of intelligent, layered test artifacts

πŸ“₯ Inputs

To generate accurate, aligned, and traceable test cases, the Test Case Generator Agent consumes a multi-source input context that includes:

  • Source code structures
  • Agent-emitted metadata
  • Domain models and blueprints
  • Runtime configuration files
  • Architectural decisions (ports, roles, expectations)

πŸ“¦ Primary Input Categories

| Input Type | Description | Example |
|---|---|---|
| Use Case Handlers | Code files implementing IHandle<T> | CreateInvoiceHandler.cs |
| Ports Configuration | Input/output port declarations | input-ports.json |
| DTOs and Validators | Classes with FluentValidation rules | CreateInvoiceInput.cs, CreateInvoiceValidator.cs |
| Controllers & Routing | REST/gRPC endpoints | InvoiceController.cs |
| Blueprint Metadata | Trace ID, blueprint ID, feature and security expectations | blueprint_id: usecase-9241 |
| Trace Metadata | Identifiers from microservice generation phase | trace_id: invoice-2025-0143 |
| Agent Events | e.g. MicroserviceGeneratedEvent, HandlerScaffoldedEvent | Used to trigger test generation |
| Role and Policy Maps | Security expectations from Application Architect Agent | authorization-map.yaml |
| Domain Events | Events triggered by handlers | InvoiceCreated, PaymentConfirmed |
| Example Payloads | Used for BDD generation and snapshot creation | create-invoice.sample.json |
| Service Configuration | Options pattern, feature flags, endpoint toggles | InvoiceSettings.cs, appsettings.json |

🧠 Input Example: Ports Configuration

{
  "input_ports": [
    {
      "name": "CreateInvoice",
      "handler": "CreateInvoiceHandler",
      "dto": "CreateInvoiceInput",
      "roles": ["FinanceManager"],
      "events": ["InvoiceCreated"]
    }
  ]
}

β†’ Used to:

  • Generate handler test: CreateInvoiceHandlerTests.cs
  • Apply role test: simulate unauthorized access
  • Create .feature scenarios with role and outcome variations

πŸ“„ Blueprint & Trace ID Context

From generation-metadata.yaml:

trace_id: invoice-2025-0143
blueprint_id: usecase-9241
aggregate: Invoice
features:
  - UseAuthorization
  - UseOpenTelemetry

β†’ Used to:

  • Tag generated tests with metadata
  • Activate auth-specific scenario generation
  • Enforce handler/test mapping for Tech Lead Agent validation

πŸ” Controller Source Example

[HttpPost]
[Authorize(Roles = "FinanceManager")]
public async Task<IActionResult> Create([FromBody] CreateInvoiceInput input) {
    ...
}

β†’ Used to:

  • Assert [Authorize] is tested
  • Auto-generate integration test with 401, 403, 200 coverage
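A minimal sketch of what such a generated authorization test might look like, assuming a hypothetical InvoiceApiFactory fixture whose CreateClientWithRole helper attaches a test identity (the helper mirrors the one used in later examples; it is not a built-in ASP.NET Core API):

```csharp
using System.Net;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class InvoiceControllerAuthTests
{
    // Hypothetical WebApplicationFactory-style fixture; CreateClientWithRole is
    // assumed to attach a test identity carrying the given role.
    private static readonly InvoiceApiFactory factory = new();

    [TestMethod]
    public async Task Post_ShouldReturn401_WhenUnauthenticated()
    {
        var client = factory.CreateClient(); // no identity attached
        var response = await client.PostAsJsonAsync("/api/invoice", new CreateInvoiceInput());
        Assert.AreEqual(HttpStatusCode.Unauthorized, response.StatusCode);
    }

    [TestMethod]
    public async Task Post_ShouldReturn403_WhenUserIsNotFinanceManager()
    {
        var client = factory.CreateClientWithRole("Guest");
        var response = await client.PostAsJsonAsync("/api/invoice", new CreateInvoiceInput());
        Assert.AreEqual(HttpStatusCode.Forbidden, response.StatusCode);
    }
}
```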

πŸ“˜ Validator Class Input

public class CreateInvoiceValidator : AbstractValidator<CreateInvoiceInput>
{
    public CreateInvoiceValidator()
    {
        RuleFor(x => x.TotalAmount).GreaterThan(0);
        RuleFor(x => x.CustomerId).NotEmpty();
    }
}

β†’ Used to:

  • Generate success and failure test cases for boundary values
  • Emit test like TotalAmount_should_be_required
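As an illustration, a generated test for the rules above could use FluentValidation's TestHelper extensions; this is one plausible shape, not the factory's exact template:

```csharp
using System;
using FluentValidation.TestHelper;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CreateInvoiceValidatorTests
{
    private readonly CreateInvoiceValidator validator = new();

    [TestMethod]
    public void Validate_ShouldFail_WhenTotalAmountIsZero()
    {
        // Boundary case derived from RuleFor(x => x.TotalAmount).GreaterThan(0)
        var result = validator.TestValidate(new CreateInvoiceInput { TotalAmount = 0 });
        result.ShouldHaveValidationErrorFor(x => x.TotalAmount);
    }

    [TestMethod]
    public void Validate_ShouldPass_WhenInputIsValid()
    {
        var result = validator.TestValidate(
            new CreateInvoiceInput { TotalAmount = 100, CustomerId = Guid.NewGuid() });
        result.ShouldNotHaveAnyValidationErrors();
    }
}
```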

πŸ“ Input Artifacts Summary

| File | Used For |
|---|---|
| *.cs (handlers) | Test logic mapping |
| *.Validator.cs | Positive/negative test case generation |
| .feature.sample | Snapshot generation |
| trace-validation-log.json | Trace + handler binding |
| authorization-map.yaml | Security tests |
| test-generation-config.yaml | Optional overrides (e.g., skip certain handlers) |

βœ… Summary

The Test Case Generator Agent consumes a multi-layered input graph that includes:

  • πŸ“‚ Source code artifacts
  • πŸ“‘ Semantic metadata (trace, blueprint, role)
  • πŸ” Security and policy maps
  • πŸ§ͺ Sample payloads and agent messages

With these, it automatically generates test code that is meaningful, validated, and trace-aligned β€” ensuring that the factory’s output is safe, testable, and production-ready.


πŸ“€ Outputs

The Test Case Generator Agent emits a full suite of test artifacts, metadata, and traceable outputs to support execution, validation, QA, CI/CD pipelines, and Studio-based inspection.

These outputs are structured, aligned with the originating handler/controller, and embedded with factory-standard trace identifiers.


πŸ“¦ Output Categories

| Output Type | Description |
|---|---|
| Unit Tests | MSTest test classes for each handler and validator |
| Integration Tests | Endpoint-level tests simulating external input/output |
| BDD Feature Files | Human-readable .feature files for QA scenarios |
| Test Metadata | YAML/JSON map of handler-to-test relationships |
| Edge Case Tests | Tests for negative conditions and invalid input |
| Authorization Tests | Role- or claim-specific acceptance/rejection tests |
| Studio Snapshots | Markdown-formatted test summaries and expected outcomes |
| Trace Files | Trace ID–aligned metadata and coverage output |

πŸ“ Directory Output Example

Tests/
β”œβ”€β”€ InvoiceService.UnitTests/
β”‚   β”œβ”€β”€ UseCases/
β”‚   β”‚   β”œβ”€β”€ CreateInvoiceHandlerTests.cs
β”‚   β”‚   └── CancelInvoiceHandlerTests.cs
β”‚   └── Validators/
β”‚       └── CreateInvoiceValidatorTests.cs
β”œβ”€β”€ InvoiceService.IntegrationTests/
β”‚   └── InvoiceControllerTests.cs
β”œβ”€β”€ InvoiceService.Specs/
β”‚   β”œβ”€β”€ Features/
β”‚   β”‚   └── create_invoice.feature
β”‚   └── Steps/
β”‚       └── CreateInvoiceSteps.cs
β”œβ”€β”€ test-metadata.yaml
└── trace-test-coverage.json

πŸ“˜ Sample Output: Unit Test (MSTest)

[TestClass]
[TraceId("invoice-2025-0143")]
[BlueprintId("usecase-9241")]
public class CreateInvoiceHandlerTests
{
    [TestMethod]
    public async Task Handle_ShouldReturnSuccess_WhenValidInputProvided()
    {
        var handler = new CreateInvoiceHandler(...);
        var input = new CreateInvoiceInput { TotalAmount = 100, CustomerId = Guid.NewGuid() };

        var result = await handler.Handle(input);

        Assert.IsTrue(result.IsSuccess);
    }

    [TestMethod]
    public async Task Handle_ShouldThrow_WhenAmountIsZero()
    {
        ...
    }
}

πŸ“˜ Sample Output: BDD .feature

Feature: Create Invoice

Scenario: Successful invoice creation
  Given a finance manager is authenticated
  When they submit a valid invoice
  Then the invoice is created
  And an "InvoiceCreated" event is published

Scenario: Unauthorized user
  Given a guest user
  When they attempt to create an invoice
  Then access is denied

βœ… Linked to: CreateInvoiceSteps.cs with step definitions
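A hedged sketch of what that step-definition class could look like with SpecFlow bindings; the HTTP plumbing is elided and only the binding shape is shown:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

[Binding]
public class CreateInvoiceSteps
{
    private HttpClient client;
    private HttpResponseMessage response;

    [Given(@"a finance manager is authenticated")]
    public void GivenAFinanceManagerIsAuthenticated()
    {
        // Acquire an HttpClient carrying a FinanceManager identity (fixture-specific).
    }

    [When(@"they submit a valid invoice")]
    public async Task WhenTheySubmitAValidInvoice()
    {
        response = await client.PostAsync("/api/invoice", content: null); // payload elided
    }

    [Then(@"the invoice is created")]
    public void ThenTheInvoiceIsCreated()
    {
        Assert.AreEqual(HttpStatusCode.Created, response.StatusCode);
    }
}
```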


πŸ“‘ test-metadata.yaml

trace_id: invoice-2025-0143
blueprint_id: usecase-9241
tests:
  - handler: CreateInvoiceHandler
    unit_test: CreateInvoiceHandlerTests.cs
    feature_file: create_invoice.feature
    validator_test: CreateInvoiceValidatorTests.cs
    roles_tested: ["FinanceManager"]
    negative_cases: 3
  - handler: CancelInvoiceHandler
    unit_test: CancelInvoiceHandlerTests.cs
    feature_file: cancel_invoice.feature

πŸ“ˆ Coverage Trace Output

{
  "trace_id": "invoice-2025-0143",
  "handler_coverage": {
    "CreateInvoiceHandler": {
      "unit_test": true,
      "integration_test": true,
      "feature_file": true,
      "coverage_percent": 93
    },
    "CancelInvoiceHandler": {
      "unit_test": true,
      "integration_test": false,
      "feature_file": false,
      "coverage_percent": 47
    }
  }
}

Used by: QA Agent, Tech Lead Agent, Studio dashboard


πŸ“„ Studio Snapshot Example (Markdown)

### πŸ§ͺ Test Summary: CreateInvoiceHandler

- βœ… Unit Test: βœ”οΈ `CreateInvoiceHandlerTests.cs`
- βœ… Integration Test: βœ”οΈ `InvoiceControllerTests.cs`
- βœ… BDD Scenario: βœ”οΈ `create_invoice.feature`
- πŸ§ͺ Validator: `CreateInvoiceValidatorTests.cs`
- πŸ” Roles Covered: FinanceManager
- ❌ Edge Case Missing: InvalidCurrency

πŸ“Ž Trace ID: `invoice-2025-0143`
πŸ“˜ Blueprint: `usecase-9241`

βœ… Summary

The Test Case Generator Agent produces:

  • βœ… Test classes across unit, integration, and BDD layers
  • βœ… Structured test metadata for trace and agent inspection
  • βœ… Coverage artifacts for CI gates and QA regression planning
  • βœ… Studio- and PR-ready markdown snapshots
  • βœ… Fully executable test code aligned with ConnectSoft's Clean Architecture and factory traceability model

πŸ” Process Flow (High-Level)

The Test Case Generator Agent operates in a structured, event-driven workflow, tightly coupled to the software generation pipeline. It listens to handler generation events, evaluates inputs, queries memory, and emits complete test artifacts for review, validation, and execution.

The process is modular, traceable, retryable, and observable, matching the principles of the ConnectSoft AI Software Factory.


🧬 High-Level Flow Diagram

```mermaid
flowchart TD
    A[Handler Scaffolding Event] --> B[Trigger Test Case Generator Agent]
    B --> C[Analyze Context + Inputs]
    C --> D[Extract Test Targets]
    D --> E[Generate Test Scenarios]
    E --> F["Emit Test Artifacts (.cs, .feature, .yaml)"]
    F --> G[Validate Generated Tests]
    G --> H[Write to Memory + Git]
    H --> I[Emit Events + Notify QA / Studio]
```

πŸ” Step-by-Step Flow Summary

| Phase | Description |
|---|---|
| 1. Triggering | Agent is activated by events like HandlerScaffolded, ControllerGenerated, BlueprintUpdated |
| 2. Context Analysis | Reads trace ID, blueprint, handler structure, DTOs, validator, port config, and role policy |
| 3. Target Extraction | Determines test targets: handler logic, validation, controller flow, role expectations |
| 4. Scenario Generation | Uses SK skills to produce unit tests, integration stubs, .feature files |
| 5. Artifact Emission | Creates files (.cs, .yaml, .md, .feature) and injects them into the solution structure |
| 6. Validation | Confirms syntax, test count, trace alignment, naming convention, and coverage expectations |
| 7. Persistence | Outputs saved to: file system, memory system, and optionally Git repo or PR context |
| 8. Notification | Emits TestArtifactsGenerated and TestCoverageAvailable for other agents and Studio UI |

🧠 Example Trigger and Execution

  • Event: HandlerScaffoldedEvent
  • Module: PaymentsService
  • Handler: CapturePaymentHandler
  • Trace: payments-2025-0117

Agent behavior:

  • βœ… Finds: CapturePaymentHandler.cs, CapturePaymentValidator.cs
  • βœ… Generates:
    • CapturePaymentHandlerTests.cs
    • CapturePaymentValidatorTests.cs
    • capture_payment.feature
    • test-metadata.yaml
  • βœ… Validates placeholder preservation, authorization cases
  • βœ… Logs all actions under traceId: payments-2025-0117

🧠 Alternate Entry Points

The agent can also be activated by:

| Trigger | Effect |
|---|---|
| TestCoverageCheckRequested | Rescans modules and adds missing test artifacts |
| BlueprintCompletedEvent | Backfills test cases from updated blueprint |
| RetryTestGenerationRequested | Reruns failed or incomplete generation process |

🧠 Agent Guarantees

  • βœ… All tests map to a handler or controller
  • βœ… Each test file includes traceable metadata
  • βœ… Test generation is repeatable, idempotent, and memory-enriched
  • βœ… The process is observable (OpenTelemetry spans, event hooks)
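On the observability point, spans in .NET are typically emitted through System.Diagnostics.ActivitySource, which OpenTelemetry instruments; a minimal sketch, where the source name and tag keys are illustrative assumptions rather than the factory's actual schema:

```csharp
using System.Diagnostics;

public static class TestGenTelemetry
{
    // Illustrative source name; OpenTelemetry exporters subscribe to this source.
    private static readonly ActivitySource Source = new("ConnectSoft.TestCaseGenerator");

    public static Activity? StartGeneration(string traceId, string handlerName)
    {
        var activity = Source.StartActivity("GenerateTestArtifacts");
        activity?.SetTag("factory.trace_id", traceId);   // assumed tag key
        activity?.SetTag("factory.handler", handlerName); // assumed tag key
        return activity; // dispose to end the span
    }
}
```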


βœ… Summary

The Test Case Generator Agent follows a highly structured, clean, and orchestrated workflow:

  • Begins with handler generation
  • Ends with validated, trace-bound, executable test artifacts
  • Bridges the gap between generation, validation, and release

This high-level flow is the foundation for the detailed skill execution pipeline, which we’ll cover next.


πŸ”„ Process Flow (Detailed Lifecycle)

The Test Case Generator Agent follows a precise multi-skill execution pipeline, each stage building on the previous one with structured inputs and outputs. This lifecycle is designed to support traceability, reusability, and modularity, and is orchestrated using Semantic Kernel.


🧬 Detailed Lifecycle Diagram

```mermaid
flowchart LR
    A["Trigger (HandlerScaffolded)"] --> B[Load Trace + Context]
    B --> C[Discover Targets]
    C --> D[Generate Unit Test Skeleton]
    D --> E[Generate Validator Tests]
    E --> F[Generate Integration Test Stubs]
    F --> G[Generate BDD .feature Files]
    G --> H[Create Test Metadata File]
    H --> I[Run Validation Pass]
    I --> J{Success?}
    J -- Yes --> K[Write to Memory + Notify QA Agent]
    J -- No --> L[Emit Retry Plan + Studio Review]
```

🧠 Phase-by-Phase Skill Execution

| Phase | Skill | Description |
|---|---|---|
| 1. Initialization | LoadTestContext | Gathers blueprint ID, trace ID, service name, handler and validator info |
| 2. Target Discovery | ScanHandlerStructure | Analyzes constructor, dependencies, DTO fields, port usage, roles |
| 3. Unit Test Generation | GenerateHandlerTests | Produces MSTest unit tests for success, validation failure, exceptions |
| 4. Validator Test Generation | GenerateValidatorTests | Builds tests to validate all RuleFor logic in validators |
| 5. Integration Test Scaffolding | GenerateControllerTests | Adds endpoint test for controller or adapter using mocked or in-memory service |
| 6. BDD Scenario Generation | GenerateFeatureFile | Emits .feature file using Gherkin syntax based on blueprint scenarios |
| 7. Metadata Emission | EmitTestMetadataFile | Links each test file to the handler/controller with trace alignment |
| 8. Validation | ValidateGeneratedTests | Ensures files follow structure, placeholder format, and meet coverage goals |
| 9. Retry Flow (if needed) | EmitRetrySuggestion, StudioInterventionRequested | Triggered if missing coverage or critical validation errors found |

πŸ“˜ Example Execution Trace

{
  "trace_id": "payments-2025-0117",
  "executed_skills": [
    "LoadTestContext",
    "ScanHandlerStructure",
    "GenerateHandlerTests",
    "GenerateValidatorTests",
    "GenerateControllerTests",
    "GenerateFeatureFile",
    "EmitTestMetadataFile",
    "ValidateGeneratedTests"
  ],
  "status": "Success",
  "missing_keys": 0,
  "coverage_percent": 100.0
}

πŸ” Conditional Logic

| Condition | Outcome |
|---|---|
| ❌ Validator not found | Skip validator test and log missing-validator-warning |
| ❌ Controller route uses [AllowAnonymous] | Skip auth-related tests |
| ❌ DTO is complex object graph | Enable deep traversal for scenario expansion |
| ❌ Test class already exists | Regenerate only missing methods (idempotent patch) |

🧠 Input/Output Per Skill

| Skill | Input | Output |
|---|---|---|
| GenerateHandlerTests | Handler path, DTO, trace ID | *.cs unit test |
| GenerateFeatureFile | Blueprint actions, port roles | *.feature + Steps.cs |
| EmitTestMetadataFile | File paths, roles, handlers | test-metadata.yaml |
| ValidateGeneratedTests | File content | coverage-report.json, status |

πŸ›  Re-entrant and Idempotent Design

  • Each skill can be rerun in isolation if:

    • A new validator is added
    • An auth policy changes
    • A test class is partially deleted
  • Skills are decorated with:

    • traceId, moduleId, handlerName, skillId
    • executionTime, retryCount, status

βœ… Summary

The agent’s lifecycle is:

  • πŸ”§ Skill-driven and composable
  • πŸ” Reusable and retryable
  • 🧠 Enriched with memory and blueprint awareness
  • πŸ“¦ Output-consistent and trace-aligned

This architecture ensures resilience, maintainability, and autonomy within the factory’s test-generation flow.


🧩 Skills and Kernel Functions

The Test Case Generator Agent uses a rich set of Semantic Kernel (SK) skills and composable functions to fulfill its responsibilities across unit, integration, and scenario-based test generation. Each skill is modular, traceable, retryable, and pluggable, and contributes to generating fully aligned test artifacts.


πŸ”§ Core Skills Summary

| Skill Name | Purpose |
|---|---|
| LoadTestContext | Initializes trace, blueprint, handler, DTO, role, and validator context |
| ScanHandlerStructure | Inspects code structure: constructor, method parameters, logic branches |
| GenerateHandlerTests | Creates MSTest test class with methods for success/failure/exception flows |
| GenerateValidatorTests | Generates CreateInvoiceValidatorTests.cs with positive/negative coverage |
| GenerateControllerTests | Builds endpoint test for REST/gRPC controller, with mocked dependencies |
| GenerateFeatureFile | Emits .feature file in Gherkin syntax with role-aware scenario coverage |
| GenerateStepDefinitions | Creates corresponding Steps.cs file mapped to .feature file |
| EmitTestMetadataFile | Outputs test-metadata.yaml for trace-based linking and Studio display |
| ValidateGeneratedTests | Verifies test syntax, key structure, naming, coverage, traceId compliance |
| EmitRetrySuggestion | Outputs actionable retry data if validation fails or coverage is low |

πŸ“˜ Kernel Function Signatures (Examples)

Task<TestContext> LoadTestContextAsync(HandlerInfo handlerInfo, TraceInfo trace)
Task<UnitTestFile> GenerateHandlerTestsAsync(TestContext context)
Task<FeatureFile> GenerateFeatureFileAsync(UseCaseBlueprint blueprint)
Task<ValidationResult> ValidateGeneratedTestsAsync(IEnumerable<TestFile> testFiles)
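Read together, these signatures suggest a straightforward composition. A sketch of how the pipeline might chain them, using the placeholder types from this spec and assuming UnitTestFile and FeatureFile derive from TestFile (an assumption, since the spec does not state the type hierarchy):

```csharp
public async Task<ValidationResult> GenerateAndValidateAsync(
    HandlerInfo handlerInfo, TraceInfo trace, UseCaseBlueprint blueprint)
{
    // Each step consumes the previous step's output; retry logic omitted.
    TestContext context = await LoadTestContextAsync(handlerInfo, trace);
    UnitTestFile unitTests = await GenerateHandlerTestsAsync(context);
    FeatureFile feature = await GenerateFeatureFileAsync(blueprint);
    return await ValidateGeneratedTestsAsync(new TestFile[] { unitTests, feature });
}
```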

πŸ” Reusability and Composability

Each skill is:

  • πŸ” Re-entrant: can be re-invoked with updated inputs (e.g. new validator)
  • πŸ’‘ AI-augmented: prompts for scenario expansion, edge case inference
  • 🧠 Memory-aware: leverages past patterns, glossary terms, and prior test structures
  • 🧱 Aligned to moduleId, handlerName, skillId, traceId

πŸ“„ Example: GenerateHandlerTests

  • Input:

    • handlerName: CreateInvoiceHandler
    • inputDto: CreateInvoiceInput
    • events: InvoiceCreated, InvoiceApproved
    • roles: FinanceManager
  • Output:

    • CreateInvoiceHandlerTests.cs with:
      • Handle_ShouldSucceed_WithValidInput
      • Handle_ShouldFail_WhenCustomerIdIsMissing
      • Handle_ShouldEmit_InvoiceCreated_Event
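For the event-emission case, the generated test plausibly verifies the publish call on a mocked event bus. A sketch using Moq, where IEventBus and its PublishAsync method are assumptions about the generated port shape rather than a documented contract:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class CreateInvoiceHandlerEventTests
{
    [TestMethod]
    public async Task Handle_ShouldEmit_InvoiceCreated_Event()
    {
        // IEventBus / PublishAsync are assumed port names for this sketch.
        var eventBus = new Mock<IEventBus>();
        var handler = new CreateInvoiceHandler(eventBus.Object /* other fakes elided */);

        var input = new CreateInvoiceInput { TotalAmount = 100, CustomerId = Guid.NewGuid() };
        await handler.Handle(input);

        eventBus.Verify(b => b.PublishAsync(It.IsAny<InvoiceCreated>()), Times.Once());
    }
}
```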

πŸ“Š Skill Metadata Attached to Each Output

skill_id: GenerateHandlerTests
trace_id: invoice-2025-0143
module_id: InvoiceService
handler: CreateInvoiceHandler
output_path: Tests/InvoiceService.UnitTests/UseCases/CreateInvoiceHandlerTests.cs
status: success
generated_at: 2025-05-17T16:45:00Z

🧠 Specialized Sub-Skills

| Sub-Skill | Function |
|---|---|
| SuggestTestNames | Uses prompt to name methods based on handler behavior |
| DetectMissingNegativeCases | Validates validator logic vs. missing test cases |
| SuggestGivenWhenThenSteps | Generates BDD step templates using OpenAI |
| InferErrorScenariosFromValidator | Maps FluentValidation rules into test cases |
| CreateMockConfigurationStub | Prepares fake services for integration test harness |

πŸ“Œ Execution Control

All skills support:

  • dry_run mode: Emit plan, don’t write files
  • overwrite mode: Rewrite existing test if needed
  • diff_only mode: Emit suggestions without changing disk

βœ… Summary

The Test Case Generator Agent is powered by a robust, AI-augmented Semantic Kernel skill system that enables:

  • Modular and intelligent test generation
  • Coverage across unit, validator, controller, and BDD layers
  • Full traceability, observability, and retry control

These skills serve as the execution engine behind the entire test generation lifecycle.


πŸ§ͺ Supported Test Types

The Test Case Generator Agent produces a complete spectrum of test types, covering functional correctness, behavioral validation, integration flow, and user intent β€” all trace-aligned and structured for automation and review.

Each test type serves a different layer of the architecture and supports different personas (developers, QA agents, reviewers).


🧱 Test Categories by Layer

| Layer | Test Type | Description |
|---|---|---|
| Domain/Application | βœ… Unit Tests | Validate IHandle<T> logic using in-memory fakes or mocks |
| Application | βœ… Validator Tests | Covers FluentValidation rules: nulls, ranges, formats |
| Service/API | βœ… Integration Tests | Simulate HTTP/gRPC endpoint calls and verify outcomes |
| UX & Use Case Level | βœ… BDD / Scenario Tests | .feature files with Given-When-Then descriptions |
| Security / Auth Layer | βœ… Role-Based Tests | Validates correct access per role and controller policy |
| Error Conditions | βœ… Edge/Negative Tests | Covers invalid input, missing required fields, business rule violations |

πŸ“¦ Output Test Artifacts by Type

| Type | File | Description |
|---|---|---|
| Unit Test | CreateInvoiceHandlerTests.cs | Tests Handle() logic using mocked repository and event bus |
| Validator Test | CreateInvoiceValidatorTests.cs | Confirms that RuleFor(x => x.Amount) works for edge cases |
| Controller Integration Test | InvoiceControllerTests.cs | Uses WebApplicationFactory or equivalent |
| BDD Feature File | create_invoice.feature | Human-readable test scenarios |
| Step Definitions | CreateInvoiceSteps.cs | Links Gherkin steps to assertions |
| Auth Test | Inline in controller test or BDD step | Verifies 401 and 403 outcomes for unauthorized users |

πŸ§ͺ Example: Unit Test Coverage

[TestMethod]
public async Task Handle_ShouldFail_WhenAmountIsZero()
{
    var input = new CreateInvoiceInput { Amount = 0 };
    var result = await handler.Handle(input);
    Assert.IsFalse(result.IsSuccess);
    Assert.AreEqual("Amount must be greater than zero", result.Error.Message);
}

πŸ“˜ Example: Role-Based Access Test (Integration)

[TestMethod]
public async Task Post_ShouldReturn403_WhenUserIsNotFinanceManager()
{
    var client = factory.CreateClientWithRole("Guest");
    var response = await client.PostAsJsonAsync("/api/invoice", input);
    Assert.AreEqual(HttpStatusCode.Forbidden, response.StatusCode);
}

🧾 Example: BDD Scenario

Feature: Invoice creation

Scenario: Valid input
  Given a finance manager is authenticated
  When they create an invoice with amount 100
  Then the invoice is persisted
  And an "InvoiceCreated" event is emitted

Scenario: Unauthorized user
  Given a guest user
  When they create an invoice
  Then access is denied

🧠 Traceability per Test

All tests contain:

  • traceId (factory-level trace context)
  • handlerName, moduleId, blueprintId
  • rolesTested, validationRulesCovered
  • scenarioTitle (for BDD tests)

πŸ“ˆ Quality Enforcement

The agent ensures that:

  • All validators β†’ 100% rule coverage
  • All handlers β†’ at least one success and one failure test
  • All endpoints β†’ tested for happy path and auth errors
  • All .feature files β†’ linked to actual logic handlers

βœ… Summary

The Test Case Generator Agent produces a complete stack of tests, including:

  • βš™οΈ Unit
  • βœ… Validator
  • 🌐 Integration
  • πŸ§‘β€πŸ« BDD
  • πŸ” Authorization
  • ❌ Negative Scenarios

This enables the ConnectSoft platform to deliver fully validated, CI-ready, and QA-compatible microservices at scale.


πŸ“˜ Blueprint-Aware Test Generation

In ConnectSoft, a Blueprint is a structured, traceable, and semantically rich definition of:

  • Use case logic
  • Input/output contracts
  • Roles and authorization policies
  • Events emitted
  • Business rules and success/failure paths
  • Expected observability and behavior metadata

The Test Case Generator Agent uses blueprints as a primary alignment reference to ensure generated tests directly reflect what was declared at the planning and architecture stage.


🧩 How Blueprint Awareness Works

| Component | Usage in Test Generation |
|---|---|
| blueprint_id | Added to all generated test metadata and filenames |
| use_case_name | Determines test class and .feature file naming |
| input_dto | Used to generate test input payloads |
| roles_allowed | Triggers authorization tests and BDD scenarios |
| emits_events | Verifies event emission in unit tests or steps |
| expected_failures | Expands test set to include business rule violations |
| observability.expectedSpans | Confirms presence of tracing/assertion calls |

πŸ“„ Example Blueprint Input

blueprint_id: usecase-9241
use_case: CreateInvoice
input_dto: CreateInvoiceInput
roles_allowed: [FinanceManager]
emits_events: [InvoiceCreated]
expected_failures:
  - MissingCustomerId
  - ZeroAmount

β†’ From this, the agent will:

  • πŸ”¨ Generate CreateInvoiceHandlerTests.cs
  • ✏️ Write 3 test methods:
    • success
    • missing customer ID
    • zero amount
  • πŸ›‘ Generate 403 role test for unauthorized access
  • πŸ“˜ Emit create_invoice.feature with 2 scenarios
  • πŸ§ͺ Assert that InvoiceCreated event is emitted

🎯 Benefits of Blueprint Alignment

| Without Blueprint Awareness | With Blueprint Awareness |
|---|---|
| ❌ Tests may miss edge cases | βœ… Edge cases generated from blueprint's expected_failures |
| ❌ No role coverage enforced | βœ… roles_allowed triggers role tests |
| ❌ Event emission not verified | βœ… Unit tests assert correct domain events |
| ❌ Manual BDD step writing needed | βœ… .feature steps auto-aligned to blueprint |

🧠 Blueprint Enriched Test Tags

Each test class and .feature file includes trace tags.

Unit test class:

[TraceId("invoice-2025-0143")]
[BlueprintId("usecase-9241")]
[RolesCovered("FinanceManager")]

.feature file:

@blueprint_id:usecase-9241
@trace_id:invoice-2025-0143
Feature: Invoice creation
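Since [TraceId], [BlueprintId], and [RolesCovered] are custom markers rather than built-in MSTest attributes, a minimal sketch of how two of them could be declared (the real factory implementations may carry additional members):

```csharp
using System;

// Minimal declarations for the trace-tagging attributes used above.
[AttributeUsage(AttributeTargets.Class)]
public sealed class TraceIdAttribute : Attribute
{
    public string Value { get; }
    public TraceIdAttribute(string value) => Value = value;
}

[AttributeUsage(AttributeTargets.Class)]
public sealed class BlueprintIdAttribute : Attribute
{
    public string Value { get; }
    public BlueprintIdAttribute(string value) => Value = value;
}
```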

πŸ“˜ Sample Test Plan Emitted

handler: CreateInvoiceHandler
trace_id: invoice-2025-0143
blueprint_id: usecase-9241
test_cases:
  - type: unit
    scenario: success
  - type: unit
    scenario: MissingCustomerId
  - type: unit
    scenario: ZeroAmount
  - type: auth
    role: Guest
    outcome: Forbidden
  - type: event
    verifies: InvoiceCreated
  - type: bdd
    scenario: Valid invoice creation

βœ… Summary

The Test Case Generator Agent doesn’t guess β€” it uses blueprint-defined intent to:

  • πŸ“˜ Drive test naming and trace alignment
  • πŸ” Expand case coverage to match declared failures
  • πŸ” Enforce security test scenarios
  • πŸ§ͺ Validate side effects like events or logs
  • πŸ“œ Produce BDD aligned with user stories

This ensures traceability, accountability, and architectural compliance.


🧩 Edition-Specific Test Scaffolding

In ConnectSoft, an Edition is a customization layer that allows different feature sets, roles, rules, and behaviors for the same base application, tailored for:

  • πŸ₯ Different customers or verticals (e.g., Vet, Dental, Legal)
  • 🧾 Licensing tiers (e.g., Free, Pro, Enterprise)
  • 🌍 Localizations and region-specific behavior

The Test Case Generator Agent respects edition definitions and generates test scaffolds that reflect edition-specific behavior variations.


πŸ“˜ Example Use Case

Base Behavior:

use_case: CreateInvoice
roles_allowed: [FinanceManager]

Edition: lite

edition_id: lite
overrides:
  roles_allowed: [Accountant]
  emits_events: []
  validators:
    skip: [TaxRateValidation]

Edition: enterprise

edition_id: enterprise
overrides:
  roles_allowed: [FinanceManager, CFO]
  emits_events: [InvoiceCreated, InvoiceAuditLogged]
  validators:
    additional: [CurrencyCodeValidator]

πŸ” Agent Responsibilities for Editions

| Edition-Aware Output | Description |
|---|---|
| Role Tests | Adds tests per edition-specific access role |
| Validator Tests | Includes or skips rules depending on edition config |
| Event Assertion Tests | Adjusts for emitted events per edition |
| .feature Tags | Adds @edition tags to .feature scenarios |
| Conditional Logic Branches | Flags edition-specific paths using conditional asserts |
| Edition Metadata Output | Adds edition_id to test metadata and trace logs |

πŸ“„ Generated Files for Editions

Tests/
β”œβ”€β”€ InvoiceService.UnitTests/
β”‚   └── Editions/
β”‚       β”œβ”€β”€ CreateInvoiceHandlerTests.lite.cs
β”‚       └── CreateInvoiceHandlerTests.enterprise.cs
β”œβ”€β”€ InvoiceService.Specs/
β”‚   └── Features/
β”‚       β”œβ”€β”€ create_invoice.lite.feature
β”‚       └── create_invoice.enterprise.feature
└── test-metadata.yaml

πŸ“˜ BDD Example

@edition:enterprise
Feature: Enterprise invoice creation

Scenario: CFO can create an invoice
  Given a CFO is authenticated
  When they submit an invoice
  Then the invoice is created
  And an "InvoiceAuditLogged" event is emitted

🧠 Test Metadata for Editions

tests:
  - handler: CreateInvoiceHandler
    edition: lite
    roles_tested: [Accountant]
    emitted_events: []
    validators_skipped: [TaxRateValidation]

  - handler: CreateInvoiceHandler
    edition: enterprise
    roles_tested: [FinanceManager, CFO]
    emitted_events: [InvoiceCreated, InvoiceAuditLogged]
    validators_added: [CurrencyCodeValidator]

πŸ” Edition Testing Scenarios

| Scenario | Example |
|---|---|
| βœ… Auth behavior changes | Accountant allowed in lite, CFO required in enterprise |
| βœ… Feature toggle changes | Logging enabled in enterprise only |
| βœ… Validation differences | Tax rules skipped in lite, strict in enterprise |

πŸ“¦ Studio and CI/CD Support

  • Editions are visually grouped in Studio dashboards
  • CI/CD pipelines can selectively run edition-specific tests
  • Coverage reports are edition-scoped
  • QA Engineers can drill down by editionId for test review

βœ… Summary

Edition-specific scaffolding ensures the Test Case Generator Agent supports per-edition compliance and coverage, including:

  • πŸ§ͺ Feature toggling
  • πŸ” Role redefinition
  • 🧠 Custom validation logic
  • πŸ“ˆ Scenario expansion and trace separation

This ensures ConnectSoft SaaS outputs are test-covered across all edition permutations.


⚠️ Negative & Edge Case Generation

Negative and edge cases are the most frequent source of:

  • 🧨 Runtime bugs
  • πŸ” Security vulnerabilities
  • πŸ“‰ Business logic failures
  • ❌ Regression gaps in CI pipelines

The Test Case Generator Agent plays a critical role in preventing these defects early by systematically generating structured, validated negative and boundary tests based on:

  • DTO structures
  • FluentValidation rules
  • Business rules from the blueprint
  • Authorization scenarios
  • Null, zero, empty, or malformed input conditions

🧠 How the Agent Infers Negative Cases

| Source | Inferred Edge Cases |
|---|---|
| RuleFor(x => x.TotalAmount).GreaterThan(0) | TotalAmount = 0, TotalAmount = -1 |
| RuleFor(x => x.CustomerId).NotEmpty() | CustomerId = Guid.Empty |
| RuleFor(x => x.Currency).Length(3) | Currency = "", "US", "USDX" |
| roles_allowed: [FinanceManager] | Try with Guest, Unauthenticated, Manager |
| required_fields: [date, location] | Missing these fields in input |
| enum field: PaymentStatus | Invalid string/number input ("Invalid", 999) |

🧾 Example: Validator-Based Negative Test

Validation Rule:

RuleFor(x => x.TotalAmount).GreaterThan(0);

Generated Test:

[TestMethod]
public void Validate_ShouldFail_WhenAmountIsZero()
{
    var input = new CreateInvoiceInput { TotalAmount = 0 };
    var result = validator.Validate(input);
    Assert.IsFalse(result.IsValid);
    Assert.AreEqual("TotalAmount must be greater than 0", result.Errors[0].ErrorMessage);
}

πŸ“˜ Example: Auth Rejection

[TestMethod]
public async Task Post_ShouldReturn403_WhenUserIsUnauthorized()
{
    var client = factory.CreateClientWithRole("Guest");
    var response = await client.PostAsJsonAsync("/api/invoice", validPayload);
    Assert.AreEqual(HttpStatusCode.Forbidden, response.StatusCode);
}

πŸ” Other Edge Case Types

| Category | Test Examples |
|---|---|
| Null Fields | DTO field = null |
| Boundary Values | 0, -1, 1, int.MaxValue |
| Missing Required | JSON input missing required field |
| Invalid Enum | "InvalidValue" for enum-type fields |
| Unauthorized Roles | Request by a user lacking required role |
| Incorrect Formats | Dates in wrong format, currency codes too long |
| Conflict Scenarios | e.g., trying to create an invoice that already exists |

πŸ“ Output Summary

Generated test methods are named and tagged clearly:

[TestMethod]
public void Validate_ShouldFail_WhenCustomerIdIsMissing() { ... }

[TestMethod]
public void Handle_ShouldFail_WhenAmountIsZero() { ... }

[TestMethod]
public async Task Post_ShouldReturn400_WhenPayloadIsMalformed() { ... }

Included in:

  • βœ… Unit tests for handler and validator
  • βœ… Integration tests for controller input
  • βœ… .feature files under Scenario: Invalid input
  • βœ… Test metadata: test-metadata.yaml

πŸ“ˆ QA/CI/CD Impact

| Output | Use |
|---|---|
| negative_cases: 3 | Appears in test metadata |
| test_coverage.json | Tracks % of rules with negative test |
| CI pipelines | Fail if critical validator has no edge case test |

βœ… Summary

The Test Case Generator Agent guarantees robust error handling coverage by:

  • 🧠 Inferring negative conditions from DTOs and validation rules
  • πŸ“˜ Generating matching tests and .feature scenarios
  • πŸ“¦ Structuring output for CI/CD, Studio, QA Agents
  • πŸ”„ Supporting retry and memory for missed cases

This ensures every ConnectSoft output is not just correct under ideal conditions β€” but resilient under edge and failure paths.


🀝 Collaborative Role in the Agent Ecosystem

The Test Case Generator Agent is a keystone agent in the Engineering Cluster, tightly integrated with other agents responsible for:

  • πŸ—οΈ Generating features and services
  • πŸ§ͺ Validating correctness
  • πŸ› Investigating and resolving bugs
  • πŸ“€ Preparing pull requests for review
  • βœ… Ensuring test coverage and regression safety

Its outputs are consumed, validated, enriched, and extended by these other agents.


πŸ” Agent Collaboration Matrix

| Collaborating Agent | Interaction |
|---|---|
| Backend Developer Agent | Provides handlers, validators, controller endpoints that trigger test generation |
| Frontend Developer Agent | May consume BDD .feature files or test endpoints |
| QA Engineer Agent | Uses test metadata, .feature files, and test coverage summaries for manual + automated testing |
| Bug Resolver Agent | Uses test artifacts to reproduce and fix regressions; updates test metadata with bug references |
| Pull Request Creator Agent | Annotates PRs with test coverage reports, new test file references, and status of .feature scenarios |

πŸ” Collaboration Flows

πŸ§ͺ QA Engineer Agent

```mermaid
sequenceDiagram
    participant TestGen
    participant QAAgent

    TestGen->>QAAgent: Emit test-metadata.yaml
    TestGen->>QAAgent: Emit .feature file
    QAAgent->>TestGen: Request retry or correction (if gaps found)
    QAAgent-->>Studio: Display test grid and coverage
```

Used for:

  • πŸ“„ QA scenario mapping
  • πŸ“‹ Regression checklist auto-generation
  • πŸ“Š Visual test coverage dashboards in Studio

πŸ› Bug Resolver Agent

```mermaid
sequenceDiagram
    participant TestGen
    participant BugResolver

    BugResolver->>TestGen: Lookup test for handler X
    TestGen->>Memory: Fetch test trace, result
    BugResolver->>TestGen: Extend test for bug 4287
    TestGen->>Memory: Store corrected test and bug ref
```

Used for:

  • Reproducing failing logic
  • Adding permanent regression tests
  • Logging bug_id tags in test metadata

πŸ“€ Pull Request Creator Agent

| Contribution | Example |
|---|---|
| Test Summary | β€œβœ”οΈ 3 new tests generated for CreateInvoiceHandler” |
| Coverage Annotation | β€œπŸ“ˆ 92.4% test coverage for InvoiceService” |
| Studio Link | Direct link to trace ID and .feature preview |
| Inline Comments | β€œπŸ” Missing role test for Guest user in CancelInvoiceHandlerTests.cs” |

🧱 File Contracts Used by Other Agents

| File | Used By | Purpose |
|---|---|---|
| test-metadata.yaml | QA Agent, PR Agent | Trace β†’ Test mapping |
| .feature | QA Agent | Manual test scenario export |
| *.cs test files | Bug Resolver Agent | Extend or debug logic |
| test-coverage.json | PR Agent, Studio | Visual dashboard |
| execution-metadata.json | All | Retry and diagnostics |

🧩 Multi-Agent Handshake Example

β†’ Backend Developer finishes CreateInvoiceHandler
β†’ Triggers TestCaseGeneratorAgent
β†’ QA Engineer Agent uses generated `.feature`
β†’ Bug Resolver uses test during bug fix
β†’ PR Agent lists tests in pull request comments

🧠 Tagging and Traceability

All outputs include:

agent_id: test-case-generator-agent
trace_id: invoice-2025-0143
blueprint_id: usecase-9241
bug_id: null
used_by: [qa-engineer-agent, bug-resolver-agent, pull-request-creator-agent]

Studio and memory system use this for indexing, linking, and trace replay.


βœ… Summary

The Test Case Generator Agent is not a siloed tool β€” it is a cross-agent, cross-discipline quality orchestrator, directly supporting:

  • πŸ§ͺ QA testing
  • πŸ› Bug tracing
  • βœ… CI/CD validation
  • πŸ“€ Pull request intelligence

This guarantees full test lifecycle collaboration and traceability from generation to deployment.


πŸ” Retry and Correction Loops

In a dynamic, multi-agent pipeline, initial test generation may result in:

  • ❌ Missing scenarios (due to late blueprint updates)
  • ❌ Structural inconsistencies (e.g. renamed DTOs, missing validators)
  • ❌ Coverage gaps or low-quality assertions
  • ❌ Edition-specific overrides arriving post-generation

The Test Case Generator Agent includes robust retry and correction logic, ensuring that test artifacts remain accurate, traceable, and complete over time β€” even if other parts of the system evolve.


🧬 Correction & Retry Lifecycle

```mermaid
sequenceDiagram
    participant TestAgent
    participant ValidatorAgent
    participant Studio
    participant Memory

    ValidatorAgent->>TestAgent: Emit ValidationFailed
    TestAgent->>Memory: Lookup last test trace
    TestAgent->>TestAgent: Retry test skill with patch mode
    alt Retry fails
        TestAgent->>Studio: Emit RetryRequested + HumanCorrectionNeeded
    else Retry succeeds
        TestAgent->>Memory: Store patched test
    end
```

🧠 Retry Scenarios

| Scenario | Correction |
|---|---|
| 🧱 DTO field renamed | Re-scan and update method parameters, test input |
| πŸ“‰ Test missing for validator | Generate missing ValidatorTests.cs |
| πŸ” Handler updated with new failure case | Add new test method with same traceId |
| πŸ” New role added to controller | Generate new authorization test |
| 🧾 Feature file desync | Rebuild .feature from blueprint using dry_run mode |

πŸ“˜ Retry Modes

| Mode | Behavior |
|---|---|
| Auto-retry | Detects coverage issues and regenerates test artifacts without human input |
| Patch mode | Adds missing test methods to existing test classes (does not delete) |
| Human-in-the-loop | Studio emits retry event with curated correction details |
| Edition retry | Reprocesses test generation for newly added edition variant |
| CI retry | Re-triggered via TestCoverageCheckFailed in pipeline |

🧩 Artifact Metadata with Retry Info

trace_id: payments-2025-0183
handler: CancelInvoiceHandler
retry_count: 2
last_retry_reason: "Missing role test for 'CFO'"
correction_status: success
corrected_by: test-case-generator-agent
original_generated_at: 2025-05-14T12:00:00Z
last_updated_at: 2025-05-17T15:34:00Z

🧠 Memory-Driven Retry Logic

Agent uses memory to:

  • Locate *.cs test files by handler/module/edition
  • Identify missing test cases based on previous metadata
  • Compare validator rules to test method coverage
  • Merge patch logic into existing file without overwriting human edits
  • Emit retry status and observability logs
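A naive sketch of that patch-merge step, shown purely as an illustration; a production implementation would operate on the Roslyn syntax tree rather than raw strings:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

public static class TestFilePatcher
{
    // "Patch mode" illustration: append only the test methods whose names are
    // not already present, leaving existing (possibly human-edited) methods alone.
    public static void AppendMissingTests(string testFilePath, IDictionary<string, string> methodsByName)
    {
        var source = File.ReadAllText(testFilePath);
        var missingBodies = methodsByName
            .Where(m => !source.Contains(m.Key)) // key = method name, value = method body
            .Select(m => m.Value)
            .ToList();
        if (missingBodies.Count == 0) return;    // nothing to patch; file untouched

        var closingBrace = source.LastIndexOf('}'); // end of the test class/file
        var patched = source.Insert(closingBrace, string.Join("\n\n", missingBodies) + "\n");
        File.WriteAllText(testFilePath, patched);
    }
}
```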

πŸ” Retry Observability

| Metric | Purpose |
|---|---|
| test.retry.count | Number of automatic retries triggered |
| test.retry.failure | Count of retries needing manual intervention |
| test.retry.success_rate | % of automated retries that resolved the issue |
| test.patch.applied | Tracks number of test files corrected post-generation |

πŸ’¬ Studio Correction Example

requested_by: qa-engineer-agent
reason: "Missing negative test for 'ZeroAmount'"
action: Add test to CreateInvoiceHandlerTests.cs
status: success
resolved_by: test-case-generator-agent

β†’ Displayed in Studio test preview as an auto-applied patch with trace reference


βœ… Summary

The Test Case Generator Agent provides resilient, intelligent, and trace-aligned correction flows through:

  • πŸ” Smart retries for handlers, controllers, editions, and features
  • πŸ“˜ Patch mode for targeted augmentation of existing test files
  • 🧠 Studio + human hooks for QA-guided corrections
  • πŸ“Š Full observability and retry tracking

This guarantees that tests evolve in sync with the codebase and blueprint lifecycle.


πŸ§ͺ Validation, Linting, and Structural Consistency

After generating tests, the agent must ensure that the output is:

  • βœ… Executable (compiles and passes syntax checks)
  • βœ… Traceable (linked to handler, blueprint, and trace ID)
  • βœ… Complete (covers required success, failure, edge, and role paths)
  • βœ… Consistent (naming, structure, conventions match platform standards)
  • 🧠 Aligned with factory trace and edition logic

Without this phase, silent defects (e.g., missing tests, unlinked trace, invalid Gherkin syntax) could propagate to QA, CI, or PR review stages.


βœ… Validation Scope

| Validation Area | What It Checks |
|---|---|
| Traceability | Each file includes traceId, blueprintId, handlerName |
| Test Class Naming | Follows pattern: [UseCase]HandlerTests.cs |
| Method Naming | PascalCase, starts with verb (Should_, Handle_, Validate_) |
| Coverage Structure | At least one success, one failure, one edge, and one role-based test per handler |
| Validator Alignment | All RuleFor entries have a matching test case |
| .feature Syntax | Valid Gherkin structure and step completeness |
| Empty or Placeholder Tests | Flags generated tests that lack assertions |
| Conflicts with Existing Files | Prevents overwriting manually added test methods (unless allowed) |
| Retry Compliance | Ensures correction patches resolved the original issue |

🧬 Example: Validation Report Output

{
  "trace_id": "payments-2025-0183",
  "handler": "CancelInvoiceHandler",
  "validation": {
    "unit_test_file": "CancelInvoiceHandlerTests.cs",
    "status": "Passed",
    "methods": 5,
    "missing_cases": [],
    "structure": "Valid",
    "trace_annotations": true,
    "assertions_per_method": {
      "min": 1,
      "avg": 2.4
    }
  },
  "feature_file": {
    "status": "Passed",
    "syntax": "Valid",
    "scenarios": 3
  }
}

πŸ“„ Gherkin Validation Example

βœ… Valid Gherkin

Scenario: Create a valid invoice
  Given a finance manager is authenticated
  When they submit an invoice with amount 100
  Then the invoice is persisted

❌ Invalid Gherkin

Scenario Create a valid invoice
Given finance manager
When submit
Then success

β†’ Rejected with error: InvalidScenarioFormat: Missing keyword structure
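As a simplified illustration of that check, a keyword-structure validation might look like the following; a real implementation would use a proper Gherkin parser rather than string matching:

```csharp
using System.Linq;

public static class GherkinStructureCheck
{
    // Highly simplified stand-in for the agent's .feature validation: it only
    // verifies that the "Scenario:" keyword is present and that Given/When/Then
    // steps each appear at least once.
    public static bool HasKeywordStructure(string featureText)
    {
        var lines = featureText.Split('\n').Select(l => l.Trim()).ToArray();
        bool hasScenario = lines.Any(l => l.StartsWith("Scenario:"));
        bool hasGiven = lines.Any(l => l.StartsWith("Given "));
        bool hasWhen = lines.Any(l => l.StartsWith("When "));
        bool hasThen = lines.Any(l => l.StartsWith("Then "));
        return hasScenario && hasGiven && hasWhen && hasThen;
    }
}
```

Against the invalid example above, this check fails on the first line ("Scenario Create a valid invoice" is missing the colon), which corresponds to the InvalidScenarioFormat rejection.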


🧰 Linting Checks

| Area | Example |
|---|---|
| Test Class Naming | ❌ InvoiceTests.cs β†’ βœ… CreateInvoiceHandlerTests.cs |
| Method Signature | ❌ Test1() β†’ βœ… Handle_ShouldReturnSuccess_WhenValidInputProvided() |
| FluentAssertions | Enforce result.Should().BeSuccess(); vs raw Assert.IsTrue() |
| Redundant Code | Detect unused mocks or var input = null; |
| Ordering | Arrange β†’ Act β†’ Assert order enforced in unit tests |
| Snapshot Cleanliness | Markdown snapshot must not exceed X length or miss link to feature/handler |

πŸ“¦ Validation Artifacts Produced

| File | Description |
|---|---|
| validation-summary.json | Overall test validation status per module and handler |
| test-lint-report.txt | Developer-readable style errors |
| coverage-check.yaml | Per trace ID breakdown of test types present |
| StudioFeedback.json | Status markers to render green/yellow/red indicators in Studio dashboards |

πŸ“Š Metrics for Validation Health

| Metric | Description |
|---|---|
| test.validation.pass_rate | % of handlers with valid tests |
| test.structure.errors.count | Count of class/method structure violations |
| feature.syntax.failures | Number of .feature files rejected for Gherkin errors |
| assertion.per.method.avg | Should be β‰₯ 1.2 on average |
| trace.missing.rate | Should be < 5% |

βœ… Summary

The Test Case Generator Agent performs deep structural, semantic, and trace validation of all test artifacts, ensuring:

  • πŸ’Ύ Executable test code
  • 🧠 Traceable test assets
  • πŸ“˜ Readable and reusable .feature files
  • πŸ“Š Clean reporting for CI, Studio, and QA workflows

With validation and linting complete, test outputs are safe, predictable, and production-grade.


πŸ“Ž Test Metadata, Tags, and Traceability

In ConnectSoft’s AI Software Factory, traceability is foundational to maintaining:

  • πŸ’‘ End-to-end visibility from blueprint to behavior
  • βœ… QA validation workflows
  • πŸ“€ PR reviews with auto-linked tests
  • πŸ“Š Coverage dashboards in Studio
  • πŸ” Retry and correction mechanisms
  • 🧠 Reusability of previously generated test assets

The Test Case Generator Agent ensures every test artifact is deeply trace-tagged and indexed across memory, file system, and CI layers.


🧩 Core Metadata Attached to All Tests

| Tag | Example |
|---|---|
| trace_id | invoice-2025-0143 |
| blueprint_id | usecase-9241 |
| handler_name | CreateInvoiceHandler |
| dto_name | CreateInvoiceInput |
| module_id | InvoiceService |
| agent_id | test-case-generator-agent |
| edition_id | enterprise (if applicable) |
| roles_tested | [FinanceManager, Guest] |
| test_type | unit, validator, integration, bdd |
| feature_file | create_invoice.feature |
| generated_at | ISO timestamp |
| retry_count | 0 (or more, if regenerated) |
| corrected_by | qa-engineer-agent (if manually patched) |

🧾 Example: test-metadata.yaml

trace_id: invoice-2025-0143
blueprint_id: usecase-9241
module: InvoiceService
edition: enterprise
generated_by: test-case-generator-agent
test_cases:
  - handler: CreateInvoiceHandler
    test_class: CreateInvoiceHandlerTests.cs
    test_type: unit
    roles_tested: [FinanceManager]
    coverage:
      success: true
      edge_cases: 2
      auth: true
      validator: true
  - controller: InvoiceController
    test_class: InvoiceControllerTests.cs
    test_type: integration
  - bdd_file: create_invoice.feature
    test_type: bdd
    scenarios: 3

🧬 Metadata Injection in Test Files

βœ… Unit Test Class

[TestClass]
[TraceId("invoice-2025-0143")]
[BlueprintId("usecase-9241")]
[Edition("enterprise")]
public class CreateInvoiceHandlerTests { ... }

βœ… BDD .feature File

@trace_id:invoice-2025-0143
@blueprint_id:usecase-9241
@edition:enterprise
Feature: Create Invoice

πŸ“ Where Metadata Is Stored

| File | Purpose |
|---|---|
| test-metadata.yaml | Primary index for all test artifacts in the module |
| execution-metadata.json | Semantic Kernel trace metadata with skill execution and retry info |
| StudioPreview.md | Human-readable test summaries with links and tags |
| coverage-check.yaml | Simplified file for CI coverage validation |
| observability-events.jsonl | Span logs tagged with handler and blueprint references |

🧠 Memory & Searchability

The metadata is indexed into:

  • 🧠 ConnectSoft Memory DB
  • πŸ“˜ Studio Trace View
  • πŸ“Š Test coverage dashboards
  • πŸ“€ Pull Request annotations
  • πŸ§ͺ Bug reproduction lookup (for Bug Resolver Agent)

β†’ Enabling queries like:

"Find all tests for blueprint `usecase-9241` in edition `lite` with missing auth coverage"

πŸ“€ Output Contracts with Other Agents

| Agent | Uses Tags For |
|---|---|
| QA Engineer Agent | Filtering .feature files by role, handler, edition |
| Bug Resolver Agent | Finding affected tests by trace ID or blueprint |
| Pull Request Creator Agent | Annotating PR with trace-aligned test artifacts |
| Studio | Rendering trace β†’ handler β†’ test β†’ scenario relationships |
| Test Coverage Validator | Scanning missing test types by test_type and handler_name |

βœ… Summary

The Test Case Generator Agent provides rich, trace-anchored metadata for every artifact, enabling:

  • 🎯 Full test traceability
  • πŸ” Retry and update mapping
  • πŸ“Š Coverage validation and QA inspection
  • πŸ“ Structured search and reuse
  • 🀝 Multi-agent interoperability and Studio UI integration

This metadata fabric is the backbone of ConnectSoft’s observability and quality assurance layer.


🧠 Skills for Scenario Simulation and Prompting

Not all test cases can be extracted statically from code. The Test Case Generator Agent uses Semantic Kernel + OpenAI-powered skills to:

  • πŸ’‘ Simulate user flows and business rule paths
  • πŸ“˜ Expand functional and BDD scenarios based on intent
  • πŸ§ͺ Infer edge conditions not explicitly described
  • πŸ” Suggest test case names and outcomes
  • ✍️ Generate realistic test inputs and response expectations

These skills allow the agent to think like a human QA engineer, covering both happy path and unhappy path scenarios β€” with trace alignment.


🧩 Key Prompt-Based Scenario Skills

| Skill Name | Purpose |
| --- | --- |
| SimulateUseCaseBehavior | Analyze blueprint, DTO, and handler to generate natural language test paths |
| ExpandScenarioVariants | Convert a simple use case into multiple functional variations |
| GenerateGivenWhenThen | Emit Gherkin steps from blueprint + business logic |
| SuggestTestMethodNames | Turn scenarios into readable, expressive test method names |
| ProposeEdgeCasesFromRules | Given validation rules, suggest test inputs for boundary conditions |
| InferBusinessRuleAssertions | Suggest expected outcomes based on domain rules |
| GenerateInputSamplesFromDTO | Emit example JSON or C# input payloads |
| PromptTestDescriptions | Add markdown-based summaries or comments to test classes |
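
How these skills are wired up is not prescribed here. A minimal sketch of invoking one such prompt skill through the Semantic Kernel .NET API, assuming an OpenAI connector; the model ID, key handling, and prompt body are placeholders, not the factory's actual configuration:

```csharp
using System;
using Microsoft.SemanticKernel;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "gpt-4o",  // placeholder model
        apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!)
    .Build();

// Hypothetical invocation of a scenario-simulation skill; the prompt body
// mirrors the template example shown below.
var result = await kernel.InvokePromptAsync(
    "You are generating test cases for the use case: {{$useCase}}. " +
    "Given its DTO and business rules, emit test method names, Gherkin scenarios, and edge conditions.",
    new KernelArguments { ["useCase"] = "CreateInvoice" });

Console.WriteLine(result);
```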

πŸ’¬ Prompt Template Example

You are generating test cases for the use case: CreateInvoice.

DTO:
- TotalAmount: decimal, required, > 0
- Currency: string, 3 letters
- CustomerId: Guid, required

Business Rule:
- Invoice must emit `InvoiceCreated` event on success
- Unauthorized users should receive HTTP 403

Generate:
- Test method names
- Gherkin scenarios
- Edge conditions

πŸ§ͺ Example Prompt Result

Suggested Unit Test Methods

  • Handle_ShouldReturnSuccess_WhenInputIsValid
  • Handle_ShouldFail_WhenAmountIsZero
  • Handle_ShouldFail_WhenCustomerIdIsMissing

Suggested .feature Scenarios

Scenario: Submit a valid invoice
  Given a finance manager
  When they submit an invoice with amount 100 and currency USD
  Then an invoice is saved and event "InvoiceCreated" is emitted

Scenario: Missing amount
  Given a finance manager
  When they submit an invoice with amount 0
  Then the request is rejected

🧠 Role-Aware Scenario Generation

When the blueprint contains:

roles_allowed: [FinanceManager]

The agent prompts:

β€œSuggest alternative scenarios where unauthorized roles (e.g., Guest, Analyst) try to access the endpoint.”

β†’ Generates:

  • Scenario: Unauthorized access attempt
  • TestMethod: Post_ShouldReturn403_WhenUserIsGuest
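
A minimal sketch of what that generated authorization test might look like (MSTest; TestServerFactory is a hypothetical harness helper used for illustration, not a ConnectSoft API):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class InvoiceControllerAuthTests
{
    [TestMethod]
    public async Task Post_ShouldReturn403_WhenUserIsGuest()
    {
        // TestServerFactory is a hypothetical helper that spins up an
        // in-memory host and issues a token for the requested role.
        HttpClient client = TestServerFactory.CreateClientForRole("Guest");

        var response = await client.PostAsJsonAsync("/api/invoices",
            new { customerId = Guid.NewGuid(), totalAmount = 100m, currency = "USD" });

        Assert.AreEqual(HttpStatusCode.Forbidden, response.StatusCode);
    }
}
```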

πŸ“„ Input-to-Scenario Mapping (AI-Augmented)

public class CreateInvoiceInput {
  public Guid CustomerId { get; set; }
  public decimal TotalAmount { get; set; }
  public string Currency { get; set; }
}

β†’ Prompt skill outputs:

[
  { "field": "TotalAmount", "edge_cases": [0, -1, null] },
  { "field": "Currency", "edge_cases": ["", "US", "USDX"] },
  { "field": "CustomerId", "edge_cases": ["Guid.Empty", "null"] }
]

These field-level edge cases are used to auto-generate validator tests, as sketched below.
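
For example, assuming FluentValidation's TestHelper and a hypothetical CreateInvoiceInputValidator that encodes the rules listed earlier, one generated test per edge case might look like:

```csharp
using System;
using FluentValidation.TestHelper;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CreateInvoiceInputValidatorTests
{
    // CreateInvoiceInputValidator is assumed to implement the DTO rules above.
    private readonly CreateInvoiceInputValidator _validator = new();

    [TestMethod]
    public void Validate_ShouldFail_WhenTotalAmountIsZero()
    {
        var input = new CreateInvoiceInput
        {
            CustomerId = Guid.NewGuid(),
            TotalAmount = 0m,   // edge case from the mapping above
            Currency = "USD"
        };

        var result = _validator.TestValidate(input);
        result.ShouldHaveValidationErrorFor(x => x.TotalAmount);
    }
}
```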


πŸ“œ Markdown Scenario Snapshots (Studio Preview)

### Test: CreateInvoice - Edge Case Summary

- βœ… Valid invoice β€” emits `InvoiceCreated`
- ❌ Zero amount β€” rejected by validator
- ❌ Guest user β€” receives 403 Forbidden
- ❌ Invalid currency β€” rejected

βœ… Summary

Through prompt-driven scenario simulation, the agent:

  • πŸ” Covers realistic and inferred paths
  • 🧠 Adapts to business logic even when implicit
  • πŸ§ͺ Tests from a user and system perspective
  • 🧾 Outputs traceable, clear, and complete scenario maps

These skills are critical for high-quality BDD and test generation beyond static analysis.


🧠 Knowledge Base – Patterns, Examples, and Embeddings

To generate realistic, consistent, and context-aware tests, the Test Case Generator Agent relies on an internal Knowledge Base that includes:

  • πŸ“˜ Domain-specific patterns (e.g. invoices, payments, appointments)
  • πŸ§ͺ Testing idioms, assertion best practices, mocking strategies
  • πŸ” Previously generated test assets
  • πŸ”  Glossary terms for events, validations, error formats
  • 🧠 Embeddings of trace-aligned handler/test relationships

This enables the agent to reuse patterns, avoid duplication, and enforce test behavior consistency across microservices, modules, and editions.


🧩 Core Knowledge Components

| Knowledge Type | Example |
| --- | --- |
| Test Naming Patterns | Handle_ShouldReturnSuccess_WhenValidInput() |
| BDD Scenario Templates | Scenario: Unauthorized Access β†’ Given user is guest |
| Common Validator Rules | RuleFor(x => x.Amount).GreaterThan(0) β†’ failure test case |
| Mocking Patterns | Repository mocked with .Setup(x => x.Add(...)) |
| Assertion Conventions | Use of Assert.True, result.Should().BeSuccess(), or FluentAssertions |
| Domain Behaviors | InvoiceCreated β†’ implies test must assert event publication |
| Edge Case Templates | 0, null, Guid.Empty, "InvalidValue" for enums |
| Previous Test Metadata | Trace-aligned test classes and their mappings |

🧠 Example: Internal Pattern Storage

test_naming:
  - "{HandlerName}_ShouldReturn{Outcome}_When{Condition}"
  - "Validate_ShouldFail_When{InvalidField}"
mock_templates:
  - "var mock = new Mock<I{DependencyName}>();"
assertions:
  - "result.Should().BeSuccess();"
  - "response.StatusCode.Should().Be(HttpStatusCode.BadRequest);"

πŸ“˜ Scenario Embedding: Use Case β†’ Gherkin

From prior projects, the agent learns:

  • "CancelAppointment" β†’ scenario involves time constraint checks
  • "CapturePayment" β†’ always includes currency validation and fraud rule
  • "CreateInvoice" β†’ requires event assertion for InvoiceCreated

β†’ This embedding is reused to autofill scenario skeletons and predict validations or assertions.


πŸ§ͺ Reuse Across Modules

| Scenario | Pattern Reuse |
| --- | --- |
| BookAppointmentHandler β†’ VetClinicService | Reuse from DentalClinicService.BookAppointmentHandlerTests.cs |
| PayInvoiceHandler β†’ BillingService | Reuse from PayOrderHandlerTests.cs in ECommerceService |

πŸ” Knowledge from Memory System

The agent queries memory for:

  • Matching DTO structure (CreateInvoiceInput)
  • Similar blueprint IDs (usecase-9xxx in same domain)
  • Matching emitted events (InvoiceCreated)
  • Matching validation rules (GreaterThan, NotEmpty)
  • Matching authorization roles or controller routes

β†’ Enables prefetching of:

  • .feature structure
  • Edge case suggestions
  • Assertion phrases
  • Trace tags to carry forward

πŸ“„ Snapshot: Prior Example Lookup

{
  "handler": "CreateInvoiceHandler",
  "coverage": {
    "unit": true,
    "validator": true,
    "feature_file": "create_invoice.feature",
    "edge_cases": ["ZeroAmount", "MissingCustomerId"]
  },
  "related_handlers": ["CreateOrderHandler", "CreateRefundHandler"]
}

🧠 Use in Prompt Priming

The knowledge base is used to prime Semantic Kernel prompts, injecting:

  • πŸ” Prior scenarios for similar blueprints
  • 🧱 Domain-specific test idioms
  • πŸ§ͺ Validator β†’ test mappings
  • πŸ“˜ Scenario structure seeds for .feature generation
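
Reduced to its essence, this priming step is string composition. A minimal sketch (the helper name is hypothetical):

```csharp
using System.Collections.Generic;
using System.Text;

public static class PromptPrimer
{
    // Prepends retrieved knowledge-base examples to the generation prompt.
    public static string Prime(string basePrompt, IEnumerable<string> priorScenarios)
    {
        var sb = new StringBuilder();
        sb.AppendLine("Reference scenarios from similar blueprints:");
        foreach (var scenario in priorScenarios)
            sb.AppendLine("- " + scenario);
        sb.AppendLine();
        sb.Append(basePrompt);
        return sb.ToString();
    }
}
```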

πŸ“Š Studio + Memory Visibility

All generated knowledge artifacts are:

  • 🧠 Stored and retrievable by other agents
  • πŸ” Indexed by handler name, test file, DTO structure, or emitted event
  • πŸ“Ž Linkable in Studio for trace reviews and glossary-aligned coverage maps

βœ… Summary

The Test Case Generator Agent is backed by a rich, evolving knowledge base that drives:

  • πŸ’‘ Pattern reuse
  • πŸ” Test standardization
  • πŸ“˜ Consistent Gherkin phrasing
  • πŸ” Cross-service and cross-edition alignment
  • πŸ€– Faster, smarter, and more human-like test generation

πŸ“‘ Observability Hooks: Spans, Logs, and QA Indexing

The ConnectSoft AI Software Factory is observability-first. Every action performed by the Test Case Generator Agent must be:

  • πŸ” Traceable by trace ID, handler, and edition
  • πŸ§ͺ Validated for test coverage and structure correctness
  • πŸ“Š Monitored in CI/CD, Studio dashboards, and PRs
  • πŸ“ Logged for QA, retries, and diagnostics
  • βœ… Integrated with test coverage gates and QA agent workflows

πŸ“ˆ OpenTelemetry Spans (OTel)

Each skill execution emits an OTel span, enriched with tags:

| Span Name | Tags |
| --- | --- |
| testgen.GenerateHandlerTests | handler, trace_id, module_id, blueprint_id, skill_id |
| testgen.GenerateValidatorTests | dto, rules_count, coverage_percent |
| testgen.GenerateFeatureFile | scenario_count, roles_tested, edition |
| testgen.ValidateGeneratedTests | status, lint_issues, retry_required |
| testgen.EmitTestMetadataFile | test_count, output_path, edition |

βœ… All spans include duration_ms, status_code, and retry_count.
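
A minimal sketch of emitting one of these spans with .NET's built-in System.Diagnostics tracing primitives, which OpenTelemetry exporters pick up; the span and tag names follow the table above:

```csharp
using System.Diagnostics;

public static class TestGenTelemetry
{
    // One ActivitySource per agent; the source name is illustrative.
    private static readonly ActivitySource Source = new("testgen");

    public static Activity? StartHandlerTestSpan(
        string handler, string traceId, string blueprintId)
    {
        var activity = Source.StartActivity("testgen.GenerateHandlerTests");
        activity?.SetTag("handler", handler);
        activity?.SetTag("trace_id", traceId);
        activity?.SetTag("blueprint_id", blueprintId);
        return activity; // dispose to close the span and record duration_ms
    }
}
```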


🧾 Structured Logs

| Log Type | Content |
| --- | --- |
| Success Log | Test case generated for CancelInvoiceHandler [trace: invoice-2025-0143] |
| Failure Log | Missing validator test for Amount field β†’ Retry queued |
| Warning Log | Method name does not follow convention: Test1() |
| Retry Log | Retry #2 completed for CreateInvoiceHandlerTests.cs, added auth scenario |
| QA Tag Log | Marked InvoiceControllerTests.cs as traceable to blueprint usecase-9241 |

Format: .jsonl (JSON Lines) for downstream ingestion.
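
Two illustrative lines (field names are indicative only, not a fixed schema):

```json
{"level":"info","event":"test_generated","handler":"CancelInvoiceHandler","trace_id":"invoice-2025-0143","duration_ms":412}
{"level":"warn","event":"naming_convention","method":"Test1","trace_id":"invoice-2025-0143","message":"Method name does not follow convention"}
```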


πŸ“¦ Emitted Observability Artifacts

| File | Description |
| --- | --- |
| execution-metadata.json | Tracks all skills executed, start/end times, retry counts |
| observability-events.jsonl | Span-level logs for trace viewer or test dashboard |
| validation-summary.json | Lint results, structure compliance, scenario coverage |
| test-coverage-summary.yaml | Module β†’ handler β†’ test type coverage grid |
| qa-report.md | Markdown summary for QA agent and human review in Studio |
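
An illustrative fragment of test-coverage-summary.yaml; the grid layout is assumed from the description above:

```yaml
module: InvoiceService
handlers:
  CreateInvoiceHandler:
    unit: true
    validator: true
    integration: true
    bdd: true
  CancelInvoiceHandler:
    unit: true
    validator: false   # gap flagged for the Test Coverage Validator Agent
    integration: true
    bdd: false
```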

πŸ“Š Studio Dashboard Hooks

The following insights are visualized per trace:

| View | Source |
| --- | --- |
| βœ… Handler-to-Test Mapping | test-metadata.yaml |
| βœ… Test Coverage by Type | test-coverage-summary.yaml |
| βœ… Scenario Count & Roles | .feature file span metadata |
| ⚠️ Failed Validation Issues | validation-summary.json |
| πŸ” Retry History | execution-metadata.json & observability-events.jsonl |

πŸ“Ž Integration with CI/CD and QA Agents

| Consumer | Observability Use |
| --- | --- |
| QA Engineer Agent | Consumes .feature tags and coverage YAML; logs failure scenarios |
| Pull Request Creator Agent | Uses span summaries to annotate PRs with test details |
| Test Coverage Validator Agent | Validates coverage completion per handler and test type |
| Studio Trace Viewer | Renders spans per skill execution in chronological and semantic order |

πŸ“˜ Sample QA Report (Markdown)

### Test Summary: CreateInvoiceHandler (Trace ID: invoice-2025-0143)

- βœ… Unit Tests: 3 (Success, ZeroAmount, MissingCustomerId)
- βœ… Validator Tests: 2
- βœ… Auth Scenario: FinanceManager, Guest (403)
- βœ… .feature Scenarios: 3
- πŸ“Ž Blueprint: usecase-9241 | Edition: enterprise
- πŸ” Retry Count: 1 (Auth test added on second run)

⚠️ Issues: None

βœ… Summary

The Test Case Generator Agent is fully instrumented with:

  • ⏱️ Real-time spans for every skill
  • 🧾 Structured logs for trace replays and diagnostics
  • πŸ“Š Test coverage summaries for QA, CI, and Studio
  • πŸ” Retry metadata for lifecycle visibility
  • πŸ“˜ Markdown snapshots for human-in-the-loop review

This observability layer guarantees test artifacts are never a black box β€” they are always inspectable, auditable, and QA-integrated.


βœ… Final Summary and Conclusion

The Test Case Generator Agent is a core engineering automation agent in the ConnectSoft AI Software Factory. Its primary responsibility is to:

Autonomously generate unit, validator, integration, BDD, and edge-case tests for every use case, controller, port, and validator β€” ensuring complete, traceable, and CI-ready test coverage aligned with blueprints, editions, and tenant-specific rules.


🧭 Strategic Position in the Factory

flowchart TD
    VisionAgent --> Blueprint
    Blueprint --> MicroserviceGenerator
    MicroserviceGenerator --> BackendDeveloper
    BackendDeveloper --> TestCaseGeneratorAgent
    TestCaseGeneratorAgent --> TestArtifacts
    TestCaseGeneratorAgent --> QAEngineerAgent
    TestCaseGeneratorAgent --> PullRequestCreatorAgent
    TestCaseGeneratorAgent --> RetryOrCorrectionFlow
Hold "Alt" / "Option" to enable pan & zoom

βœ… Trigger Point: Post-handler/controller generation
βœ… Consumers: QA Agent, PR Agent, Bug Resolver Agent, CI/CD validator
βœ… Outputs: Complete test scaffolds with full metadata and observability


🧩 Key Functional Capabilities

| Capability | Description |
| --- | --- |
| πŸ“¦ Test Generation | Unit, validator, integration, BDD .feature files, step definitions, edge/negative cases |
| πŸ“˜ Blueprint-Aware | Aligns to blueprint_id, trace_id, and declared rules/roles/events |
| πŸ” Edition-Aware | Generates edition-specific test variants and conditional flows |
| πŸ§ͺ Validator Mapping | Ensures every RuleFor(...) has a corresponding test method |
| 🧠 Scenario Simulation | AI-augmented prompting for realistic, role-based Gherkin scenarios |
| πŸ” Retry & Correction | Patch-mode regeneration with trace-backed update tracking |
| πŸ“Ž Traceable Artifacts | Test metadata, coverage reports, and Studio-compatible annotations |
| πŸ“Š Observability | Emits spans, logs, and Markdown previews with full QA/PR integration |
| 🧠 Knowledge Reuse | Ingests patterns and examples across modules and domains |

πŸ“˜ Output Artifacts Recap

| Type | File/Format |
| --- | --- |
| Unit Tests | HandlerTests.cs |
| Validator Tests | ValidatorTests.cs |
| Integration Tests | ControllerTests.cs |
| BDD Scenarios | .feature + Steps.cs |
| Metadata | test-metadata.yaml, execution-metadata.json |
| QA Reports | qa-report.md, test-coverage-summary.yaml |
| Observability | observability-events.jsonl, OTel spans |

🀝 Agent Collaboration

| Agent | Interaction |
| --- | --- |
| QA Engineer Agent | Provides test feedback, consumes .feature files |
| Bug Resolver Agent | Reuses test cases for reproducing defects |
| PR Creator Agent | Publishes trace-aligned test coverage in PR comments |
| Test Coverage Validator | Confirms coverage thresholds and test quality |
| Studio UI | Renders trace-test relationships and scenario metadata |

🧠 Final Takeaway

The Test Case Generator Agent is the quality assurance brain of the factory, delivering:

  • πŸ”„ Continuous test generation
  • πŸ“˜ Blueprint-consistent behavior
  • 🧱 Structural enforcement and observability
  • 🧠 Contextual intelligence for deep scenario testing
  • πŸ’Ό Support for rapid onboarding, QA, and audit compliance

🏭 Every ConnectSoft-generated microservice is test-covered, role-aligned, edition-specific, and CI-ready β€” by default β€” thanks to this agent.