Engineering Patterns

Reusable solutions to common problems in designing trustworthy AI systems.

Why Patterns Matter

  • Shared Language: gives engineers, reviewers, and auditors a common vocabulary for discussing controls.
  • Repeatability: a control proven in one system can be applied again with predictable behavior.
  • Scalability: standardized patterns can be rolled out and enforced across many teams and services.

"Patterns are not recipes. They are architectural primitives that must be adapted to your specific context."

Policy Gate

A deterministic check that validates inputs or outputs against a defined policy before allowing them to proceed.

rego
package taise.policy

# Deny if PII is detected in output
# (contains_pii is a helper function assumed to be defined elsewhere in this package)
deny[msg] {
  input.layer == "output"
  contains_pii(input.payload)
  msg := "PII detected in model output"
}

# Deny if confidence is below threshold
deny[msg] {
  input.confidence < 0.85
  msg := "Confidence too low for automation"
}
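
A common deployment (an assumption of this sketch, not a requirement of the pattern) runs the policy in an OPA sidecar. The application can then query the deny rule through OPA's REST data API and refuse to proceed when any denial message comes back; the URL and the policy_gate helper below are illustrative.

python
import requests

OPA_URL = "http://localhost:8181/v1/data/taise/policy/deny"  # assumed sidecar address

def policy_gate(layer: str, payload: str, confidence: float) -> None:
    """Raise if the policy returns any denial messages."""
    response = requests.post(
        OPA_URL,
        json={"input": {"layer": layer, "payload": payload, "confidence": confidence}},
        timeout=2,
    )
    response.raise_for_status()
    denials = response.json().get("result", [])
    if denials:
        raise PermissionError(f"Policy gate blocked request: {denials}")

Because the check is deterministic and versioned alongside the policy, the same gate can sit in front of model inputs, outputs, or both.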

Evidence Ledger

An immutable, append-only log of all system decisions, inputs, and outputs for auditability.

python
import hashlib
import json
import time
import uuid

@audit_log(layer="control_plane")  # decorator assumed to be provided by the surrounding audit framework
def execute_decision(context, model_output):
    decision_id = str(uuid.uuid4())

    # Log immutable record before action
    ledger.append({
        "id": decision_id,
        "timestamp": time.time(),
        # Stable content hash of the input context (Python's built-in hash() is not stable across runs)
        "input_hash": hashlib.sha256(json.dumps(context, sort_keys=True).encode()).hexdigest(),
        "output": model_output,
        "policy_version": "v1.2.4"
    })

    return perform_action(model_output)
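
The snippet above treats ledger as a given. "Immutable" is the hard part: one way to make tampering at least detectable (a sketch under that assumption; the EvidenceLedger class and field names are illustrative) is to chain every record to the hash of its predecessor, so that altering any earlier entry breaks the chain.

python
import hashlib
import json

class EvidenceLedger:
    """Append-only record list in which each entry is chained to the previous one."""

    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, record: dict) -> str:
        entry = dict(record, prev_hash=self._last_hash)
        entry_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True, default=str).encode()
        ).hexdigest()
        entry["entry_hash"] = entry_hash
        self._records.append(entry)
        self._last_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        """Recompute the chain; returns False if any record was altered or reordered."""
        prev = "0" * 64
        for entry in self._records:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if body.get("prev_hash") != prev:
                return False
            if hashlib.sha256(
                json.dumps(body, sort_keys=True, default=str).encode()
            ).hexdigest() != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True

In practice the same idea is usually backed by write-once storage or a database role restricted to inserts, rather than an in-memory list.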

Human Escalation Loop

A mechanism to route low-confidence or high-risk decisions to a human operator.

typescript
async function handleDecision(task: Task): Promise<Result> {
  const assessment = await riskEngine.evaluate(task);

  if (assessment.riskLevel === 'HIGH' || assessment.confidence < 0.9) {
    // Route to human queue
    const ticketId = await humanQueue.enqueue({
      taskId: task.id,
      reason: assessment.reason,
      context: task.context
    });
    
    return { status: 'ESCALATED', ticketId };
  }

  return agent.execute(task);
}

Capability Facade

An abstraction layer that limits the surface area of an AI model's capabilities exposed to the system.

typescript
// Interface limits model to specific, safe operations
interface SafeAgent {
  summarize(text: string): Promise<string>;
  classify(text: string, labels: string[]): Promise<string>;
  // No generic 'chat' or 'execute' methods exposed
}

class AgentFacade implements SafeAgent {
  constructor(private rawModel: LLM) {}

  async summarize(text: string): Promise<string> {
    // Enforce a specific system prompt for summarization
    return this.rawModel.complete(prompts.summarize(text));
  }

  async classify(text: string, labels: string[]): Promise<string> {
    // Constrain the model to pick from the caller-supplied labels only
    return this.rawModel.complete(prompts.classify(text, labels));
  }
}

Risk-Weighted Autonomy

Dynamically adjusting the level of autonomy granted to an agent based on the risk context of the current task.

python
def determine_autonomy(task_context):
    risk_score = calculate_risk(task_context)
    
    if risk_score > 80:
        return AutonomyLevel.HUMAN_IN_THE_LOOP
    elif risk_score > 50:
        return AutonomyLevel.HUMAN_ON_THE_LOOP
    else:
        return AutonomyLevel.FULL_AUTONOMY
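
The example assumes an AutonomyLevel enum and a calculate_risk helper that are not shown. A minimal sketch of the enum (the names map to the common human-in-the-loop / human-on-the-loop distinction; the values and the thresholds above are illustrative, not a standard) might look like:

python
from enum import Enum

class AutonomyLevel(Enum):
    # A human must approve each action before it executes
    HUMAN_IN_THE_LOOP = "human_in_the_loop"
    # The agent acts on its own, but a human monitors and can intervene
    HUMAN_ON_THE_LOOP = "human_on_the_loop"
    # The agent acts without routine human involvement
    FULL_AUTONOMY = "full_autonomy"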

Pattern Interaction

Patterns do not exist in isolation. A robust AI Control Plane composes these patterns to create defense-in-depth.

For example, a Policy Gate denial might trigger the Human Escalation Loop while the denial is simultaneously recorded in the Evidence Ledger.
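
A rough sketch of that composition, reusing the illustrative policy_gate, ledger, human_queue, and perform_action names from the earlier examples (hypothetical pieces, not a fixed interface):

python
def handle_model_output(task, model_output, confidence):
    """Illustrative composition of Policy Gate, Evidence Ledger, and Human Escalation Loop."""
    try:
        # 1. Policy Gate: deterministic check before anything leaves the system
        policy_gate(layer="output", payload=model_output, confidence=confidence)
    except PermissionError as denial:
        # 2a. Evidence Ledger: the denial itself is evidence worth keeping
        ledger.append({"task_id": task.id, "event": "gate_denied", "detail": str(denial)})
        # 3. Human Escalation Loop: a person decides what happens next
        return human_queue.enqueue({"task_id": task.id, "reason": str(denial)})

    # 2b. Evidence Ledger: record the approved decision before acting on it
    ledger.append({"task_id": task.id, "event": "gate_passed", "output": model_output})
    return perform_action(model_output)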

Flow: Input → Policy Gate → Model, with failed checks routed to Escalate.