The Control Problem in AI Development Tools

January 17, 2026

Exploring how the Viable System Model illuminates tensions between human developers and AI coding agents

I spotted an interesting observation on Bluesky today about agentic coding tools and the Viable System Model (VSM). The poster noted that people who resist AI coding tools end up performing Systems 3-5 (S3-S5) of the VSM themselves, while the coding agent handles Systems 1-2 (S1-S2). This framing reveals something deeper about the control tensions in human-AI collaboration.

The Five Levels of Control

Stafford Beer's Viable System Model describes five subsystems needed for any viable organization:

  • S1: Implementation - the operational units that do the actual work
  • S2: Coordination - damping conflicts and oscillations between S1 units
  • S3: Control - optimizing and auditing internal operations in the here and now
  • S4: Intelligence - scanning the environment and planning for the future
  • S5: Policy - setting identity and direction, balancing S3's present against S4's future
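To make the taxonomy concrete, here is a purely illustrative sketch in Python: the five subsystems as an enum, plus a hypothetical mapping of everyday development tasks to levels. Neither the task names nor the mapping come from Beer; they are assumptions chosen for illustration.

```python
# Illustrative only: the five VSM subsystems as a Python enum.
from enum import Enum

class VSMLevel(Enum):
    S1_IMPLEMENTATION = 1  # direct operational work
    S2_COORDINATION = 2    # resolving conflicts between S1 units
    S3_CONTROL = 3         # optimizing internal operations
    S4_INTELLIGENCE = 4    # environmental scanning and strategy
    S5_POLICY = 5          # identity, direction, long-term balance

# Hypothetical mapping of development tasks to VSM levels (an
# assumption for illustration, not part of Beer's model).
TASK_LEVELS = {
    "write_unit_test": VSMLevel.S1_IMPLEMENTATION,
    "resolve_merge_conflict": VSMLevel.S2_COORDINATION,
    "refactor_module_boundaries": VSMLevel.S3_CONTROL,
    "evaluate_new_framework": VSMLevel.S4_INTELLIGENCE,
    "set_license_policy": VSMLevel.S5_POLICY,
}
```

Classifying tasks this way makes the later question explicit: which levels is the agent allowed to occupy?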

The insight is that AI coding tools naturally excel at S1-S2: implementing specific functions and coordinating between code modules. But humans instinctively want to maintain S3-S5: control over architecture, strategic direction, and policy decisions about the codebase.

The Real Tension

This isn't about humans fearing replacement. It's about control boundaries. When developers resist AI tools, they're often protecting their role as the control system of the software development process.

The problem emerges when AI tools try to operate at S3-S5 levels - making architectural decisions, choosing libraries, or determining overall system design. In Beer's terms (borrowing Ashby's law of requisite variety), this is a variety mismatch: the AI system attempts to absorb complexity that should remain with the human controller.

Designing Better Collaboration

Understanding this through VSM suggests better ways to design human-AI coding workflows:

  1. Clear Level Separation: AI handles S1-S2 (implementation, coordination), humans retain S3-S5 (control, intelligence, policy)

  2. Explicit Interfaces: Define clean boundaries between what the AI decides autonomously and what requires human approval

  3. Feedback Loops: Implement proper recursive channels so lower-level AI decisions can be audited and adjusted by higher-level human control
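The three principles above can be sketched as a tiny routing policy: actions the agent proposes at S1-S2 are applied autonomously, while anything at S3-S5 is queued for human approval. This is a minimal sketch under assumed names (`Workflow`, `propose`, the threshold of 2); no real tool's API is implied.

```python
# Sketch of "clear level separation" plus a feedback loop:
# the agent acts alone only at S1-S2; S3-S5 proposals wait
# for human review. All names here are illustrative assumptions.
from dataclasses import dataclass, field

AUTONOMY_THRESHOLD = 2  # AI may act alone at S1-S2

@dataclass
class Workflow:
    applied: list = field(default_factory=list)
    pending_review: list = field(default_factory=list)

    def propose(self, action: str, level: int) -> str:
        """Route a proposed action by its VSM level (1-5)."""
        if level <= AUTONOMY_THRESHOLD:
            self.applied.append(action)
            return "auto-applied"
        # Explicit interface: escalate to the human controller.
        self.pending_review.append(action)
        return "needs human approval"

wf = Workflow()
print(wf.propose("implement parse_config()", level=1))  # auto-applied
print(wf.propose("switch ORM library", level=4))        # needs human approval
```

The threshold is the explicit interface: moving it up or down is exactly the negotiation over control boundaries that the resistance described above is really about.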

The most successful AI coding setups I've observed follow this pattern implicitly. The AI excels at generating boilerplate, implementing well-defined functions, and maintaining consistency across modules. But the human developer maintains architectural vision, makes trade-off decisions, and sets quality standards.

Beyond Coding Tools

This VSM perspective applies more broadly to AI systems in organizational contexts. The key is recognizing that viable systems require hierarchical control - not everything can or should be automated at the same level.

When we design AI tools that respect these natural control boundaries, we get powerful augmentation. When we ignore them, we get resistance, brittleness, and eventual system failure.

The goal isn't to eliminate human control, but to amplify it by having AI handle the right operational complexity while preserving human agency where it matters most.

References

  1. Brain of the Firm — Stafford Beer
  2. The Viable System Model — Wikipedia