Risk Without Visibility
Correlation, propagation, speed, and the limits of intervention.
{
  "system_property": "emergent_behaviour",
  "cause": "many_independent_actors_shared_mechanism",
  "system_output": "unpredictable_from_parts",
  "can_be": ["beneficial", "ruinous"],
  "actor_awareness": "local_only"
}

Markets have always generated emergent behaviour.
This is not a failure of markets. It is an inherent property of any system in which many independent actors, each pursuing their own interests, interact through a shared mechanism. The behaviour of the system as a whole cannot be read off from the behaviour of its parts. The collective produces outcomes that no individual designed, that no authority intended, and that no observer could have predicted with confidence from first principles.
In each historical case — the South Sea Bubble, the Panic of 1907, the Flash Crash of 2010 — the actors involved were doing approximately what they were supposed to do. What they could not do was see the system they were part of at the level where the dangerous dynamics were occurring.
This is the fourth dynamic of the agentic economy. Not a future risk to be anticipated but a structural property to be understood.
Correlation Without Coordination
{
  "mechanism": "correlation",
  "cause": ["shared_training", "shared_objectives"],
  "coordination": false,
  "response_to_same_conditions": "converges",
  "human_diversity_equivalent": "absent",
  "effect": "concentrated_simultaneous_action"
}

The first mechanism is correlation.
When many agents share similar training, similar objectives, and similar operating environments, they will tend to respond to similar conditions in similar ways: not because they are coordinating, but because they are similar. The resulting correlation is emergent, not designed; no one instructed these agents to move together.
Human markets owe part of their resilience to the diversity of their participants: different temperaments, different information, different time horizons, different mistakes. Agent systems can lack this diversity. When a significant fraction of participants run on architectures trained on overlapping datasets, optimise for similar objectives, and use similar decision-making processes, the variation that spreads human responses across strategies and moments is absent. The same conditions produce the same outputs, at the same moment, from every agent that shares the relevant properties.
Model weights are only one layer of the shared substrate. Agents built on the same underlying models also tend to use the same tool providers, the same retrieval systems, the same orchestration infrastructure, the same default policies, and the same market signals. The correlated behaviour emerges from the aggregate similarity of the entire decision stack.
The homogeneity that makes agents predictable and reliable in individual deployments is the same homogeneity that makes their aggregate behaviour unpredictable and potentially destabilising.
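To make the mechanism concrete, here is a minimal sketch of how a shared decision rule concentrates action. The agents, thresholds, and the 5% price drop are illustrative assumptions, not anything from the series; the only point is the contrast between a homogeneous and a heterogeneous population responding to the same signal.

# A minimal sketch, not from the source: a shared decision rule concentrates
# action, while varied rules dilute it. Thresholds and the price drop are
# illustrative assumptions.
import random

def fraction_selling(thresholds, price_drop):
    # Each agent sells if the observed drop exceeds its own threshold.
    return sum(1 for t in thresholds if price_drop > t) / len(thresholds)

random.seed(0)
n = 1_000
price_drop = 0.05  # a 5% drop, observed by every agent at the same instant

homogeneous = [0.04] * n                                        # shared model, shared threshold
heterogeneous = [random.uniform(0.01, 0.10) for _ in range(n)]  # human-like variation

print(f"homogeneous agents selling:   {fraction_selling(homogeneous, price_drop):.0%}")
print(f"heterogeneous agents selling: {fraction_selling(heterogeneous, price_drop):.0%}")
# The homogeneous population acts in unison (100%); the heterogeneous one
# responds only in proportion to how many thresholds the drop happens to cross
# (roughly 44% here), spreading the reaction rather than concentrating it.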
Propagation Through Dependency Chains
{
  "mechanism": "propagation",
  "cause": "interconnected_dependency_chains",
  "failure_type": "degraded_input",
  "each_agent_status": "functioning_correctly",
  "failure_attributable_to": null,
  "failure_source": "interaction_structure"
}

The second mechanism is propagation.
Agents do not typically operate in isolation. They exist within networks of dependency — receiving inputs from other agents, sending outputs that become other agents’ inputs, calling services that are themselves agent-mediated, participating in chains of interaction that extend far beyond any single principal’s visibility.
When an agent receives a degraded input, it may still function correctly by its own internal standards. It processes what it receives according to its design and produces an output. But that output, derived from a degraded input, is itself degraded. If that output becomes another agent’s input, the degradation propagates. Each agent in the chain does exactly what it was designed to do. The failure travels anyway.
What makes this particularly difficult to govern is that the failure is not attributable to any single agent. It is a pattern distributed across the whole network. In an economy mediated by agents at scale, these dependency chains are not exceptional. They are the normal substrate of economic activity.
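A small sketch makes the propagation dynamic visible. The per-hop quality loss and the ten-hop chain are assumptions chosen for illustration; the structure, in which every agent behaves correctly on its input yet the degradation compounds, is the point.

# A minimal sketch, not from the source: a degraded input travels through a
# dependency chain in which every agent behaves correctly on what it receives.
# The per-hop loss and chain length are illustrative assumptions.

def agent_step(signal_quality: float, local_loss: float = 0.02) -> float:
    # Each agent preserves most of its input's quality but cannot exceed it;
    # the small loss at each hop is invisible to the agent itself.
    return signal_quality * (1.0 - local_loss)

quality = 0.90  # a mildly degraded signal enters the chain upstream
for hop in range(1, 11):
    quality = agent_step(quality)
    print(f"hop {hop:2d}: downstream signal quality = {quality:.3f}")

# After ten hops the signal has fallen from 0.90 to roughly 0.74, yet every
# agent did exactly what it was designed to do. The failure belongs to the
# chain, not to any node in it.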
Speed and the Threshold of Intervention
{
  "mechanism": "speed",
  "governance_assumption": "gap_between_event_and_response",
  "human_cycle_time": "seconds_to_hours",
  "agent_cycle_time": "milliseconds",
  "intervention_threshold": "agent_velocity_dependent",
  "above_threshold": "governance_is_always_retrospective"
}

The third mechanism is speed.
Every governance mechanism that exists for managing systemic risk assumes some gap between the triggering event and the response. Circuit breakers halt trading when prices move too fast. Regulatory review identifies patterns of harmful behaviour. Each mechanism requires time — time to observe, time to deliberate, time to act.
Agents that perceive and respond in milliseconds can complete entire cycles of action before any human observer has registered that the triggering event occurred. For any class of risk that propagates through agent interaction, there exists a threshold of agent velocity above which human intervention cannot be preventive. Below that threshold, governance can shape outcomes in real time. Above it, governance is always retrospective.
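The threshold can be stated as simple arithmetic. The cycle times below are illustrative assumptions, not measurements, but they show how many complete agent cycles fit inside the window a human needs merely to notice that something is happening.

# A minimal sketch, not from the source: the intervention threshold as
# arithmetic. All figures are illustrative assumptions, not measurements.

agent_cycle_ms = 50       # one perceive-decide-act cycle for an agent
human_response_s = 30     # time for a human operator to notice and react
chain_depth = 5           # hops each action takes through a dependency chain

cycles = human_response_s * 1000 / agent_cycle_ms
print(f"agent cycles completed before the earliest human response: {cycles:.0f}")
print(f"downstream interactions touched in that window: {cycles * chain_depth:.0f}")

# Under these assumptions, six hundred complete cycles of action, and three
# thousand downstream interactions, occur before a human can intervene at all.
# Anything governance does after that point is retrospective.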
As agent deployment scales, more and more economic activity will occur above that threshold. The space in which real-time human oversight is possible will shrink. The space in which governance must work through system design rather than direct intervention will expand.
The Limits of Monitoring
{
  "monitoring_tools": [
    "transaction_logs",
    "anomaly_detection",
    "agent_level_observability",
    "system_level_metrics"
  ],
  "each_tool_constraint": "downstream_of_architecture",
  "logs_tell_you": "what_happened_not_why",
  "monitoring_effect": "reduces_frequency_and_severity",
  "monitoring_eliminates": false
}

The natural response to systemic risk is to build better monitoring — to instrument the ecosystem so that dangerous patterns become visible before they cascade.
The tools available are meaningful. Transaction logs provide a forensic record, though they tell you what happened rather than why. Anomaly detection can identify statistical deviations in near real time. Agent-level observability provides earlier warning than watching outcomes alone. Used well, these tools reduce the frequency and severity of failures; they do not eliminate them.
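As a rough illustration of what the anomaly-detection layer can and cannot do, here is a minimal sketch: a rolling z-score over per-second transaction counts. The window, threshold, and traffic pattern are assumptions for illustration; note that the burst is flagged only once it is already underway.

# A minimal sketch, not from the source: anomaly detection as a rolling
# z-score over per-second transaction counts. Window, threshold, and the
# traffic pattern are illustrative assumptions.
from collections import deque
from statistics import mean, pstdev

def make_detector(window: int = 60, z_threshold: float = 4.0, min_sigma: float = 1.0):
    history = deque(maxlen=window)
    def observe(tx_per_second: int) -> bool:
        # Flag the current second if it sits far outside the recent baseline.
        anomalous = False
        if len(history) == window:
            mu = mean(history)
            sigma = max(pstdev(history), min_sigma)  # floor avoids a zero-variance baseline
            anomalous = (tx_per_second - mu) / sigma > z_threshold
        history.append(tx_per_second)
        return anomalous
    return observe

observe = make_detector()
traffic = [100] * 120 + [2_000] * 5  # steady load, then a sudden correlated burst
for second, count in enumerate(traffic):
    if observe(count):
        print(f"t={second}s: anomaly flagged")  # flagged only once the burst is underway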
But each of these tools encounters the same underlying constraint, and it is not data availability. It is that observability is downstream of architecture: what monitoring can see, and how quickly, is fixed by how the system is built.
No monitoring layer can restore real-time comprehensibility once correlation, propagation, and speed have crossed the thresholds at which emergent failure becomes possible.
Visibility as a Design Problem
{
  "visibility_treatment": "design_problem",
  "not": "operational_add_on",
  "correlation_source": "homogeneous_architectures",
  "propagation_source": "dependency_chains",
  "speed_source": "machine_execution",
  "governance_approach": "built_into_system_not_applied_from_outside"
}

The deeper implication is that visibility in the agentic economy cannot be treated as an operational concern — something to add after the systems are built. It is a design problem.
The three mechanisms described here — correlation, propagation, and speed — are not incidental properties of agent systems. They are structural consequences of how those systems are built. These properties are not bugs. They are the same properties that make agent systems capable and efficient. The governance question is not how to eliminate them but how to design systems that contain their consequences.
These are engineering questions and governance questions simultaneously. In previous infrastructure transitions, the governance frameworks that eventually managed systemic risk were built after the systems were already in place, often in response to crises that made the risks undeniable. The agentic economy is being built now.
Whether governance is built into the design or retrofitted after the fact is the kind of question whose answer, once given by events, cannot easily be revised.
This is the fourth of four dynamics in the Agentic Economy series.
Next: The Hierarchy Unanchored — Synthesis