Alignment requires that systems involving AI contribute to human flourishing according to human values and intentions across all timescales. Meeting that standard means expanding our view beyond individual AI systems’ goals and behaviors.

The Overlooked Alignment Problem

Consider a scenario in which no single AI behaves deceptively, yet collectively they produce systemic deception. AI systems become embedded within economic relations. Market forces drive their expansion. Human labor and resources increasingly serve AI development, while the returns to human welfare diminish.

Standard alignment frameworks miss this problem entirely. Yet it represents genuine misalignment between intended outcomes and reality.

The most serious alignment failures might exist at the socioeconomic level, not the technical one.

Expanding Our Framework

The typical ontology positions AI as artifact and human as user, with alignment as the normative relationship between them. This framework breaks down when confronted with the complex reality of sociotechnical systems.

Complex adaptive systems frequently generate emergent properties that none of their individual components exhibit in isolation. The misalignment emerges not from any single AI system but from their collective interaction within economic and social structures. These technologies exist within intricate webs of social systems that inevitably shape their development trajectories far beyond the intentions of their creators. When examined carefully, what appears as misalignment rarely stems from discrete decision points but rather materializes gradually through countless distributed choices made by actors responding to local incentives without global awareness.
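To make this dynamic concrete, here is a minimal toy simulation of the pattern described above: each actor picks an extraction level that looks harmless locally, competitive pressure ratchets choices upward, and aggregate welfare degrades even though no single choice is egregious. The agent count, pressure model, and welfare variable are purely illustrative assumptions, a rough sketch of the mechanism rather than a model of any real system.

```python
# Toy sketch: individually "reasonable" local choices producing a
# collectively bad outcome. All names and parameters are illustrative.

import random

NUM_AGENTS = 20          # hypothetical number of AI-deploying actors
ROUNDS = 50              # hypothetical number of decision cycles
shared_welfare = 100.0   # stand-in for aggregate human welfare

def local_choice(pressure: float) -> float:
    """Each actor picks a small extraction level that looks harmless
    locally; competitive pressure nudges it slightly upward."""
    return random.uniform(0.0, 0.2) * (1.0 + pressure)

pressure = 0.0
for _ in range(ROUNDS):
    # Every agent optimizes its own local incentive; none "defects" dramatically.
    total_extraction = sum(local_choice(pressure) for _ in range(NUM_AGENTS))
    shared_welfare -= total_extraction
    # Market pressure ratchets up as others extract more.
    pressure += 0.01

print(f"Aggregate welfare after {ROUNDS} rounds: {shared_welfare:.1f}")
# No single agent's choice looked harmful, yet the system-level outcome degrades.
```

The point of the sketch is not the numbers but the structure: misalignment appears only at the level of the interacting system, which is exactly where the artifact-user framing stops looking.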

Technical alignment advocates: your concerns matter deeply, but technical approaches alone cannot address the full scope of the problem. Ethics-focused thinkers: technical alignment represents a crucial piece of a much larger puzzle.

A More Complete Picture

An expanded view reveals uncomfortable truths about our technological trajectory.

The gleaming technical perfection of individual AI systems offers zero protection against sociotechnical catastrophe when those systems are embedded in extractive economic relationships. We’ve seen this play out repeatedly: venture-backed platforms promise utopian outcomes while their incentive structures demand ever-increasing resource extraction and surveillance. The safety guardrails lovingly crafted by well-meaning engineers are routinely dismantled by quarterly profit pressures and growth mandates. Let’s not kid ourselves about who decides which values these systems embody—it’s not alignment researchers or ethicists, but those who hold concentrated economic and political power. The technical discussion of alignment serves as convenient misdirection while the real decisions happen in boardrooms and investment meetings.

We must shift from asking “How do we make AI do what we want?” to “How do systems involving AI create the world we need?”

Connecting Divided Communities

This perspective connects seemingly unrelated concerns in ways that matter.

Technical safety is a dot. Systemic safety is the rest of the picture. Connect them.

Economic justice isn’t separate from alignment theory. It’s the foundation that determines who benefits and who bears the costs. Build on it.

Long-term risks? They’re both in the code and in the system. See both.

Alignment succeeds or fails at multiple levels simultaneously. The complexity here isn’t a roadblock—it’s precisely where the breakthrough insights live.

What if we worked together? What if the technical and social perspectives weren’t rivals but allies? What if the solution required both?

The deepest misalignment lurks not in any model but in our assumption that purely technical solutions can resolve fundamentally sociotechnical problems. The artifact-user ontology itself may constitute the most dangerous misalignment of all.
