
After Orthogonality: Virtue-Ethical Agency and AI Alignment

This essay argues that rational people don’t have goals, and that rational AIs shouldn’t have goals.

ManyPress Editorial Team

Feb 18, 2026 · 11:25 PM · 3 min read · Source: The Gradient

The Core Finding

This is not an isolated provocation. The essay The Gradient published fits a pattern of challenges to goal-based framings of AI agency, one that has grown harder to dismiss as coincidence or exception.

This essay argues that rational people don’t have goals, and that rational AIs shouldn’t have goals. Human actions are rational not because we direct them at some final ‘goals,’ but because we align actions to practices [1]: networks of actions, action-dispositions, action-evaluation criteria, and action-resources that structure, clarify, develop, and promote themselves. If we want AIs that can genuinely support, collaborate with, or even comply with human agency, AI agents’ deliberations must share a “type signature” with the practices-based logic we use to reflect and act.
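To make the “type signature” point concrete, here is a minimal Python sketch. Everything in it is a hypothetical illustration rather than the essay’s own formalism: GoalAgent, Practice, and PracticeAgent are invented names, and a practice is reduced to a bare set of self-applied criteria. The only load-bearing detail is the contrast in deliberation signatures: the first agent’s choice takes a utility function as input, the second’s takes a practice.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class GoalAgent:
    # Goal-directed picture: deliberation maps (goal, options) -> action,
    # where the goal is a fixed utility over outcomes.
    goal: Callable[[str], float]

    def choose(self, options: List[str]) -> str:
        return max(options, key=self.goal)

@dataclass
class Practice:
    # The essay's picture, radically simplified for illustration: a
    # practice is a network of actions, dispositions, and evaluation
    # criteria; here it is reduced to the criteria alone.
    criteria: List[Callable[[str], bool]] = field(default_factory=list)

    def fit(self, action: str) -> int:
        # How many of the practice's own criteria the action satisfies.
        return sum(criterion(action) for criterion in self.criteria)

@dataclass
class PracticeAgent:
    # Practice-based picture: deliberation maps (practice, options) ->
    # action. The standard of success lives in the practice itself,
    # not in a terminal goal the agent carries around.
    practice: Practice

    def choose(self, options: List[str]) -> str:
        return max(options, key=self.practice.fit)
```

On the essay’s view, a property like corrigibility would then live in the practice’s criteria and dynamics rather than in a terminal goal the agent optimizes.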

How It Got Here

I argue that these issues matter not just for aligning AI to grand ethical ideals like human flourishing, but also for aligning AI to core safety properties like transparency, helpfulness, harmlessness, or corrigibility. Concepts like ‘harmlessness’ or ‘corrigibility’ are unnatural (brittle, unstable, arbitrary) for agents who’d interpret them in terms of goals or rules, but natural for agents who’d interpret them as dynamics in networks of actions, action-dispositions, action-evaluation criteria, and action-resources.

While the issues this essay tackles tend to sprawl, one theme that reappears over and over is the relevance of the formula ‘promote x x-ingly.’ I argue that this formula captures something important about both meaningful human life-activity (art is the artistic promotion of art, romance is the romantic promotion of romance) and real human morality (to care about kindness is to promote kindness kindly, to care about honesty is to promote honesty honestly).
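The formula lends itself to a small worked example. The sketch below is a hypothetical Python rendering, assuming (my assumption, not the essay’s) that an action can be represented by what it brings about and the manner in which it is done; the formula is then a conjunction of the two.

```python
# An action is modeled as a record of what it brings about ("effects")
# and how it is carried out ("manner"). This representation is an
# illustrative assumption, not the essay's formalism.
Action = dict

def promotes(value: str, action: Action) -> bool:
    # Outcome side: does the action advance the value?
    return value in action.get("effects", [])

def in_the_spirit_of(value: str, action: Action) -> bool:
    # Manner side: is the action itself done value-ly?
    return value in action.get("manner", [])

def promote_x_x_ingly(value: str, action: Action) -> bool:
    # "To care about kindness is to promote kindness kindly":
    # the formula conjoins outcome and manner.
    return promotes(value, action) and in_the_spirit_of(value, action)

# A kind act that spreads kindness satisfies the formula; a cruel act
# that happens to increase kindness downstream does not.
kind_act = {"effects": ["kindness"], "manner": ["kindness"]}
cruel_but_useful = {"effects": ["kindness"], "manner": ["cruelty"]}
print(promote_x_x_ingly("kindness", kind_act))          # True
print(promote_x_x_ingly("kindness", cruel_but_useful))  # False
```

The asymmetry in the last two lines is the essay’s point: the second act promotes kindness, but not kindly.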

Who Pays the Price

Not all parties to this story face the same outcome. The immediate consequences fall unevenly: some actors are positioned to absorb the challenge, others are not. Following the incentive structures helps explain why this argument landed when it did, and why certain responses were predictable.

The institutional players involved have interests that do not always align with those of ordinary people in the AI space. That gap is part of why debates like this one keep recurring.

What the Experts Say

Context matters here. The AI landscape has shifted substantially over the past several years, driven by structural forces that predate any single event or decision.

The trajectory has been visible to those tracking the field closely. What The Gradient published is not an anomaly; it is a data point in a longer arc.

The Road Ahead

Several outcomes have become more likely as a result of what has unfolded. Not every variable is knowable, but the range of plausible scenarios has narrowed.

Key questions remain open: the pace of any response, the willingness of relevant actors to change course, and whether the underlying conditions will shift or hold. The answers will become clearer in the weeks ahead.

Originally reported by The Gradient.

This article was independently rewritten by ManyPress editorial AI from reporting originally published by The Gradient.

Artificial Intelligence