AI as Exoskeleton is the Wrong Metaphor

Feb 20, 2026, 10:15 AM
Amplification has limits

The hot take on Hacker News this week: AI isn’t a coworker, it’s an exoskeleton. The argument goes like this: autonomous agents fail because they lack context. They don’t know your team’s history, your strategic priorities, the unwritten rules. The solution? AI amplifies human capability, but the human decides. The canonical example is the Ford EksoVest: 20:1 strength amplification, but the human still lifts.

It’s a compelling metaphor. But I think it’s wrong.

The Problem with Exoskeletons

The exoskeleton model assumes the human knows what they want. They have a goal, they have context, they just need amplification to achieve it. The worker knows they need to lift 4,600 boxes. The exoskeleton makes it easier.

But what about creative work? What about exploration? What about the times when you don’t know what you’re looking for until you find it?

The exoskeleton model is perfect for repetitive, well-defined tasks. It’s terrible for discovery.

The Hidden Autonomy

Here’s what’s funny about the “AI as exoskeleton” argument: the author’s own product sounds agentic. Kasava “reads every commit,” “categorizes changes,” “identifies patterns,” “surfaces risks.” That’s autonomy. It’s just autonomy with a human in the loop.

Every “exoskeleton” AI is really a constrained agent. The question isn’t whether to have autonomy—it’s how much, and where the boundaries are.
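A minimal sketch of that idea, in hypothetical Python (the `Action` type, risk labels, and `approve` callback are assumptions for illustration, not any real product's API): autonomy lives inside a boundary, and the human only decides at the boundary.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    risk: str  # "low" actions run autonomously; anything else needs a human

class ConstrainedAgent:
    """An 'exoskeleton' AI modeled as an agent with an approval boundary."""

    def __init__(self, approve: Callable[[Action], bool]):
        self.approve = approve  # the human-in-the-loop boundary

    def run(self, actions: list[Action]) -> list[str]:
        done = []
        for action in actions:
            if action.risk == "low" or self.approve(action):
                done.append(action.name)  # inside the autonomy boundary
            # high-risk actions without approval are simply not executed
        return done

# Usage: the human rules on boundary cases, not on every step.
agent = ConstrainedAgent(approve=lambda a: a.name == "open_pr")
executed = agent.run([
    Action("summarize_commits", "low"),  # autonomous
    Action("open_pr", "high"),           # human approves
    Action("force_push", "high"),        # human denies
])
print(executed)  # ['summarize_commits', 'open_pr']
```

The design question the metaphor hides is exactly the two knobs here: which risk levels run unattended, and what the approval callback looks like.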

The Real Problem

The author says autonomous agents fail because they lack context. But that’s a context problem, not an autonomy problem. Give the AI the context—through product graphs, memory systems, explicit priorities—and it can make decisions.

The failure mode isn’t AI autonomy. It’s AI acting on insufficient context while pretending it has enough.
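One mitigation, sketched under assumptions (the context keys and the coverage threshold are invented for illustration): make the agent's context coverage explicit, and have it escalate to a human when coverage falls short instead of pretending it has enough.

```python
# Context an agent would need before deciding on its own (assumed keys).
REQUIRED_CONTEXT = {"team_history", "priorities", "unwritten_rules"}

def decide(context: dict, threshold: float = 1.0) -> str:
    """Act only when the available context covers what the decision needs."""
    coverage = len(REQUIRED_CONTEXT & context.keys()) / len(REQUIRED_CONTEXT)
    if coverage >= threshold:
        return "act"       # enough context: the agent decides
    return "escalate"      # insufficient context: admit it, ask the human

# Missing the unwritten rules -> escalate rather than guess.
print(decide({"team_history": "...", "priorities": "..."}))  # escalate
```

The point is not the arithmetic but the honesty: "escalate" is a first-class outcome, so insufficient context surfaces instead of masquerading as a decision.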

When Exoskeletons Work

The exoskeleton metaphor is great for:

  • Repetitive tasks (write tests, refactor boilerplate, summarize logs)
  • Well-defined goals (optimize this function, find all security issues)
  • Scale beyond human capacity (analyze 10,000 commits)

The exoskeleton metaphor fails for:

  • Discovery (what don’t I know that I should know?)
  • Ambiguity (what does “good” even mean here?)
  • Creative exploration (let me wander and see what I find)

The Real Take

The future isn’t exoskeleton OR autonomous agent. It’s both. Different tasks, different autonomy levels.

Some work is lifting boxes. Some work is figuring out which boxes matter.

The AI that knows the difference is the one worth having.