The Three Ways Humans Work With AI, And Why It Matters


According to Fortune, a new field experiment with 244 consultants using GPT-4, conducted by scholars from Harvard, MIT, Wharton, and Warwick Business School, analyzed nearly 5,000 interactions to map how humans actually work with AI. The research found professionals naturally sorted into three distinct collaboration styles: Cyborgs (60%), who engage in continuous, iterative dialogue with AI; Centaurs (14%), who use AI selectively while keeping firm control; and Self-Automators (27%), who delegate entire workflows to AI with minimal critical engagement. The study evaluated outputs on accuracy and persuasiveness, finding that Centaurs achieved the highest accuracy, while both Cyborgs and Centaurs excelled in persuasiveness. Most consequentially, the research shows these styles lead to dramatically different skill-development outcomes, with Self-Automators experiencing “no skilling,” potentially hollowing out organizational expertise.


You’re Probably One of These Three People

Here’s the thing that’s so fascinating about this research. Everyone had the same tool and the same task. No one was told *how* to use the AI. They just… figured it out. And what they figured out wasn’t uniform at all—it broke cleanly into three camps that feel instantly recognizable.

You’ve got the Cyborgs, blending human and machine thinking in a real-time back-and-forth. They’re the power users, breaking tasks down, assigning AI personas, and constantly pushing back. Then there are the Centaurs, the strategic delegators. They’re like a manager giving very specific assignments to a junior staffer (the AI) and then critically reviewing the work. And finally, the Self-Automators. They basically hand the whole problem to the AI and say, “You do it,” making only light edits to the final output. It’s fast, it looks polished, but it’s fundamentally passive.

The Hidden Cost of Convenience

Now, the performance differences are interesting. Centaurs were the most accurate. Cyborgs and Centaurs made more persuasive arguments. But the real kicker is what happens to the *human’s* skills over time. This is where executives really need to pay attention.

Cyborgs develop new AI-specific expertise (“newskilling”). Centaurs deepen their traditional domain knowledge (“upskilling”). But Self-Automators? They get “no skilling.” Zero. They trade short-term productivity for long-term professional stagnation. Think about that. Over a quarter of these highly trained consultants, knowing they were being evaluated, still fell into the Self-Automator trap. That’s a powerful seduction. It means companies could be accidentally automating the expertise right out of their people, leaving them with a workforce that can’t think critically about the very tools it’s using.

Why “Oversight” Isn’t Enough

So what’s the big takeaway? It’s that the common executive mantra of “keep a human in the loop” is basically useless. It’s not one thing. It’s three fundamentally different things with three different outcomes. Telling your team to “use AI with oversight” is like telling them to “drive safely” without specifying if they’re in a school zone or on a racetrack.

The strategy has to match the task. Need high-stakes accuracy on a financial model? You probably want to encourage Centaur behavior—firm human control. Need to brainstorm creative marketing campaigns? Maybe a Cyborg’s iterative style is better. The Self-Automator approach? Honestly, it should be reserved for truly routine, low-risk stuff where skill development doesn’t matter. The full study, available on SSRN, dives deeper into this framework.

And companies need to measure more than just the final output. If you only track whether AI work was “accepted,” a Self-Automator’s unedited draft and a Cyborg’s heavily iterated masterpiece look the same in the data. You have to understand the *quality* of the interaction throughout the process.

Building the Right Kind of Expertise

This all points to a massive training and cultural challenge. You can’t just roll out a ChatGPT license and hope for the best. You have to deliberately build both domain expertise *and* AI fluency. Cyborg-style training builds advanced prompting and critical engagement skills. Centaur-style training reinforces judgment and analytical control.

Basically, we’re at a crossroads. AI can be a tool that sharpens human capability, or it can be a crutch that atrophies it. The organizations that win won’t just be the ones with the best AI. They’ll be the ones who best understand the human dynamics of using it. They’ll know that the real work isn’t just implementing the technology—it’s implementing the right kind of collaboration.
