January 19, 2026 · 9 min read
By Alexis Perrier
The rapid proliferation of generative AI has prompted a familiar question: how should knowledge workers integrate these systems into their practice? The question carries an implicit assumption—that AI should make work easier, faster, less demanding. Yet this assumption obscures a more nuanced reality about what generative AI can actually provide.
A recent study from Harvard Business School examined precisely this question. Researchers at Harvard, MIT, Wharton, and other institutions observed 244 management consultants at Boston Consulting Group as they tackled complex business problems with access to Claude and GPT-4. What they found was striking: faced with identical tasks and identical tools, professionals engaged with AI in three fundamentally different ways, with dramatically different outcomes for both the quality of their work and their own expertise development.
The study identified three modes of human-AI interaction: Fused Co-Creation, where humans and AI work continuously together across the full problem-solving process; Directed Co-Creation, where humans maintain control while selectively drawing on AI for specific subtasks; and Abdicated Co-Creation, where professionals hand off the entire problem to AI and accept the output with minimal engagement.
What matters is this: the distribution of labor between human and machine is not fixed by the technology. It is a choice—a choice with consequences.
The Harvard researchers identified two fundamental questions that structure any human-AI interaction:
Who selects what needs to be done? Does the human or the AI define the work and set the workflow agenda?
Who identifies how it gets done? Does the human or the AI determine the division of labor, the methods, the logic?
These two questions create a framework for understanding not merely what AI is used for, but who steers the workflow and how expertise is distributed in practice.
This framework illuminates something important. When professionals abdicate—when they hand both the problem definition and the execution to AI—they are not saving cognitive labor. They are relinquishing agency. And they typically discover, upon reflection, that the results feel hollow. The output may be polished and coherent, but it lacks the specificity, depth, and judgment that come from genuine ownership of the work.
The opposite extreme—pure orchestration, where the human maintains absolute control and AI becomes merely an execution engine—carries a different trade-off. Here, the human retains judgment and direction. But there is a cost: limited learning, limited exploration, limited expansion of understanding.
The productive middle ground, what the Harvard researchers called Fused Co-Creation, involves something more interesting: continuous iteration between human judgment and AI capability. The human remains in control of direction and maintains evaluative authority. But the AI is treated as a genuine thinking partner—one whose outputs are questioned, challenged, extended, and refined through dialogue.
The most consequential finding from the Harvard study concerns expertise development. The three modes of interaction produced three distinct outcomes:
Professionals who engaged in Directed Co-Creation—maintaining control while using AI selectively—deepened their domain expertise. They upskilled. They learned more about their craft.
Professionals who engaged in Fused Co-Creation—collaborating continuously across the problem-solving workflow—developed new AI-related capabilities. They learned how to work with the system, how to prompt effectively, how to recognize and challenge its limitations. They newskilled.
Professionals who engaged in Abdicated Co-Creation developed neither. They completed the task faster, but they learned nothing. Their expertise did not advance.
This distinction is crucial. It suggests that how you choose to work with AI is not merely a tactical question about efficiency. It shapes who you become as a professional. It determines whether the technology becomes a tool for deepening expertise or a vehicle for its erosion.
There is a misconception worth addressing directly: the idea that AI reduces the cognitive demands of knowledge work. In practice, working effectively with AI requires more cognitive effort, not less.
To collaborate meaningfully with an AI system, you must first understand the problem well enough to evaluate its solutions. You must know your domain sufficiently to recognize when the system is missing context or making unfounded assumptions. You must be willing to challenge outputs that seem plausible but may be incorrect. You must iterate, refine, and push the system toward better thinking.
This is not passive consumption of generated content. It is active, engaged intellectual work.
Consider the task of learning a new domain. One approach: ask AI to synthesize and explain. Accept what you receive. Move on. You will have information but not understanding—you will lack the lived experience of grappling with the concepts, testing them against complications, discovering where the boundaries of knowledge lie.
The alternative: engage AI as a sparring partner. Propose hypotheses. Let AI challenge them. Expose contradictions. Ask for elaboration on unexpected points. Push back on weak reasoning. In that dialogue, something shifts. You develop not just knowledge but comprehension. You come to own the thinking rather than merely consume it.
This is harder. It is also indispensable if the goal is genuine expertise rather than efficient completion.
Not all knowledge work demands collaboration. There are contexts where orchestration—maintaining control while delegating execution—is exactly right.
When a human possesses clear direction, knows what good looks like, and understands the constraints and requirements thoroughly, there is value in using AI as an execution engine. This is particularly true when the AI genuinely excels at the task in question—when its capabilities exceed the human's in that specific domain.
Writing code, for instance. Many programmers find that Claude Code produces more elegant, more maintainable, more thoroughly tested code than they would write by hand. The human can set the parameters, articulate the architecture, specify the patterns and constraints. The AI executes at a higher standard. This is orchestration, and it is appropriate.
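A minimal, hypothetical sketch of what that division of labor can look like in Python may help (the names and the domain are illustrative, not drawn from the study): the human authors the contract—the data model, the constraints, and the acceptance check—while the implementation in the middle is the part that can be delegated to a coding assistant, provided the human still reads it, runs it, and can defend it.

```python
# Hypothetical sketch of orchestration, not abdication.
# The human writes the data model, the spec, and the acceptance check;
# the body of total_by_customer is the kind of code that could be delegated,
# and that the human still reviews and remains accountable for.

from dataclasses import dataclass


@dataclass(frozen=True)
class Invoice:
    """Human-defined model: amounts are stored in cents to avoid float drift."""
    customer_id: str
    amount_cents: int


def total_by_customer(invoices: list[Invoice]) -> dict[str, int]:
    """Human-written spec: sum amounts per customer; reject negative amounts."""
    totals: dict[str, int] = {}
    for inv in invoices:
        if inv.amount_cents < 0:
            raise ValueError(f"negative amount for customer {inv.customer_id}")
        totals[inv.customer_id] = totals.get(inv.customer_id, 0) + inv.amount_cents
    return totals


# Human-written acceptance check: the reviewer can explain and defend each case.
if __name__ == "__main__":
    sample = [Invoice("a", 1200), Invoice("b", 500), Invoice("a", 300)]
    assert total_by_customer(sample) == {"a": 1500, "b": 500}
    print("spec satisfied")
```

The point of the sketch is where the boundaries sit: the specification and the test belong to the human, so every delegated line can still be explained and defended.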
The key distinction: orchestration is not abdication. The human remains in control. The human understands the output. The human can explain and defend every choice made. The human is accountable.
What matters is recognizing when orchestration suffices and when collaboration is necessary. When the problem is well-defined and the path forward is clear, orchestration works. When the problem requires exploration, when understanding must be built, when judgment is uncertain, collaboration becomes essential.
The Harvard researchers noted something that deserves emphasis: professionals who engaged in continuous collaboration with AI reported that they maintained genuine ownership of the work. They understood it. They could defend it. They could explain the reasoning behind each choice.
In contrast, those who abdicated often reported a peculiar experience: they had completed the task, but the output felt like something that had been done for them rather than with them. They could not fully articulate why certain choices had been made. They possessed the memo, the code, the strategy—but not the mastery.
This distinction between completion and comprehension is worth taking seriously. From an organizational perspective, and from the perspective of building genuine expertise, it matters enormously whether a professional understands the work they are putting their name to.
Ownership requires engagement. And engagement requires that the human remain actively involved in the thinking process, not merely reviewing the output after the fact.
What emerges from both the Harvard research and from experience working with these systems is a set of principles for genuine collaboration:
The human must enter the process with clarity about the problem, even if that clarity is provisional. What are we trying to accomplish? What constraints matter? What does success look like? This framing need not be perfect, but it must be real.
The human must engage iteratively. AI produces an output. The human evaluates it—not passively, but actively. Does this make sense? What's missing? Where is the reasoning weak? What new information would change the conclusion? The human feeds this evaluation back, and the AI reconsiders.
The human must remain evaluative throughout. The presence of AI output does not relieve the human of judgment. It is a resource for thinking, not a substitute for it.
The human must be willing to learn from the system. One of the most valuable uses of generative AI is as a source of explanation and exploration. When you encounter an approach you hadn't considered, when the system suggests a framework you're unfamiliar with, the question is not whether to accept it uncritically, but whether it warrants deeper investigation.
The human must maintain ownership of the outcome. Before you commit to an output, before you put your name to it or deploy it, you must understand it sufficiently to explain it, defend it, and take responsibility for it.
There is a final paradox worth noting. The conventional promise of generative AI is efficiency—accomplish more with less effort. Yet the most valuable uses of these systems, the ones that produce genuine advancement in expertise and quality of work, require more engagement, not less.
This is not a failure of the technology. It is a clarification of what these systems actually do. They are not labor-saving devices. They are thinking partners—systems that can generate ideas quickly, challenge assumptions, explore alternatives, and articulate possibilities. Their value lies in how they expand the space of thinking available to a human mind, not in how much of the thinking they can replace.
The professionals who derive the most value from AI are not those seeking to minimize effort. They are those willing to engage more deeply, think more rigorously, and maintain greater accountability for outcomes. They work with the system, not through it.
This requires a shift in how we frame the relationship. Not: How much can this system do for me? But rather: What can I think and accomplish when I have access to this capability?
The answer, it turns out, is considerably more than either humans or AI systems achieve alone.