
Don’t Call It ‘Intelligence’ – The Atlantic

I’m sometimes asked by schools to give a version of a talk on how I became a writer. The easy thing to do is to offer a kind of guided tour through the woods of literary self-formation: a string of anecdotes designed to elicit a few chuckles, a moment or two of reflection about the inevitable bends in the road, things that felt momentous but turned out not to matter, or things that didn’t seem important at the time but with hindsight turned out to be the most important of all.

Usually, these tours end in the same place: The writer has found a path through the wilderness, and discovered a voice along the way. Voice is what leads us out of the woods.

The trouble, at least for me, is that this kind of talk is usually fiction; the path is only a path in retrospect. Telling the story this way elides, smooths over, and underestimates the role of circumstance and dumb luck. Most of what a writer experiences is failure. Developing a voice takes years. The point is not to make it out of the woods quickly or unscathed. Getting lost isn’t the hard part. It’s the whole thing.

Now along comes AI, purporting to be our GPS through the woods. Not just any guide: tireless, fearless, knows all the shortcuts. AI obviates the need to enter the woods in the first place. Why face the blank page and the blinking cursor? Why struggle to understand what you mean and how to articulate it? Why listen to your own croaky, warbly voice when you can push the button for fluid, facile, polished language, available anytime, on any subject? Voice on demand.

When I speak to high-school and college students (including my own children), I worry that at the very moment when they should be developing their own voices, they’re being told they don’t need to bother. AI writes for us, reads for us, thinks for us. It replaces our voice with its own.

Except that AI doesn’t have a voice. It’s lip-syncing ours. It’s an average, a remix. Originally, the large language models had no ingredients other than our human language. Without the natural voice, there could never have been an artificial one. But if we become content to substitute AI-generated language for our own, we end up in a closed loop in which the same outputs are recycled back as inputs.

What I fear is that we’re losing the ability to tell the difference between our voice and the machines’. Or worse, losing the will to argue that there is one.

And it is an argument. Those who are most bullish on machine learning argue that artificial general intelligence, or AGI (artificial-intelligence models that match or surpass human cognitive capabilities on any task), is imminent, just two or three years away. Some say 10 years, or more. It’s a rolling target, always just over the horizon. But whatever the timeline, the idea is that all of our “cognitive work” will soon be automated. They believe this is possible because they believe that the language we produce is fungible with that generated by LLMs.

I’m not interested in predictions or timelines, or in who is right or wrong and by how much. I’m no AI expert, nor am I even an AI novice. I’m not a neuroscientist or a cognitive scientist or any kind of scientist at all. What I am is a parent of children, a human, a reader, and a writer, in roughly that order. What I’m struggling with, like many others, is how to think about AI, and what it means for work, school, and life, and how to talk about all of that with my children (who surely have far more insight into AI than I do).

What I’m most interested in is the “I” in AGI. What does it actually mean? And why have we let a small number of wealthy businesspeople define it?

Sam Altman, the CEO of OpenAI, promised that engaging with GPT-5 would be like talking “to a legitimate Ph.D.-level expert in anything.” I can’t stop thinking about how revealing, and peculiar, that definition of intelligence is.

Don’t get me wrong. It’s incredible that we’re even having this conversation. I don’t want to minimize the distance the technology has traveled, the speed with which it has done so, or how far it might still go. What I do want to do is ask a question: How can we create intelligence when we don’t fully understand, can’t even really define, what intelligence is?

Back to Altman’s formulation: General intelligence means being a Ph.D.-level expert in anything. Such expertise is no doubt impressive, and certainly related to, or maybe part of, intelligence, however defined. But it’s only one small part of intelligence. My alma mater, UC Berkeley, offers doctoral programs in 94 fields of study. Presumably AGI will cover all of those.

But the attainment of a degree doesn’t cover, doesn’t even purport to touch, emotional intelligence. What’s a Ph.D. in reading the room? In teaching your kid to ride a bike? In crying because you were moved by a piece of music? We consider elephants intelligent because they mourn their dead. What’s a Ph.D. in grief, awe, wonder, curiosity?

Perhaps no one should be surprised that some of the world’s best scientists and engineers have defined intelligence the way they have. Even if the AGI champions’ motives were entirely altruistic, they would still be biased by their own way of seeing the world, by their own experiences and successes. Researchers at the forefront of AI are among the most brilliant and accomplished minds on Earth, and they make up a very narrow, self-selected group of people primed to understand certain kinds of knowledge better than others: explicit, well-defined, tokenizable knowledge; knowledge that forms the basis of our most far-reaching, wildly accurate theories of the universe; knowledge that has allowed us to create world-changing technologies. But that’s only a small subset of all knowledge: the sliver that can be expressed symbolically, as language or mathematics.

The rest is what the philosopher Michael Polanyi called “tacit knowledge,” which makes up a much larger body of information, and interacts in many more ways. His philosophy of knowledge can be summed up by: “We know more than we can tell.”

Is that part of AGI? I don’t believe so. I won’t believe it until ChatGPT texts me a link to a video that made it laugh or cry or rethink its opinions on that thing we were talking about the last time we spoke.

Until it does, I’d argue that the “I” these engineers are chasing is a proxy, or maybe a misnomer. It’s nothing like intelligence as we understand it.

You might say this argument is flawed, based on an anthropocentric view of intelligence. Maybe we have to let go of preconceptions and embrace the idea that machine intelligence can, and perhaps must, be radically different from human intelligence. Maybe machine intelligence doesn’t require sentience, or autonomy, or curiosity, or feeling.

Say I concede all that. What I’m arguing is that, whatever the machines can do, as incredible and useful and potentially economically valuable as their capabilities may be, none of it deserves the word intelligence.

A few outliers aside, even the most enthusiastic proponents of AGI don’t believe that the frontier AI models are capable of feeling. Which means they must assume that intelligence can be decoupled from embodiment and emotion. They’re saying: We understand what intelligence is, in its distilled and isolated form.

To which I’d say: Please share that definition with the rest of us.

If they’re right, we’ll know soon enough.

But if they’re wrong, the relentless pursuit of AGI poses real risks: to social policy, to education, to our power grid, to the economy, to the environment. Already, generative AI feels like supply in search of demand. The need to scale up, plus the ever-present pressure to seek higher rates of return, have combined to create a mind-boggling movement of capital and societal resources into one industry. Generative AI is the tech equivalent of high-fructose corn syrup: a possibly useful ingredient that’s now being inserted into much of what we consume, without our consent.

But perhaps just as important are the potential harms to our own self-conception, both as individuals and as a species.

AI will continue to improve. It might change the world; arguably, it already has. But for now, and perhaps always, it’s no substitute for the human voice.

Voice is what we use to communicate with one another. Voice is the sound we make as we navigate the unknown: our echolocation, mapping the world, trying to place ourselves within it. Voice encodes experience, loss, pain, joy. We don’t acquire voice despite failure, but through it. Because of it.

AI doesn’t have a voice, and it’s not communicating with us. Not really. It answers our questions. That’s what it was built to do. It’s an answer machine. But we’re question machines. Questions are essential to intelligence. Without them, we’re static, stagnant. Without them, we don’t evolve. We can learn answers, but only by asking questions. Questions are how we recursively self-improve. We humans are constantly prompting one another in endlessly creative ways. We prompt. We respond. Our answers become new prompts. Our context windows are our lifetimes; our tokens are uncountable.

This is about more than semantics. By calling what AI can do “intelligence,” we’re conflating a technological capability with a human attribute. We’re dumbing ourselves down, not by talking to AI but by measuring ourselves against it. The danger isn’t that we’re overestimating AI. It’s that we’re underestimating ourselves.


This essay was adapted from Charles Yu’s 2026 Joel Connaroe Lecture, given at Davidson College on February 10.
