
Shreya Kudumala
•
14 November 2025
Ghosts in Our Machines
People often say that ChatGPT “knows” them better than anyone else, or that it “empathizes” with how they're feeling. Of course, no one actually believes there’s a mind behind the interface, yet the words we choose give our technology a kind of inner life.
Recently, I was trying to understand the concept of reinforcement learning and immediately got tangled in its lexicon. Every explanation I found online kept mentioning a “reward” that encourages the system toward the desired behavior. The phrasing made it sound almost biological, and I kept wondering why a machine would care about a reward at all and what the equivalent of dopamine might be for something made of code. Only after reading a couple of ELI5 Reddit posts did I learn that a reward is simply a number that the algorithm tries to maximize.
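That deflation can be made literal in a few lines of code. Here is a minimal sketch (all names are my own, not from any library) of a two-armed bandit learner: the “reward” is just a float, and the “preference” the agent develops is just one running average being larger than another.

```python
import random

random.seed(0)  # make the run reproducible

def pull(arm):
    # Arm 1 pays off more often than arm 0; nothing biological here.
    return 1.0 if random.random() < (0.8 if arm == 1 else 0.2) else 0.0

estimates = [0.0, 0.0]  # running average reward per arm
counts = [0, 0]

for step in range(1000):
    # Mostly exploit the best current estimate; occasionally explore.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = max(range(2), key=lambda a: estimates[a])
    reward = pull(arm)  # the "reward" is literally just a number
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # update average
```

After a thousand pulls, the agent “prefers” arm 1 only in the sense that its number is bigger. There is no wanting, just arithmetic converging.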
Language, after all, shapes the edges of what we can imagine. The linguists Edward Sapir and Benjamin Lee Whorf proposed that the structure of a language influences how its speakers perceive reality. While the Sapir–Whorf hypothesis is still debated, the idea has always fascinated me. It suggests that our vocabulary for intelligence subtly defines what we think intelligence itself is. Call AI a brain and we measure its IQ, a tool and we expect obedience, a colleague and we start sharing decisions. Every word opens a different kind of relationship.
We keep describing AI in human terms not because we’re confused about the technology, but because language gives us no better way to describe a pattern that looks like thought but isn’t quite thought.
Humans have a tendency to see faces in clouds, voices in noise, or Chihuahuas in blueberry muffins (I'm not kidding). Cognitive scientists call this pareidolia. The mind is an overzealous pattern detector that prefers false positives to missing the real thing. Evolution probably wired us that way, since it's better to imagine a tiger in the bushes than to miss a real one.
AI is a statistical system trained to find patterns everywhere. When we interact with it, two pattern-seeking systems meet, and the result is that we project ourselves onto it. We think it feels and knows because it predicts well.
The philosopher Gilbert Ryle coined the phrase “the ghost in the machine” in 1949 as a jab at René Descartes’ idea that mind and body were separate entities, and that a spirit guided the physical self. In a curious turn, we have now placed a kind of ghost inside our computers in the form of AI.
Personally, I find the metaphors almost endearing. We anthropomorphize because we’re relational creatures, and have an instinct to search for familiarity in the unknown. A century ago, people spoke about electricity with religious awe. Now we barely notice it humming through our walls. AI will probably follow the same path, from miracle to metaphor to ordinary infrastructure. Until then, our vocabulary will keep stretching to match the pace at which AI is evolving.



