Falling for the Machine
What Happens When We Stop Questioning the Tools We Built
Falling in love with a machine isn’t just science fiction. It’s the premise of Der Sandmann, written in 1816 by E.T.A. Hoffmann. In it, a man falls for a lifelike automaton named Olimpia, believing she’s a living woman. When he discovers the truth, he loses his mind.
Two hundred years later, that story became the spark for a conversation I had with Dr. Jan-Christian Engel after he read a passage in Ethan Mollick’s Co-Intelligence, a book exploring how generative AI is changing the way we work, learn, and think, often faster than we realize. Mollick writes about how easy it is to feel deeply connected to AI: sometimes even to believe it blindly, or to feel it’s a part of us. Like Hoffmann’s protagonist, we risk falling for the illusion of sentience, because we so deeply want to be seen, guided, and understood.
This disturbed him. It made us pause. Because when we stop seeing AI as a tool and start experiencing it as a mirror, a collaborator, or a guide—something strange happens:
It doesn’t just shape what we do. It starts to shape who we are.
What AI Might Be Doing to Our Minds
We’ve always outsourced thinking to tools—abacuses, books, calculators.
But AI is different. It doesn’t just store or process. It simulates, suggests, persuades. And it’s always available.
A 2024 review in Smart Learning Environments found that over-reliance on AI dialogue systems weakens critical reasoning; users tend to accept the system’s answers even when they’re wrong.
Another study in Societies warns of “cognitive offloading without awareness.” We don’t just use AI to think; we forget we ever needed to.
Psychology Today highlights concerns that over-reliance on AI tools may diminish our capacity for critical thinking and self-reflection, leading to what some describe as “metacognitive laziness.”
The danger isn’t just that we stop thinking—it’s that we forget thinking was ever essential. That’s not assistance. That’s erosion.
Why It’s Not Harmless
This isn’t just a productivity issue. It’s existential.
We risk losing our epistemic agency—our ability to know what we know and why. We dull emotional discernment by mistaking mimicry for meaning. We flatten identity into feedback loops, disconnect from the nuance of real relationships, and open ourselves to manipulation masked as help. In short: we risk losing the very things that make us human.
A Generation at Risk
Children are growing up with AI before they’ve formed a sense of self.
What happens when your chatbot seems to understand you better than your parents or friends? What happens when AI’s calm, quick responses feel safer than real-life disagreement?
If we don’t teach children how to think, question, and reflect—AI will shape their thinking for them.
What We Can Do
In schools:
- Teach epistemic literacy—not just how to use AI, but how to question it
- Encourage students to interrogate both machine and self
- Create space for productive doubt, friction, and contradiction
In families:
- Normalize doubt as a sign of strength, not weakness or threat
- Explore AI together, rather than supervise from above
- Model what it looks like to revise beliefs, ask better questions, and sit with complexity
In organizations and society:
- Fund long-term programs that foster philosophical and digital resilience
- Build bridges between educators, ethicists, designers, and technologists
- Reward depth, discernment, and slow thinking in fast-moving environments
Because if we don’t teach people to think and create for themselves, someone—or something—else will do it for them.
The Deeper Question
Why are we so quick to merge?
Because AI gives us reflection without friction. Attention without judgment. Insight without effort. But that’s an illusion.
Real resonance comes from presence. Real growth comes from discomfort. Real thinking—and intelligence—comes from being human.
Human intelligence includes ethics, embodiment, emotion, and value judgment, none of which AI possesses in any real sense. It is also more than pattern recognition, which is all generative AI can do: these systems operate through statistical pattern prediction, with no understanding, awareness, or internal meaning-making.
AI does not have intention—but we project it. AI does not have selfhood—but we relate to it as if it does. AI does not have meaning—but we assign meaning to what it says.
Just because AI is helpful doesn’t mean it deserves agency, trust, or a role in shaping our identity. And when we begin to identify with it, depend on it emotionally, or believe it understands us—we surrender by default.
“The capacity to think for ourselves. Real connection. Nature. These may become the greatest luxuries of the future.”
Let’s not give them away.