On AI: If you genuinely believe what we have now is remotely like a human being, you may have a surprisingly low opinion of human beings.
There is a sentence I keep hearing lately, delivered with an odd mix of amazement and alarm, usually by people who want you to know they are paying attention to the moment we’re in. “It’s getting really human now.” Sometimes they lean in when they say it. Sometimes they lower their voice. As if they’re sharing a secret. As if the machine might be listening.
I am constantly amazed at how casually we’ve begun to downgrade the complexity, depth, and lived reality of being human in order to make sense of a tool that is, at heart, extraordinarily good at sounding right.
Because when people say AI feels “human-like,” what they’re usually responding to isn’t intelligence in any meaningful sense. It’s fluency. Tone. Confidence. The absence of hesitation. The ability to produce something, anything, on demand, fully formed and politely delivered. In a world trained to equate articulation with intelligence, this feels uncanny. In a culture addicted to output, it feels miraculous.
Are We Setting a Low Bar for Humanity?
Humans are not defined by how smoothly they speak or how quickly they answer.
They are defined by the friction between thought and expression. By the pause before saying something that cannot be taken back. By the memory of what happened the last time they spoke too quickly. By the quiet calculation of risk, consequence, and care. A human answer is shaped not just by what is true, but by what it will cost to say it.
AI doesn’t carry any of that weight. It doesn’t hesitate because it might be wrong. It doesn’t hold back because someone could be hurt. It doesn’t sit with uncertainty because uncertainty has never punished it before. It produces language without consequence, which is precisely why it is so good at producing language.
If that feels “human,” it’s worth asking what version of humanity we’ve been benchmarking against.
This Didn’t Start With AI. It Started With Us.
What’s striking is that this confusion didn’t arrive with AI. It arrived long before it. Corporate life, digital media, and performance-driven work quietly trained generations of people to behave in ways that are optimized for visibility rather than judgment. Speed over thought. Confidence over care. Reaction over reflection. We built systems that reward immediacy, and then we were surprised when a system designed for immediacy began to look familiar.
In many organizations today, the ideal employee is someone who responds instantly, fills the page, never says “I don’t know yet,” and always has an answer. Under those conditions, of course AI looks human-like. We engineered our definition of competence to match its strengths.
This is not a technological revelation. It’s a cultural one.
The most common mistake critics make in this moment is to confuse output with intelligence. Humans don’t demonstrate intelligence by how much they produce. They demonstrate it by what they choose not to say, by when they interrupt the flow of a conversation to reframe the question, by when they resist the pressure to respond simply because a response is expected.
Judgment is subtractive. It’s knowing what to leave out. It’s knowing when silence is more responsible than speech. It’s knowing that being technically correct and being meaningfully right are not the same thing.
AI cannot do this, not because it lacks processing power, but because it has no stake in the outcome. It doesn’t carry memory as consequence. It doesn’t protect a reputation. It doesn’t feel embarrassment at a misstep or pride in restraint. It doesn’t learn because something mattered; it learns because something correlated.
And yet, rather than interrogating this distinction, much of the criticism around AI chooses the easier path. It debates originality, authorship, and authenticity while quietly accepting work cultures that stopped rewarding deep thinking years ago. It defends outputs instead of standards. It protects habit rather than interrogating how shallow our expectations of “good work” had already become.
In that sense, AI isn’t threatening humanity by replacing it. It’s threatening humanity by revealing how much of our professional output was already performative. Decks that looked thoughtful but weren’t. Strategies that sounded smart but changed nothing. Content designed to fill space rather than shift understanding. AI didn’t invent slop. It exposed how much of it we had normalized, polished, and shipped with confidence.
The irony is that the things humans still do better than machines are precisely the things modern systems stopped valuing. Sensing when the room has shifted. Recognizing when a sentence is correct but wrong. Carrying responsibility across years, not prompts. Integrating lived experience into decisions without being able to fully explain how the conclusion was reached.
What We’re Willing to Settle For
This moment doesn’t require us to choose between humans and machines. It requires us to choose between cheap humanity and full humanity. Cheap humanity is fluent, fast, confident, and disposable. Full humanity is slower, messier, opinionated, and accountable.
AI is extraordinarily good at cheap humanity.
If we want humans to remain essential, not symbolically, not nostalgically, but practically, we have to stop designing systems that reward machine-like behavior and then acting shocked when machines outperform us at it.
So no, this isn’t an argument against artificial intelligence. It’s an argument against lowering our expectations of ourselves.
Because if this is what we’re calling “human-like,” we should aim higher.