AI is a powerful—but dangerous—tool. In many ways, it’s the tool of tools—the most significant we’ve ever created. And yet, we’re still reeling from the effects of earlier tools: industrialization, the automobile, even the mortgage. Each of them changed our way of life—for better and for worse.
Tools are never neutral. They carry the assumptions, purposes, and blind spots of the culture that made them. More importantly, they carry unintended consequences. Some call this the Frankenstein Principle: we build tools to serve us, but over time, those tools reshape us in ways we never saw coming.
AI is here, and it’s not going away. I’m both excited by it and wary of it.
Why the wariness? Because tools change us—and rarely in ways we anticipate.
I was chatting with Bnonn about this last week. I’ll probably butcher how he said it, but the gist was this: whenever you encounter a tool, ask what problem it’s trying to solve. In the case of AI, it’s trying to code for you, write for you—even think for you.
That ties into one of the most convincing concerns I’ve heard—this one from Wiley: AI will inevitably lead to skill atrophy. And not just in one area, but across the board. The more you outsource, the less you practice. The less you practice, the more you lose.
So yes, I’m excited. But I’m also wary.
Why am I excited?
In the immortal words of Alex Jones: “I’m kinda retarded.”
I’ve had what you might call “learning disabilities” my entire life—dyslexia, dysgraphia, and some serious memory issues when it comes to spelling and pronunciation. In the documentary Holy Rollers, there’s a scene where I’m trying to figure out how to pronounce “ameliorate” using a website. What cracked me up is that I had to look it up again—the day the film debuted—five years later.
There are just some things I don’t retain, no matter how hard I try. I’ll practice a word Sunday morning and still forget how to say it once I get in the pulpit. I scramble for a synonym on the spot. If you’ve followed me for a while, you’ve probably noticed the evidence of my dysgraphia. My brain does weird stuff when I write. I’ll skip words. Confuse form with from, some with so many, numeral with number. I usually catch it on a second read—but not always.
These “disabilities” shaped me. To learn to read, I had to rely on deduction and contextual clues. And oddly enough, there’s a correlation between high performers and dyslexia. One theory is that since traditional learning is frustrating, you’re forced to come up with alternative ways of getting things done. That tracks with my experience. I’ve always been a quick study—mostly because I had no choice but to really pay attention. I’ll come back to that.
If I wanted to communicate through the written word, I had to grow thick skin. People rightly pointed out my spelling and grammar issues. So I learned to laugh at myself. Grammar Nazis don’t faze me. They’re usually trying to help—and even when they’re not, who cares? Without my willingness to create, they wouldn’t have anything to correct. All the ways this stuff held me back? They also shaped who I am.
I bring this up for two reasons.
First, I write a lot. I’ve never stopped. Since college, I’ve been putting my thoughts on the screen. It started with email devotionals I sent out several times a week. One was a long reflection on why switching to cashierless checkouts at Kroger was bad for the community. They were full of errors—but also full of original thoughts on random topics.
I never journaled. But I did manuscript my sermons and talks, and I’ve taken notes on every book I’ve read over the last twenty years. Most of it lives in old Gmail drafts or Apple Notes. It’s all written in what you might call my slightly-retarded Fosterian shorthand. The idea that formed the first chapter of *It’s Good to Be a Man* started as a morning reflection I wrote in the summer of 2009.
And I’m still writing. More than ever. Thousands of words a week. They’re unpolished. Rough. Full of imperfections.
But they’re mine. My mind generated them.
And this finally brings me to why I love aspects of AI.
It gives me something I’ve never had before: a high-level editor, on-call, 24/7. It helps me take my 20 years of content backlog (and all the new stuff), identify the errors, and refine it all into something readable—something ready for an audience.
Here’s how I use it: I write my drafts. I reread them. I make whatever corrections I can. Then I ask AI to “correct grammar and syntax” or “smooth this out.” I review its edits. I don’t always go with them, but they make me think. They help me see my blind spots. And over time, I’ve noticed I’m making fewer mistakes on my own.
So no—I don’t use AI as a ghostwriter. I use it as an editor. It helps me finish things that otherwise would’ve stayed in the draft queue for years.
That excites me.
But I do wonder: what if young dyslexic Michael had AI?
I wouldn’t have had to find ways to overcome difficulties. I might not have developed the very skills that helped me get ahead—and even set me apart.
Such is our dilemma.
This was really good. I’ve been coming to the same conclusions and haven’t found anyone else who has expressed this so clearly.
When you say tools are never neutral, do you just mean all tools are used to accomplish *something*?
Or that tools are never *morally* neutral?
The former makes sense to me (and is what I think you mean—it’s part of what defines a tool), whereas the latter would be confusing.
But normally when I hear people talk about neutrality regarding tools (wealth, etc.), they’re making a morality argument in one direction or the other.