Falling behind on AI — Kelford Labs Weekly
And how to keep up.
“I feel like I’m falling behind.”
The second time a friend said that to me, I realized something was happening.
Both of these friends hold high-level positions in world-famous organizations. Ones you’ve heard of, ones you use or even watch on TV.
And both of them felt like they were falling behind.
On what? Well, of course you already know: On AI.
I found this strange, at first, because I didn’t feel like I was falling behind, and these folks are both considerably smarter than me.
So why was I feeling somewhat comfortable while they were feeling worried?
I set out to find the answer, and I went on a small listening tour, talking to business leaders, agency owners, consultants, and subject matter experts.
I wanted to find out what they were seeing, hearing, and doing about AI. How it was affecting their lives and work, and whether they felt like they were slipping, or keeping up—and why.
This is what I learned.
It’s About People
The single biggest predictor of whether someone is anxious or excited about AI—according to my interviews—is surprisingly simple:
Whether you manage a team or not.
And anxiety is proportional to how many people you manage.
Because, well, of course! Any technological change is, by definition, change. And change is harder, perhaps even exponentially so, the more people who need to change.
Or who are already changing, whether you like it or not.
One of the most common questions I heard, first-hand or relayed, was: “How do I teach my team to use a tool I’m not using?”
Managers are finding it harder to experiment with AI because less of their work directly benefits from it. And more of their work carries risks when AI is involved, like confidentiality breaches and errors of bias.
So junior team members are using AI more—or have more tasks which could benefit from it—and managers feel like they’re out of the loop.
How are managers supposed to help their teams use this tool responsibly, or make sure they’re not using it for certain tasks, when they aren’t up to speed on its capabilities or risks?
It’s About Practice
It’s sort of funny: The common conception of AI is that it helps you do work faster. And, in certain tasks, that’s probably true.
But several of the people I talked to went out of their way to express this point: AI doesn’t make them any faster—but it does make them a little bit better.
It’s the kind of thing you’ll only learn if you’ve experimented with the tools yourself.
A classic first step is to prompt it to write you a document.
You see its output and think, “Wow, that was fast.”
And then you read it.
And you think, “Wow, this is bad.”
Or, okay, as the models improve, not bad. But not right. Certainly not finished.
And so you edit, you iterate, you improve. Eventually, you’ve got something good and right, but you wonder if the AI actually saved you any time at all. Reading the document back, you suspect you could have done it faster, and better, by yourself.
The next time, though, you do. You write the first draft of the document. And then you paste it into ChatGPT or Claude and you ask it to pick it apart.
You ask it to point out where it’s unclear or confusing.
You ask it to show you where it could be more personable and less stuffy.
And then you edit the document based on its feedback. It didn’t save you any time, but it made the work better, faster than you could have alone.
That’s specific to writing, but as one of the people I spoke to put it, “The best results come from doing the upfront work yourself to get a better outcome,” and that holds across many different types of tasks.
So, okay, it does make you faster, in the sense that it lets you produce higher-quality work than you otherwise could in the same amount of time. But for many tasks, it doesn’t help you do what you’re already doing any more quickly. It just makes you better, faster than you could have or would have without it.
And that’s the thing: It’s all task-dependent. The ultimate promise of the AI labs trying to build a “digital god” is AGI: the point at which AI can do any human task as well as, or better than, a human.
But we aren’t there yet (and many people, yours truly included, debate the very premise), and for now it’s all very much about what you’re trying to accomplish. Some things AI can do faster and at a higher quality than any person already.
But most things it can’t. In most domains, it’s better as an editor or second opinion than as the primary driver.
One interviewee mentioned the necessity of breaking large tasks into individual pieces, one prompt at a time, to get any quality output from the LLM. The model essentially needed an instruction manual to accomplish a task that any skilled practitioner understands innately.
But then they mentioned how it felt weird to be managing a machine instead of a person.
It’s About Principles
The as-yet-unspoken element here, though, is principles.
I’m talking about this stuff as if it’s inevitable, and yet there are plenty of people who believe it’s going away. Or at least that it should.
But one of the striking things I heard in these conversations is about how much it’s already here.
More than half of the people I spoke to talked about either their competition using AI to churn out low-quality work (creating massive downward pricing pressure in the market), or their former clients turning to AI instead of to them.
Of course, I was soliciting conversations about AI, so my subjects were selected for AI awareness, but I still found it notable. AI’s impact isn’t hypothetical; it’s here.
And yes, it might all go away—or at least become prohibitively expensive for most of us.
And yes, based on a number of principles like intellectual property, misinformation, and non-consensual deepfakes, there’s a strong argument it should go away.
But, for now, it’s here. And it’s only just beginning.
So a lot of the anxiety about AI is existential for entrepreneurs and employees alike. To feel like you’re falling behind in a race against a thing coming for your job is stressful, to say the least.
Meanwhile, that thing is socially problematic and born from an “original sin” of massive intellectual property infringement. To many—most?—it just feels wrong.
It reminds me, slightly, of the furor over social media in the mid-aughts. I remember the traditional media looking down its nose at it, insisting it was nothing, then barely anything, and then a threat, and then a crisis.
Well, social media sure showed them, eh..?
But who would ever argue the implosion of mass media was a pure good? And yet here we are. And I wonder if we’re in the same boat again.
We look down our noses at AI, at its costs, its constant crises, and we think, surely not. Surely this ain’t it.
Surely..?
So there’s a decent and fair argument to say, “I’m not going to use this stuff. It’s wrong and you can’t make me.”
I’m extremely sympathetic to this argument, and I have my moments making it, so all I’ll say is this: Avoiding something is the first part of a plan, but it’s not the whole thing.
What are you going to do instead of it? What are you going to do because of it? What are you going to do despite it? These are the important questions to ask.
Of the options before us, none is labeled, “Just continue to coast as is.” But one is labeled, “Counter-position against the downsides of using the tool.” Another is labeled, “Seek out customers with similar principles and demonstrate your unique value to them in particular.”
And so on.
But the thing is, we’ve got to know the valuable parts of our business, so we can promote them and amplify them to the people who care about them most.
Personally, I don’t use generative AI for client work—if you pay me to create ideas for you, or write words on your behalf, I don’t believe you’re implicitly approving the use of AI to create those ideas or words.
But I do use AI for refining my process, generating content ideas, improving my writing skills, and performing data analysis.
It’s about knowing which parts of the process it’s useful for, and which parts it isn’t.
It’s About Process
So here’s a thread I heard in every conversation: It’s about process.
To benefit from AI, you need to understand your process.
To avoid the mistakes and risks of AI, you need to articulate your process.
To mitigate over-reliance on AI, you need to monitor your process.
And to create value with or without AI, you need to improve your process.
One consultant I spoke with mentioned that the main burden of helping organizations improve their processes is the level of customization. Everything is bespoke and specific to each organization, making it difficult to streamline or productize his service offering.
I think that’s part of why managers have trouble finding ways to learn and experiment with AI.
The problem isn’t just that organizations have unique processes; it’s that they have complicated, under-documented processes. So no consultant (or AI tool, for that matter) can slot into their process and hit the ground running.
I have a saying: Your process is your position.
Is that a bit of an exaggeration? Sure, I’ll grant that. But I mean the sentiment sincerely:
The way you do what you do is the purest expression of why you do what you do.
An unclear process demonstrates unclear thinking. Unclear value.
But when you understand the way you do what you do, and can articulate why you do it that way, you demonstrate clear thinking and clear value. You demonstrate credibility.
And when you know what you do and why you do it, you can see opportunities for improvement. For an AI tool or an outside service provider to integrate with your workflow and optimize particular areas of value.
So it starts with your process.
If you’re anxious about AI, ask yourself this: Can I clearly articulate my process in a way anyone could understand?
If we can’t, we’ll struggle to find ways to improve, and we’ll certainly struggle to find ways to integrate new technologies.
Because it means we’re not sure where our value is. We’re not sure what the important parts of our process are—the ones we’d never hand over to AI, and the ones we’d be crazy not to.
Managers especially struggle here, but something one of my interviewees said jumped out at me:
“If you can’t get AI to do good work for you, you’ll struggle to get people to do good work for you, too.”
The AI advantage is not replacing people with an LLM; it’s equipping people with a tool that can make them better, faster than they ever could be on their own.
It’s About Patience
Now, I’m about to give what could sound like advice. So let me say clearly: I am not a change management expert. I don’t know how to completely mitigate the threat a technology like AI poses to our work or our business, or even whether that’s possible.
But I went into my conversations with these business leaders expecting to learn what was making either them, or their clients, anxious about falling behind in AI.
And yet what I took away was a set of lessons I want to apply to myself to make sure I’m keeping up in the ways that matter to me.
One of the people I spoke with mentioned that a lot of AI anxiety comes from the constant deluge of information and releases.
It’s hard to stay calm when you’re completely overwhelmed with news.
The good news is, no one can completely keep up with everything new in AI. I do my best over on LinkedIn to make videos when something really big happens, but even that covers only a tiny fraction of the daily updates.
Which means, you don’t have to try to be completely in the know. No one is, no one can be. Let yourself off the hook.
Just find a few sources you like and trust and pay attention to what they’re talking about. If you’d like some more information here, reply to this email and I can share some of my favorite sources with you.
Because here’s the big point, made again and again in my conversations:
This is a challenge of change, not of technology.
The problem isn’t that we have a new tool to learn; it’s that we have a new way of working to consider. We have an adjustment to our process to adapt to, and that kind of change can feel almost physically painful.
So we need to be patient with ourselves and others, so that we can make slow, steady progress in the right direction.
Because, let’s face it, if we can adapt to AI, we can probably adapt to a lot. But if we can’t, or if we won’t, we may be holding on to a piece of the past more than a principle.
That’s the challenge before us: Learning to adapt our process to whatever happens, instead of trying to defend the one we have.
It’s the challenge of finding a way to use or avoid AI such that your people get better than ever before, better than the competition (if that’s your motivator), and better at adapting to new technology than any other team out there.
It’s the challenge of defining, articulating, and understanding your process for you and your team.
Then, it’s the challenge of practicing and experimenting to find the features and the flaws of these tools so you avoid mistakes and maximize the benefits.
Finally, it’s the challenge of understanding why you do what you do so you know what you’d never sacrifice, and what you’re more than willing to spend less time on.
Those are some of the lessons I took away from my recent conversations. I hope you find them interesting, and perhaps even useful.
But keep this in mind:
This is a complicated, fraught topic with a million arguments to be had and a million tradeoffs we can never fully understand.
So if your own feelings on the subject are unclear or complicated or ever-changing, I think that’s fair. I think that’s normal.
I think that’s how things will be for a while.
But what do you think? Reply to this email to let me know.
Kelford Inc. shows you the way to always knowing what to say. Marketing positions and messages for hands-on entrepreneurs.