Let me start this post by saying I am a proponent of AI.
I mean, a BIG proponent. I use it daily - across planning, writing, coding, troubleshooting - you name it. If there's a way to use AI to speed up my work or sharpen the result, I'm using it.
So let's get this out of the way:
Yes, AI will cost some jobs.
But
it will create others too. That's how progress works. It's no different than when automobiles replaced horse-drawn carriages, or when TV displaced radio (mostly). The scale might feel different now because computers touch every corner of our lives - but the pattern is the same.
That said...
AI is not where it needs to be.
In fact,
it's not even close.
And remember - that's coming from someone who genuinely believes in it.
The problem, I think, is that the AI industry has
misled the public with its own language. They anthropomorphized it. They made AI
seem human, and as a result we expect AI to be human. We've all heard the phrases:
⫸ AI "thinks"
⫸ AI "reasons"
⫸ AI "lies" or "hallucinates"
None of those are true. Not technically. And certainly not in the way those words imply.
All of those verbs suggest
intent - and intent is biological. AI isn't a being. It doesn't "know" things. It doesn't choose to lie. And it definitely doesn't have a secret agenda to hallucinate function names in code just to mess with you (even though it can feel like it).
What AI is, is a massive statistical model.
You give it a prompt, it breaks that input into tokens (think words, or pieces of words), and then - based on an incomprehensibly huge amount of training data - it calculates the most
statistically probable next token... and then the one after that... and so on, until it generates a response.
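To make that concrete, here's a deliberately tiny sketch of that loop in Python. The "model" here is just a made-up lookup table of next-token probabilities - a real LLM uses a neural network over a vocabulary of tens of thousands of tokens - but the generate-one-token-at-a-time mechanic is the same.

```python
import random

# Hypothetical toy "model": for each token, the probabilities of what comes next.
# These values are invented for illustration only.
NEXT_TOKEN_PROBS = {
    "<start>":  {"The": 0.6, "A": 0.4},
    "The":      {"function": 0.5, "code": 0.5},
    "A":        {"function": 1.0},
    "function": {"returns": 0.7, "logs": 0.3},
    "code":     {"runs": 1.0},
    "returns":  {"None": 1.0},
    "logs":     {"errors": 1.0},
    "runs":     {"<end>": 1.0},
    "None":     {"<end>": 1.0},
    "errors":   {"<end>": 1.0},
}

def generate(start_token="<start>", max_tokens=10):
    """Repeatedly pick the next token from the model's distribution."""
    token, output = start_token, []
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS.get(token, {"<end>": 1.0})
        # Sample the next token, weighted by its probability.
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        if token == "<end>":
            break
        output.append(token)
    return " ".join(output)

print(generate())  # e.g. "The function returns None"
```

No understanding, no intent - just one weighted pick after another.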
That's it.
That's the whole trick.
Yes, it's a technical marvel. Yes, it feels like magic.
But it's still just probabilities stacked on probabilities.
Here's an example I had a couple of weeks ago.
I was debugging a small issue in some code. I uploaded two relevant files and explained the problem clearly.
AI responded confidently:
"I see exactly what the problem is. The function XXXXX doesn't call the proper logging function."Then it even printed out the full function and named the missing logging call.
Sounds great, right?
Except it was entirely made up.
The logging function it referenced? Doesn't exist anywhere in the codebase.
The main function it quoted? Also doesn't exist.
Not in the files I uploaded. Not anywhere in the system.
But here's the thing:
it wasn't lying.
And it wasn't "hallucinating" either.
What it was doing was skipping over the files I gave it in an effort to provide a fast answer, and generating a plausible response from my question and its training data instead. It wanted to help, and the Large Language Model (LLM) made a guess. A very confident, very wrong guess.
It took three more prompts before it actually started reading the files I had uploaded and giving me something useful.
So where does that leave us?
Is AI ready to take over some content writing jobs? Yes. It already has.
Is it ready to take over programming jobs? Not yet.
Right now, it's an amazing
programming partner.
I've done projects recently that would've taken me two to three weeks by myself -
but with AI's help, I finished them in under a day. You can't argue with that kind of efficiency.
But I still had to guide it.
I still had to fix its misfires.
I still had to know when it was confidently wrong.
AI is powerful, but it's not autonomous. Not yet.
Here's the simple truth:
AI isn't Tony Stark's JARVIS. It's not a sentient assistant.
It's a supercharged autocomplete engine - one that can be astonishingly helpful when used well.
But if you mistake its confidence for comprehension, you're going to run into trouble.
I'm still a believer.
But I also believe in knowing exactly what you're working with.
And AI, for all its promise, still needs a
real human mind at the controls. And I suspect it will for a long time.