A common refrain in early 2025 is “AI sucks at X or Y.” For instance: “AI sucks at coding. It introduced a bug and couldn’t even fix it!” Or: “AI sucks at image generation. Look at how often it gets the text wrong!”
Well, guess what: we all sucked at things at one point. At 5 years old, I could barely understand simple multiplication and division! Can you imagine that?
But over the next 5 years, I learned a lot. Enough that when I was 10, I taught myself how to program a computer in BASIC, basically (pun intended) just from reading the manual and experimenting.
Four years later, I taught myself programming in C++. No formal education whatsoever. Also just from reading books and manuals and then practicing what I’d learned.
At 16, as a high schooler, I built custom job-costing software on spec that ended up being used for decades.
Once at university, I spent several of my internships building software at multinationals like Corel and Microsoft.
In the subsequent years, I’ve built on that knowledge and branched out into the art and science of business, management, and entrepreneurship.
How did this progress happen? Training a neural net, i.e. my brain. Feeding it information. Trying things. Identifying which things worked and which didn’t. Doing more of the things that worked and less of the ones that didn’t. Multiply that over weeks, months, and years, and you have substantial progress. Even with a slow, low-wattage meat computer.
Now look at AI. Back in 2020, the best (public) LLM-based AI was still pretty dumb. If you had described ChatGPT 3.5’s capabilities back then and asked me how far out we were from building that, I would have guessed at least a good decade.
And yet, in late 2022, OpenAI shocked most observers with technology that seemed to effectively mimic human language processing with a huge knowledge base.
A lot of people (myself included) thought this might be a parlour trick or some sort of hack, but seeing is believing, and after using it for a while, it was impossible to deny what they’d achieved.
A little over 2 years later, staggering progress has been made, to the point where the technology is already cheap and ubiquitous, and some of the new models are showing characteristics of “advanced reasoning”.
Meanwhile, one could argue that even the original ChatGPT was “smarter” than most humans on some measures. How many people in the world can answer a question in any arbitrary field of knowledge and be right most of the time?
It took me about 10 years to go from stumbling my way through writing working BASIC code to making meaningful contributions to massive codebases in a complex programming language.
Current public LLMs are probably 75% through that journey.
And consider that the amount of computing power we have to train and run these models is just unbelievable, and increasing exponentially.
How quickly do you think they’ll improve?
—
P.S. You’re probably wondering: was this written by AI? Nope. Not even edited or enhanced. 100% hand-written the old-fashioned way. I probably could have improved it substantially using AI, but I decided to leave it in its raw, unpolished form just for nostalgia. 😀
—
Oh, and check out the following video:
Yes, it sucks. But this was just a quick, free automatic generation. Not even state of the art.
How quickly do you think they’ll improve?