AI is not as smart as you think

Uday PB
4 min readJan 26, 2024

Generative AI has taken the world by storm, and this is just the beginning, they say. The underlying concept behind it all is gigantic machine-learning models trained on approximately 50 billion tokens.

If we consider one word to be roughly one token, then it would be safe to say that ChatGPT, or any recent generalist GPT model claiming a deep understanding of everything a human can search for on the internet, is trained on about that many words.
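The one-word-per-token approximation can be made concrete with a minimal sketch. Note this is an assumption for illustration: real GPT models use subword tokenizers such as byte-pair encoding, so the true token count is usually somewhat higher than the word count.

```python
# Rough illustration only: approximate token count by splitting on whitespace.
# Real GPT tokenizers (e.g. byte-pair encoding) split words into subword
# pieces, so actual token counts run higher than this estimate.
def approx_token_count(text: str) -> int:
    """Estimate tokens by counting whitespace-separated words."""
    return len(text.split())

print(approx_token_count("AI is not as smart as you think"))  # 8
```

Under this crude measure, 50 billion tokens is on the order of 50 billion words, which is the scale of intuition the article leans on.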

Now where did this training data come from? The internet. These gigantic models were trained by scraping the entire internet, one web page at a time.

The way you go about scraping the entire internet is by starting with one website, then scraping every webpage linked from that site, and so on. After crawling through millions of these spiderwebs of web pages, you would finally have a training set that can be cleaned and sorted through.
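The start-with-one-page, follow-every-link process described above is a breadth-first crawl. Here is a minimal sketch of that loop; the toy "web" and its URLs are hypothetical stand-ins for the real internet, where `get_links` would fetch a page and extract its links.

```python
from collections import deque

def crawl(start_url, get_links, max_pages=100):
    """Breadth-first crawl: visit a page, queue every link found on it,
    and repeat until nothing new turns up (or max_pages is hit)."""
    seen = {start_url}
    queue = deque([start_url])
    visited = []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        visited.append(url)
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited

# Toy "internet" with made-up URLs; each page lists the pages it links to.
toy_web = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
}
print(crawl("a.com", lambda u: toy_web.get(u, [])))  # ['a.com', 'b.com', 'c.com']
```

The `seen` set is what keeps the crawler from looping forever on pages that link back to each other, which real web pages constantly do.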

Then it's time to utilize enormous computing resources and data centers to feed this data to the ever-hungry machine-learning models. Contradictory as it may sound, machine-learning models are dumb.

How, you ask? Let's answer that in a moment. These GPTs claim to be superior in terms of knowledge, language understanding, inference and problem-solving. Great, you know who else claims to be an expert? Us humans. We created these concepts, and these words for that matter, and have evolved into pure and pristine learners. Much of that learning machinery is encoded in our genes, handed down by our beloved ancestors, who were experts at something or the other, but above all were experts at “survival of the fittest”.

Anyway, a human does not need to absorb 50 billion words to learn these skills; we are far more efficient at learning new skills and capabilities. Our rate of evolution, which may seem slow in the face of the AI revolution, is faster than AI's. How?

Well, it's easy to misjudge how many years AI took to get this far. But now, picture the training process of a GPT model.

It’s like throwing every book in the Library of Congress into a blender, pulverizing it into wordy dust, and feeding it to a hungry beast. Sure, the beast can regurgitate paragraphs that sound eerily like Shakespeare or spit out code that resembles Python poetry, but does it truly understand these creations? A parrot can mimic human speech, but it doesn’t grasp the intricacies of language and intention. GPTs are masters of mimicry, not meaning.

This isn’t to say AI holds no value. Think of it as a powerful tool, like a hammer. A hammer can build skyscrapers or crush fingers, depending on the wielder’s skill and intent. AI deserves the same cautious respect. Instead of succumbing to the hype of AI’s supposed omniscience, let’s focus on understanding its limitations and utilizing its strengths responsibly. After all, the smartest mind in the room isn’t just the one with the most data, but the one who knows how to use it wisely. And on that front, dear reader, we humans still hold the advantage.

There is also the underestimated problem of overfitting. Imagine overfitting as cramming for a test by memorizing every single practice question, word for word. You might ace the actual exam, but stumble on any slightly rephrased question or real-world application, left clueless by rigid rote learning.

That’s essentially what happens to certain AI models, especially GPTs. They get so fixated on replicating their training data, the intricate web of internet text, that they lose sight of the bigger picture. It’s like memorizing every leaf on a tree, and then failing to recognize the same tree in a different season or location.

This overfitting leads to brittleness and a lack of generalizability. The AI might churn out grammatically perfect sentences or even mimic creative writing styles, but throw in a factual error or unexpected scenario, and it crumbles.

It’s like a chef who can only follow recipes to the letter, unable to improvise or adapt to a missing ingredient or dietary restriction. So, while AI might impress with its vast vocabulary and mimicry, true understanding and adaptability remain firmly in the human domain.

Remember, intelligence isn’t just about knowing a lot; it’s about knowing how to apply that knowledge flexibly and creatively in the real world. And that, my friend, is where the human advantage truly lies.

Maybe AI will take 50 more years to evolve and develop truly intricate intelligence, and then, and only then, might it think of attacking the human race “intentionally”. Until then, it's just a mad scientist, ready to help us accelerate advancements in many fields, though it may show signs of being harmful intermittently. But how can you blame a mad scientist, after all?

Uday PB

Above the ground today, below the ground tomorrow. Psychology, philosophy, and maybe code - my trifecta. Follow for musings on such topics.