Don’t Underestimate Narrow AI

Jeff Keltner
3 min read · Oct 24, 2023
Image created by DALL·E 3 through the ChatGPT interface.

AI is a hot topic these days, though what exactly counts as AI is often debated. I’m sometimes reminded of a quote often attributed to Alan Kay: “Technology is anything that wasn’t around when you were born.” There is a lot of truth to that, and I think AI is experiencing a similar shifting of the goalposts. After all, early AI systems, often called “expert systems,” were essentially programmed with a large set of knowledge and rules. Unlike today’s AI systems, they did not “learn” from data; they were simply given a set of facts and rules to apply. Today, that approach would not qualify as AI under almost any definition.

Similarly, much of the discussion around AI centers on what is generally known as Artificial General Intelligence, or AGI. The goal of AGI is a broad, human-like intelligence that can reason across a wide range of topics, importantly including those it was not specifically trained on. There is much debate about how close we are (or aren’t) to AGI. The most optimistic believe we are just a handful of years (or a few training iterations of GPT) away from a superhuman general intelligence, while skeptics believe we are not even on the right track and essentially need to start from scratch with a whole new approach. There are deep experts with long histories in AI research and application on both sides of this debate.

Interestingly, while LLMs you can talk to about nearly any topic feel like something close to AGI, many of the most impressive AI applications of the last few years do not, and don’t even have that as an objective. Image recognition models, automated driving systems, image and video generation models: none of these has generalized intelligence among its goals. And yet, I think most of us consider them AI.

I make this point not to be nitpicky, but because I worry that the debate about AGI, and how close (or far) systems like ChatGPT, Claude, Bard, or other LLMs may be from that objective, can distract us from the very real and immediate opportunity to apply specific, or narrow, AI systems today. As these models become cheaper and simpler to develop and deploy, the opportunities to leverage them are almost limitless. Nearly every business, government, or non-profit has the chance to become more effective at achieving its objectives and serving its stakeholders through the use of AI models, and none of those models needs to meet the criteria of AGI.

So, while it can be fun to debate how near generalized intelligence might be, and what the implications of a superhuman AGI could be (I have plenty of thoughts on this too, but I will leave those for other posts), let’s not forget to leverage the AI capabilities available to us today to make the world we live in a better place. The technology is here now, just waiting to be deployed.
