The irrational exuberance of venture investors and technology pundits is a well-known trait. One need look no further than the rush to invest in and prognosticate on a field of largely undifferentiated language models for evidence of this behavior.
Signs of a turn in enthusiasm are beginning to crop up. Early companies have begun to falter (see Stability). A sea of early-stage “AI” companies has gone quietly into the night as new models are released (see any number of “OpenAI killed my startup” posts). Founders see greener pastures with large companies (see Inflection AI). These are sure signs that the overheated expectations of the past eighteen months are cooling.
This is a good trend. Less focus on the technology and more focus on the problem space is what will create value for individuals, investors, and businesses. But it also misses the whole damn point.
LLMs and diffusion models are not products. The input boxes on OpenAI and other generative technologies are barely products. What we really need are products that solve meaningful problems. The AI advances of the past few years will be a part of that solution, but they are not the solution.
Clickbait
Prognosticators are starting to write about this shift but are largely getting it wrong. Most of the writing seems to be focused on some version of “AI is not the answer”.
An example that caught my eye recently was the following:
Here’s why AI search engines really can’t kill Google
It’s the clickbaitiest of clickbait titles. And it worked. I clicked.
The article goes on to illustrate why today’s text-generative search technologies do a poor job at a variety of the most common searches. Of course they do. That’s because AI language models are not search engines. AI language models are not any kind of product. They’re a technology that does a very specific thing that few people really understand.
At their most basic, generative language models are word sequence predictors. Word sequence prediction is a great feature for the user interface of a search engine. But it’s not a search engine. It may be a great way of delivering the answers for some types of searches but, as the article points out, not all kinds of search queries. That’s no surprise. The input box on a language model is a demonstration of a technology, not a product.
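To make “word sequence predictor” concrete, here is a deliberately tiny sketch of the idea: a bigram model that counts which word tends to follow which and then predicts the most frequent follower. Real language models use neural networks over vast corpora, not lookup tables, but the core task — predict the next token given what came before — is the same. The corpus and function names here are illustrative inventions, not anyone’s actual API.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count, for each word, which words follow it in a toy corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed next word, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = [
    "the dog chased the cat",
    "the dog ate the food",
    "the cat ignored the dog",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "dog" — it follows "the" most often here
```

Nothing in this sketch knows what a dog is, which is exactly the point: prediction quality is a property of the statistics, not of understanding — and certainly not of being a search engine.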
Are language models search engines? No. They were never intended to be. Are they part of the future of search? Yes. A big part of the future of search. You don’t even have to squint to see that.
Every Product will be an AI Product
The best evidence for language and diffusion models not being products is the existence of prompt engineering. In order to get something meaningful out of these models, you have to learn how to ask effective questions.
For Midjourney, you’re working in a group chat service, Discord, writing a prompt that feels a whole lot like something other than English.
/imagine prompt: a puppy looking at an empty food bowl with a sad expression. Angry cats are in the background --aspect 16:9
It’s not a very good programming language, but it is a programming language nonetheless.
So, if language and…oh, you want to see the output of that prompt? Here it is. The cats don’t look all that angry. And they look like kittens instead of cats. And there’s food in the picture. I must not be a very good prompt engineer. Not much of a product.
So, if language and diffusion models are not a product what exactly is an AI product?
First, there’s no such thing as an AI product, just like there’s no such thing as a Python, Django, or Node.js product. AI is a family of technologies, much like Python and JavaScript frameworks, that exist to help front-end and back-end developers build products.
Second, by the universal law of opposites, if AI is not a product, then every product is an AI product. AI capabilities, whether it’s “simple” machine learning models or the latest diffusion models, will work their way into everything we do. But it’s the “doing” that is the important part of every product, not the technology that enables the doing.
This fever dream about AI that we’ve been having for the past 18 months is already turning back to the things that matter: solving real problems for real people with the best technology available. Creating puppy pictures was never a real problem.
For most people.
Google is dead. Long live Google.
Returning to what got me initially all worked up on this topic: Google is, in fact, dead. I predicted this way back in December of 2022.
My concern back then was whether Google could overcome the innovator’s dilemma. I suspected that it could not. As 2022 turned into 2023, Google’s response to this new threat was sluggish. This is despite the fact that Google was the early innovator in the transformer space. They literally wrote the book on transformers, which are at the heart of the GenAI revolution.
Surprisingly, Google finally overcame its cultural aversion to risk-taking and released Gemini. I suspect it was due to the existential threat that GenAI presented to them in the form of Perplexity, Phind, and a host of other emerging competitors.
Gemini is as good a word prediction engine as any other language model. In fact, I think it’s better in many cases, and I’ve begun to use it as my default. I especially like how they’ve integrated references to source materials so I can check the model’s accuracy.
Google’s search engine now includes Generative AI for some queries. Many search results have generative responses above the paid and organic links. This could have longer-term implications for Google’s primary business—advertising—but they’re willing to face that risk to avoid being on the wrong side of innovation.
I predict that in five years Google’s market share will be smaller, though not by much, and that search results will look more like Gemini’s output than Google’s current results page. Google survives and continues to win.
Google will have to work out the advertising model, though that doesn’t seem like a high bar. Deals with content providers who may not be getting as much organic traffic will be trickier. If Google provides all the answers, how will people find websites?
Google won’t replace all search results with generative results. Generative results can be effective for informational queries and early transactional query journeys. For example, if I want to know how coffee makers work or the latest advances in coffee maker technology, a generative answer may suffice.
But if I want to buy a coffee maker (transactional, end-of-journey) or find my local Target (navigational query), then generative responses are less useful, and more traditional search results may fit the use case.
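The informational/transactional/navigational split described above is essentially an intent-routing decision. As a purely hypothetical sketch (the keyword lists and function names are mine, not anything Google ships), a crude router might look like this:

```python
def classify_query(query: str) -> str:
    """Naive keyword heuristic for search intent (illustrative only)."""
    q = query.lower()
    if any(w in q for w in ("buy", "price", "deal", "order")):
        return "transactional"
    if any(q.startswith(w) for w in ("how ", "why ", "what ", "latest ")):
        return "informational"
    return "navigational"

def route(query: str) -> str:
    """Send informational queries to a generative answer, the rest to links."""
    intent = classify_query(query)
    return "generative answer" if intent == "informational" else "traditional results"

print(route("how do coffee makers work"))  # generative answer
print(route("buy a coffee maker"))         # traditional results
print(route("find my local Target"))       # traditional results
```

A production system would infer intent with a model rather than keyword lists, but the architecture is the point: the language model is one component behind a routing decision, not the search engine itself.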
One thing that tickles my brain is whether we’ll see the return of curated search. Way back in the last millennium, Yahoo hand-crafted search directories on every topic. While modern search engines, namely Google, replaced link directories before the turn of the millennium, DMOZ existed until 2017. Could there be a place for curation—even automated curation—in a world where answers are everywhere but information is harder to access?
Probably not.
AI is not
I’m sure we’ll continue to see all manner of “AI is dead”, “GenAI is not search”, and “AI is not art” articles written. While all or none of those things may be true, they all miss the point. AI has never been and never will be the solution. The solution will be the solution.
People and businesses have problems. These problems will require solutions that are built with the right stuff to make the problems go away. Some of that stuff will be technology. Some of that technology will be AI. AI is the thing within the thing. It’s not the thing.
The sooner we universally grasp this point, the sooner we’ll be investing human and financial capital in ways that improve people's and businesses' lives. Until then, investors will give money to companies so they can give money to GPU and Cloud companies. Thankfully, that trend seems to be declining. None too soon.
Update: I got rid of the kittens and food (mostly). At least the cats look angrier though the puppy looks a bit too sad. Maybe it’s all that anger.