Oz Blog News Commentary

The AI Gold Rush

March 8, 2024 - 13:01 -- Admin

Large language models (LLMs) have overrun commercial markets, more like a tsunami than a normal technical wave of interest. The topic is everywhere – news stories, blogs, podcasts, startup investments, analyst reports, hackathons, and government announcements. A virtual frenzy surrounds it.

If you possess a technical background, you might find this frenzy puzzling. The technological roots of LLMs go back many years. Yet, today’s experience looks like more than the continuation of a preexisting trend. Something in the zeitgeist changed recently, making entrepreneurs and financiers rethink and shift the direction of investment. You might be tempted to call this an AI gold rush. If you are old enough to recall them, you might compare this frenzy to the gold rushes during the dot com or PC booms.

There is a grain of truth to these comparisons, and significantly, the metaphor of a gold rush can help us understand the economics at work. It also can forecast the future (a little). But to reap those rewards, you must understand precisely how the metaphor works.

Thar’s economics in them there hills. Let’s explore them.

Gold rush

A gold rush arises when three factors align: surprise, information spreading, and impatient economic actors. Let’s use the California gold rush of 1848 to illustrate those ingredients, then turn them on LLMs.

The specific circumstances are well known. John Marshall, an employee at Sutter’s newly constructed mill, located on the previously unexplored upper elevations of the south fork of the American River, found gold flakes in the water in January 1848. Marshall did not keep it secret; the news was out within weeks. In other words, the first two ingredients – discovery and information spreading – happened within a month of each other.

What about impatient economic actors? Gold miners overran Sutter’s Mill that spring. By the following year’s snowmelt, the area was packed with miners who believed they needed to stake a claim as soon as possible. Many panned for gold in the rivers. They were called “49ers.”

Sadly, though the Western slopes of the Sierra Nevada range contain some of the best deposits of gold in the world, most of those deposits do not have rivers passing over them. Most of those who panned for gold did not recover much and soon gave up.

Old-fashioned digging into the earth, which followed years after the 49ers, yielded much better rewards. This activity was expensive and slow, requiring the financing of teams of miners, expertise in building mines, and experience in the geology of locating gold seams. It continued for many years.

That leads to a third observation. Levi Strauss got his start in 1853 by providing rugged jeans made of canvas to those digging miners. Many equipment providers grew businesses, often summarized as “picks and shovels” suppliers. Many did well.

Three economic lessons emerge from that illustration. First, few of the first miners profited. Second, because the discovery indicated a larger opportunity, many later firms that exploited the opportunity did well through traditional means. That just took a while. Third, it is unnecessary to be on the front lines of the opportunity to profit from it. Suppliers of essential equipment can make out well if a sustainable business grows later.

The surprise

Let’s turn back to LLMs. Like any gold rush, the development of LLMs contained a significant element of surprise. Though there had been many efforts to develop large language models, the release of ChatGPT, built on GPT-3.5, in November 2022 demonstrated autocompletion capabilities that surprised observers.

Note the contrast with other technologies with long lead times, such as 5G. It was years in the making; the timing of its arrival surprised no one, nor did its capabilities, nor has the speed with which users have taken it up. The 5G experience is more typical.

There is a simple way to measure the surprise and subsequent prominence of ChatGPT. Look at Google Trends, which provides a measure of the frequency of web searches for a topic. The graph here runs from September 2022 to September 2023.

The graph compares the term “ChatGPT” (in blue) with “OpenAI” (in yellow). Both began to become objects of search around the first week of November 2022. Though they start simultaneously, ChatGPT (the brand) eventually gains much more attention than OpenAI (the creator) and sustains that interest. In short, the service was far more interesting than the organization sponsoring it.

Google’s Bard (in red) comes later but never gathers as much attention. While this measures mindshare, not market share, it should alarm Google’s management. ChatGPT appears to have a persistent lead.

You should ask about the scale. As with all Google Trends charts, the y-axis is scale-free, but the different topics are displayed in proportion to one another. How should we think about this level of mindshare? The following two lines offer a benchmark, comparing ChatGPT to other popular topics: Elon Musk (in green) and Taylor Swift (in purple).
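For intuition about what that scale-free axis means, here is a small sketch (with made-up numbers, not real Trends data) of the joint normalization Google Trends applies: all compared series are rescaled together so the single busiest point across every topic becomes 100, keeping the topics proportional to one another.

```python
# Toy illustration of Google Trends' joint 0-100 normalization.
# The numbers below are hypothetical, not actual search volumes.

def normalize_jointly(series_by_topic):
    """Rescale every series so the global maximum maps to 100."""
    peak = max(max(s) for s in series_by_topic.values())
    return {
        topic: [round(100 * v / peak, 1) for v in s]
        for topic, s in series_by_topic.items()
    }

# Hypothetical weekly search counts for two topics over four weeks.
raw = {
    "ChatGPT": [20, 350, 800, 1000],
    "OpenAI":  [15, 120, 200, 250],
}
scaled = normalize_jointly(raw)
# ChatGPT's busiest week becomes 100; OpenAI's values stay on the same
# shared scale, so the gap between the two lines remains meaningful.
```

The point of the joint scaling is that comparing two lines on the same chart is legitimate even though no absolute search counts are shown.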

Musk has had his moments but gets far less interest than ChatGPT. Taylor Swift differs. Nobody beats her for publicity, especially while she was on a national concert tour. ChatGPT approaches her, which seems like quite a lot to me. (Not shown: I checked terms such as Stable Diffusion, DALL-E 2, Hugging Face, machine learning, and other AI-related phrases. ChatGPT gets more attention than anything else.)

Here is the point: a noticeable change in awareness occurred in November 2022. That reflects the surprise. Moreover, ChatGPT sustained that level of awareness. That reflects the spreading of news and high sustained interest.

Gold rush economics

If the gold rush metaphor suggests any lessons, the first is to be wary of fools who rush in. Instead, look at the firms that thoughtfully seek to take advantage of the opportunity. Typically, those successes do not happen overnight.

Second, the same metaphor suggests that if the opportunity persists, an expensive supply chain will emerge to support firms using LLMs. ChatGPT required access to several billion dollars’ worth of equipment, which OpenAI gained from its partnership with Microsoft. At present, the gold mine is expensive to build.

Relatedly, a big debate has emerged around costs. Some experts foresee an endless arms race for more resources to produce more breakthroughs. In contrast, others expect the costs to come down as applications refine existing models or advanced models develop APIs. The former could lead to only a small number of firms with frontier models, while the latter could lead to the fast spread of cheaper applications. (I lean towards the latter view.)

As for the third observation, which firms are well-positioned to sell the modern equivalent of picks and shovels? That pushes towards organizations like Nvidia, TSMC, or the cloud providers. Unsurprisingly, this insight has occurred to many stock traders. For example, Nvidia’s stock price has roughly quadrupled since November 2022.

We can go further by being more precise about how the supply chain will likely change. For example, OpenAI designed ChatGPT as one generic model, using public data from a fixed date, untailored to a specific use case, and unintegrated into an application. Any analysis of the picks and shovels of LLMs should expect those attributes to change and enable new possibilities.

Consider the use of real-time data. Google has the best access to new information of any firm. This is why many analysts expect it to put forward something that trains an LLM on recent data. So far, Bard has yet to reach that ideal, but many investors are betting that it will.

It is also easy to imagine combining the autocompletion capabilities with a specific data set – for example, an inventory of items for sale. That could radically improve electronic retailing. Imagine asking for a black dress in natural language, without keywords, and going back and forth with the site to refine the search to get a stylish dress for a night out, not a funeral. That type of experience would improve a vast number of electronic retailing sites.
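To make that concrete, here is a minimal sketch of such a back-and-forth search. Everything in it is hypothetical: the tiny inventory, the tag vocabulary, and the keyword-overlap scoring are stand-ins for what a real system would do with an LLM interpreting the shopper's request against a full catalog.

```python
# Hypothetical sketch of conversational retail search. A real system would
# use an LLM to parse the shopper's language and an embedding index over
# the catalog; here simple tag overlap stands in for both.

INVENTORY = [
    {"item": "black cocktail dress", "tags": {"black", "dress", "evening", "stylish"}},
    {"item": "black mourning dress", "tags": {"black", "dress", "formal", "somber"}},
    {"item": "red evening gown",     "tags": {"red", "gown", "evening", "stylish"}},
]

def search(wanted, unwanted=frozenset()):
    """Rank items by overlap with requested tags, excluding vetoed tags."""
    candidates = [i for i in INVENTORY if not (i["tags"] & set(unwanted))]
    return sorted(candidates, key=lambda i: -len(i["tags"] & set(wanted)))

# First turn: "a black dress" -- both black dresses rank at the top.
first = search({"black", "dress"})

# Refinement: "for a night out, not a funeral" -- add the desired mood
# and veto the somber item, leaving the cocktail dress ranked first.
refined = search({"black", "dress", "evening", "stylish"}, unwanted={"somber"})
```

The design point is the iterative loop: each conversational turn adds constraints or vetoes, narrowing the ranking rather than forcing the shopper to restate a keyword query from scratch.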

To me, the most exciting applications are those that mix voice recognition and conversation. Many settings would improve if, for example, a driver could talk with a car instead of pushing buttons. Again, that ideal has been discussed for years, but many automobile firms are now building prototypes for development. The gains with ChatGPT put that ideal within reach.

What else?

Another popular analysis starts with the people in supply chains instead of equipment. It examines occupations, such as litigation lawyers or coders, and forecasts impact based on whether an occupation benefits from the deployment of LLMs.

Consider coding. GitHub’s Copilot was the first widely available LLM-based tool to autocomplete code, speeding up coding in any popular language. Many versions are now available. That enhances programmers’ productivity, making them faster at their jobs, which could raise wages for coders in valuable areas. These forecasts help organize human resource plans over the medium term, especially in organizations that hire many coders.

The next question is more challenging. Which organizations employing more productive employees will benefit? There are many steps between increased productivity for one class of workers and an entire organization’s value. For example, more productive coders make data scientists more valuable, but how will that change an organization? It is not as obvious. Organizations that employ many data scientists vary in their ability to manage this type of disruption. In other words, one should be wary of making stock market investments with this insight without more complementary evidence that an organization knows how to use it intelligently.


LLMs always train on past data, so they have strengths in areas where today’s tasks resemble yesterday’s. The forecasts are weaker outside those areas.

Still, that continuity enables some useful analysis. Metaphors, such as gold rushes, help identify some mechanisms at work in those settings. That said, metaphors alone are insufficient. Forecasting in markets still requires deep domain knowledge and refined economic intuition.

Published in IEEE Micro, December 2023.