
Why the Second Wave of AI Will Mint More Millionaires Than the First — and the Stocks to Own

The first wave of artificial intelligence (AI) undoubtedly created some millionaires, but the gains were concentrated in just a handful of stocks. Nvidia was far and away the biggest winner, as its graphics processing units (GPUs) became the de facto chips for AI model training. With a 90% market share, it dominated the hardware space.

However, the market is shifting toward inference and agentic AI. Nvidia's moat in these areas is not nearly as wide, so there will likely be more opportunities to mint new AI millionaires, as the second phase of AI looks set to be much more broad-based.

Let’s look at three AI stocks to own for the second phase of AI.

AMD: An inference and agentic AI winner

Key Data Points: Price $445.30 (-0.67%, -$2.99) · Market cap $726B · Day's range $432.70 – $460.01 · 52-week range $107.67 – $469.21 · Avg. volume 39M · Gross margin 47.09%

Advanced Micro Devices (AMD -0.67%) is one of the companies best positioned for the age of inference and AI agents. It has long had a niche with its GPUs for inference, and its modular chiplet design, which allows for more memory capacity, positions it well in this area. Meanwhile, it has partnerships and large commitments from OpenAI and Meta Platforms for its next generation of inference GPUs, which should drive strong growth in the coming years.

Equally exciting, though, is the company’s opportunity in the data center central processing unit (CPU) market. With the rise of agentic AI, the ratio of GPUs to CPUs in AI servers is expected to move from 8:1 to 1:1. AMD is already the leader in this space, and with demand expected to outpace supply, it has a huge growth opportunity in front of it.
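To see what that ratio shift implies, a back-of-the-envelope sketch helps (the 8:1 and 1:1 ratios come from the article; the cluster size is a hypothetical example):

```python
def cpus_needed(num_gpus: int, gpus_per_cpu: int) -> int:
    """CPUs required to support a GPU fleet at a given GPU:CPU ratio."""
    return num_gpus // gpus_per_cpu

# Hypothetical cluster of 100,000 GPUs.
gpus = 100_000
print(cpus_needed(gpus, 8))  # training-era 8:1 ratio -> 12500 CPUs
print(cpus_needed(gpus, 1))  # agentic-AI 1:1 ratio -> 100000 CPUs
```

In other words, the same GPU fleet would call for roughly eight times as many CPU sockets, which is the demand wave AMD is positioned to catch.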

Meanwhile, following its acquisition of ZT Systems, it can now offer complete AI racks designed specifically for tasks such as inference or agentic AI, opening up another growth driver for the company.

Broadcom: The custom chip winner

Key Data Points: Price $416.77 (-0.60%, -$2.53) · Market cap $2.0T · Day's range $404.80 – $418.63 · 52-week range $221.60 – $437.68 · Avg. volume 24M · Gross margin 64.96% · Dividend yield 0.60%

As the market moves toward inference, more and more hyperscalers (owners of large data centers) are looking to custom AI ASICs (application-specific integrated circuits) to save costs. ASICs are custom chips hardwired for a specific purpose, so they tend to deliver not only high performance but also better energy efficiency. And because Broadcom (AVGO -0.60%) is the leader in ASIC technology, hyperscalers are increasingly turning to it for help developing these chips.

Broadcom helped co-develop Alphabet's highly successful tensor processing units (TPUs), and it continues to profit from Alphabet's deployment of these chips. Alphabet has also begun letting some of its largest customers place orders directly with Broadcom, including Anthropic, which has already placed a $21 billion order for this year and has committed to more chips in the future. Meanwhile, other hyperscalers are beginning to ramp up production of their own custom chips as well.

Overall, Broadcom expects more than $100 billion in revenue from ASIC chips alone in fiscal 2027. The company also has a very fast-growing data center networking business. As such, it is set to see explosive growth in the coming years.


Micron: A memory leader

Key Data Points: Price $802.78 (+4.72%, +$36.20) · Market cap $906B · Day's range $779.47 – $814.86 · 52-week range $90.93 – $818.67 · Avg. volume 44M · Gross margin 58.54% · Dividend yield 0.06%

One of the biggest bottlenecks in the AI infrastructure market right now is DRAM (dynamic random access memory), and the rise of inference and agentic AI is only likely to intensify that shortage. Inference tends to be more memory-bound than compute-bound: each time an AI model generates a response, it must repeatedly read its key-value (KV) cache, which is typically stored in the high-bandwidth memory (HBM) attached to an AI chip. Meanwhile, AI agents need to retain and process more information as they carry out multistep tasks, creating even greater demand for memory.
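To get a feel for why the KV cache eats so much HBM, here is the standard rule-of-thumb estimate of its size (2 × layers × KV heads × head dimension × sequence length × bytes per value); the model dimensions below are a hypothetical 70B-class configuration, not taken from the article:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_val=2):
    # Factor of 2 covers the separate key and value tensors stored
    # per token, per layer.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_val

# Hypothetical 70B-class model: 80 layers, 8 KV heads (grouped-query
# attention), head dimension 128, FP16 values, a 32k-token context.
gib = kv_cache_bytes(80, 8, 128, 32_768) / 2**30
print(f"{gib:.1f} GiB per sequence")  # prints "10.0 GiB per sequence"
```

Ten gibibytes of HBM per active sequence, before the model weights themselves, is why inference serving is constrained by memory capacity and bandwidth rather than raw compute.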

This is good news for Micron (MU +4.72%), one of the big three DRAM makers alongside South Korean rivals SK Hynix and Samsung. While the DRAM market has historically been highly cyclical, demand for HBM is directly tied to GPUs and high-performance data center CPUs, giving it a huge secular tailwind. Meanwhile, the DRAM makers have begun signing longer-term supply commitments, which should improve visibility and reduce some of the industry's traditional volatility.

With Micron trading at a forward price-to-earnings (P/E) of below 8x, it still has plenty of room to run as its growth continues to skyrocket.
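For reference, a forward P/E is simply the current share price divided by consensus earnings per share for the next 12 months. Using the article's $802.78 price and an assumed, purely hypothetical forward EPS of $105:

```python
price = 802.78           # current share price from the article
forward_eps = 105.00     # hypothetical next-12-month consensus EPS
forward_pe = price / forward_eps
print(f"{forward_pe:.1f}x")  # prints "7.6x", consistent with "below 8x"
```

The lower the forward multiple relative to expected earnings growth, the more room a stock has to re-rate upward if that growth materializes.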
