Stanford AI Experts Predict What Will Happen in 2026

Deflating the AI Bubble

Angèle Christin, Associate Professor of Communication and Stanford HAI Senior Fellow

The billboards in San Francisco say it all: AI everywhere!!! For everything!!! All the time!!! The slightly manic tone of these advertisements gives a sense of the hopes – and immense investments – placed in generative AI and AI agents. 

So far, financial markets and big tech companies have doubled down on AI, pouring massive amounts of money and human capital into the field and building gargantuan computing infrastructures to sustain AI growth and development. Yet there are already signs that AI may not accomplish everything we hope it will. There are also hints that AI, in some cases, can misdirect, deskill, and harm people. And there is data showing that the current buildout of AI comes with tremendous environmental costs.

I expect that we will see more realism about what AI can deliver. AI is a fantastic tool for some tasks and processes; it is a problematic one for others (hello, students generating final essays without doing the readings!). In many cases, the impact of AI is likely to be moderate: some efficiency and creativity gains here, some extra labor and tedium there. I am particularly excited to see more fine-grained empirical studies of what AI can and cannot do. This is not necessarily the bubble popping, but the bubble might not be getting much bigger.

A “ChatGPT Moment” for AI in Medicine

Curtis Langlotz, Professor of Radiology, of Medicine, and of Biomedical Data Science, Senior Associate Vice Provost for Research, and Stanford HAI Senior Fellow

Until recently, developing medical AI models was extremely expensive, requiring training data labeled by well-paid medical experts (for example, labeling a mammogram as either benign or malignant). New self-supervised machine learning methods, now widely used by the developers of commercial chatbots, don’t require labels and have dramatically reduced the cost of medical AI model training.
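To make that mechanism concrete, here is a minimal sketch of self-supervised contrastive pretraining (in the style of SimCLR) written in PyTorch. Everything in it is an illustrative stand-in rather than anything from the article or a real medical pipeline: a toy CNN encoder, random tensors in place of unlabeled scans, and additive noise in place of genuine augmentations. The point is simply that the training signal uses no expert labels, only agreement between two views of the same image.

```python
# Minimal sketch of self-supervised (contrastive) pretraining.
# All components are illustrative stand-ins, not a real medical pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Tiny CNN standing in for a real backbone (e.g., a vision transformer)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):
        return self.net(x)

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss: two augmented views of the same image attract,
    all other images in the batch repel. No labels are used anywhere."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)        # (2B, D) embeddings
    sim = z @ z.T / temperature                        # pairwise similarities
    n = z.shape[0]
    mask = torch.eye(n, dtype=torch.bool)              # exclude self-pairs
    sim = sim.masked_fill(mask, float("-inf"))
    targets = torch.arange(n).roll(n // 2)             # positive = other view
    return F.cross_entropy(sim, targets)

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(100):
    # Stand-in for a batch of unlabeled grayscale scans.
    images = torch.rand(32, 1, 64, 64)
    # Two random "augmentations" (here just noise; real pipelines use
    # crops, flips, intensity shifts, etc.).
    view1 = images + 0.1 * torch.randn_like(images)
    view2 = images + 0.1 * torch.randn_like(images)
    loss = nt_xent(encoder(view1), encoder(view2))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Real systems swap in a large backbone, millions of images, and carefully chosen augmentations, but the label-free training signal is the same, and that is what removes the expensive expert-annotation step.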

Medical AI researchers have been slower to assemble the massive datasets needed to capitalize on self-supervision because of the need to preserve the privacy of patient data. But self-supervised learning from somewhat smaller datasets has shown promise in radiology, pathology, ophthalmology, dermatology, oncology, cardiology, and many other areas of biomedicine. 

Many of us will remember the magic moment when we discovered the incredible capabilities of chatbots trained with self-supervision. We will soon see a similar “ChatGPT moment” for AI in medicine, when AI models are trained on massive, high-quality healthcare datasets rivaling the scale of the data used to train chatbots. These new biomedical foundation models will boost the accuracy of medical AI systems and enable new tools that diagnose rare diseases for which training datasets are scarce.
