UC San Diego alumna offers ways to be ‘robot-proof’ in the age of AI

Vivienne Ming, a UC San Diego alumna and longtime machine learning researcher, argues in her new book, “Robot-Proof,” that the real risk of artificial intelligence isn’t just job displacement — it’s people using AI passively and losing the skills that make them valuable.

Ming, a theoretical neuroscientist, inventor and entrepreneur who graduated from UCSD in 2000 with a bachelor’s degree in cognitive science, offers a playbook for becoming “robot-proof” by using AI inquisitively, challenging it and building the human strengths that AI can’t replicate.

“We should be careful that what we’re building doesn’t automate away the very capacities that make us human,” said Ming, who also has advanced degrees in psychology from Carnegie Mellon University. She founded the Berkeley-based think tank Socos Labs in 2011 and is chief scientist for Possibility Sciences, a group that works to narrow what it calls the “possibility gap” — the distance between what we can imagine and what our systems can reliably activate.

As AI tools have become part of daily life, many Americans say they’re more concerned than excited. And Ming agrees there are plenty of reasons for humans to be cautious about AI.

But she says one of her biggest concerns is that the machine learning industry is moving in the wrong direction, pouring resources into making AI smarter and more autonomous while neglecting the human side of the equation.

Here she discusses her book and how people can work with AI in a way that could best serve humanity.

Q. How is the AI revolution different from earlier technology-driven transformations — such as the Industrial Revolution, the microprocessor/personal computing revolution and the rise of the internet — that rapidly reshaped how people live and work?

A. I actually have a whole chapter titled “This is not the Industrial Revolution.” There’s this lazy analogy people reach for: “Oh, people complained about calculators, too,” and therefore this is all just the same cycle repeating. But it’s not a clean equivalence. Calculators didn’t stop you from thinking — you stopped doing the low-level computation and then did other things with the results. That kind of tool still leaves your cognition engaged.

What’s different now is that modern agentic systems will happily do all of it. And the danger is that people start disengaging in a way we can actually measure. If you look at these technologies over time — printing, computers, the internet — we do see subtle changes in cognition.

But more recently, with GPS and algorithmic feeds, those changes are becoming measurable and, frankly, more concerning. People are changing how they think when they use these tools in ways that scare me.

So one big difference is that AI is hitting us right in our cognitive core. It’s not automating physical activity. It’s not even just automating a low-level cognitive task that’s deeply boring. It can automate the whole process — and that’s historically new. And that means we have to be far more thoughtful about what we automate vs. what we augment.

Q. You’ve said you wrote this book because today’s AI policy debates often miss what’s best for people. What are we getting wrong?

A. On one side you have the AI utopianists — the “wave the AI wand and everything gets perfect” crowd. “You’ll never have to work again. There’ll never be cancer.” It’s absurd. I call it the imagination disease: “I can imagine a world in which everything’s perfect, therefore it will be perfect.” And when you add trillions of dollars of investment pressure on top of that, it gets even worse because humans can’t deal with that kind of pressure.

Then on the other side you have the dystopian story: “AI is going to destroy us all, it’s going to take all the jobs, it’s going to ruin everything.” And I’ve been building this stuff for 30 years. I’ve used it for my son’s diabetes. I’ve used it for refugees. Bipolar disorder. Postpartum depression. Perimenopausal depression. I built literal cyborgs early on — using AI to help improve cochlear implants so people could hear speech in noise. So I don’t buy the simplistic dystopia either.

The problem is, almost nobody is talking about what I think is the most important frame. If you look at AI as an astonishing cognitive tool, the question becomes “How does it make human beings better?” And then, “What does that imply for education, workforce policy, infrastructure — everything?”

Q. In the book, you describe experiments you ran to find which types of people use AI most effectively. What did you discover?

A. We ran this experiment where small groups of students from UC Berkeley had an hour to make 10 predictions about the future. For example: What will the price of oil be in six months? Humans are terrible at that — unsurprisingly. We’re no good at making predictions about things we don’t know anything about. The smallest open-source model we used was better than the best human by a lot. And the bigger, more sophisticated the model, the better it did.

Then we looked at what I call hybrid intelligence — what happens when you put people and machines together. And we got two very different patterns. One group — what we called the “automators” — would basically say “Gemini, GPT, give me the answer,” and then submit it. They’re not participating. I put electroencephalogram, or EEG, sensors on a couple of them and compared them to people reasoning on their own — or even just using Google. There was dramatically less cognitive activity.

But then there was another group — about 10% of the Berkeley students — who became what we called “cyborgs.” They would push back: “What about this?” The AI would say “But the data …” and they’d say “OK, not that — what about this instead?”

There’s a back-and-forth where they actively explore why they might be wrong. They don’t just accept the answer. Those cyborg teams did better than the best people and they did better than the best models. In fact, three students with no prior knowledge performed comparably to prediction markets — like the kind where tens of thousands of people have money on the line. That’s genuinely exciting.

The catch is, it was a small percentage. Which means it’s not enough to say AI makes people better. We have to ask “What makes the cyborg pattern happen and how do we pull more people into it?”

Q. You’ve said it doesn’t matter much which AI model people use; what matters is how they use it. What does that imply for the AI industry, which is spending tons of capital to build better models?

A. That’s a huge deal, because right now, nearly every major company is optimizing for autonomy.

Read the model cards, read the benchmarks: it’s all about what the system can do by itself. AI optimized only for autonomy is a dead end for humanity. If the goal is to make people better, then we should be building systems designed around productive friction — systems that challenge you, that help you explore, that don’t just hand you the answer. But those systems would score worse on autonomy benchmarks by definition, because they’re not doing the work alone.

So from an industry standpoint, we’re measuring the wrong thing. We’re building toward the wrong end state. And we’re leaving the most valuable use case — the one that actually improves human capability — underdeveloped.

Q. You’ve said your biggest fear isn’t a sci-fi takeover; it’s a future in which people rely on AI too passively. Can you explain?

A. Cognitive decline is a long-term phenomenon. It’s not like “Oh my God, my child asked AI for an answer and now they’re doomed.” This is more like a lifestyle issue. And it’s not wrong that people use tools in shallow ways sometimes — we didn’t evolve to be deep all the time. That would be exhausting.

The concern is what happens when shallow use becomes the default and there’s very little cognitive engagement. In our experiment, the “automators” were basically using AI as a substitute. They’d get an answer and submit it.

And you see it outside the lab, too — people scrolling, people consuming outputs, never really asking “Why do I believe this? What’s missing? What’s the alternative?”

So what does cognitive decline look like? It can look like disengagement. It can look like losing the habit of wrestling with uncertainty. It can look like becoming less able — or less willing — to check your own thinking. Over time, that’s a real loss.

Q. What does it look like to use AI constructively — to become a “cyborg,” or AI-powered human, rather than an “automator” whose passive overdependence on AI leads to cognitive decline?

A. The key is that it’s only when humans and machines are fundamentally working together — where the human challenges the AI and the AI challenges the human — that you get the dynamic that produces better outcomes than either alone.

We tried a simple intervention: We fine-tuned a small open-source model to not give answers. It would ask questions and push students instead. The students hated it. They were like, “Stop being Socrates — just tell me the price of oil!”

But twice as many of them switched into cyborg mode and achieved superhuman performance. That’s the hint: The goal isn’t comfort. The goal is productive friction. Use AI to challenge you, not just to reward your first thought.

A practical example is what I call the “Nemesis prompt.” I used it while writing. I didn’t let the AI write chapters. I wrote the chapter, then I’d say something like “You are my nemesis — my lifelong enemy. You’ve found every mistake I’ve ever made. Here’s the draft. Tell me why I’m wrong, in detail, and how to make it better.”

Then you can flip it: “Now you’re a bored reader. Tell me why this doesn’t matter to you and how to make it connect without dumbing it down.”

That’s a very different relationship with the tool than “Give me the answer.”

Q. What advice do you have for parents and teachers who want to prepare kids to thrive in the age of AI?

A. One thing I say in the book is, our education system has largely been built around well-posed problems — problems where we already understand the question and we already know the answers, or the formula that gets you to the answer. Then we grade kids on how well they reproduce the “right” answers.

I don’t need that anymore. I have all those answers for free in my pocket — better, cheaper, faster than a human can give them. That doesn’t mean kids shouldn’t learn fundamentals; they’re still important. But the entire endeavor changes. What’s left is our ability to explore the unknown — the ill-posed problems. To do that, kids need to be willing to be wrong sometimes. They need curiosity. They need intellectual humility — the ability to hear “you’re wrong” and respond with interest instead of collapse. They need perspective-taking — understanding what other people think and what other people think about what you think.

Some of this is early-life development: rich conversation, reading, enriched environments, diverse experiences — these support working memory and the foundations of fluid intelligence. But after that, a lot of it becomes maintenance and practice. And you can do very concrete things. Reward questions, not just answers. Build a culture where asking is valued. Encourage productive failure.

Try a “failure diary” — not to glorify failure but to link mistakes to learning and progress. Help kids see errors as information. Then reinforce it daily: Use GPS to get around, but don’t surrender to it. Check the route and ask “Do I know better?” “Why this way?”

Keep the convenience and keep your brain online.

— La Jolla Light staff contributed to this article, which first appeared in the UC San Diego Today newsletter by UCSD Communications. It is republished here with permission.
