AI at Davos 2026: From work impact to Europe’s place. Here’s what the tech leaders hope and fear

Artificial intelligence (AI) has infiltrated nearly every conversation at Davos 2026, rivalling the prominence of traditional hot-button issues such as trade tariffs, international competition, and geopolitical tensions.
Last year at Davos, the Chinese company DeepSeek sparked a frenzy when it launched its AI model and chatbot that the company claimed was cheaper and performed just as well as OpenAI’s rival ChatGPT model.
However, this year, discussions on AI have widened to include how it is being implemented, the risks the technology carries, and its impact on work and society.
Here is what tech leaders have said at Davos.
‘Once in a lifetime opportunity for Europe’ – Jensen Huang, Nvidia
The founder and CEO of the chip giant Nvidia told the Davos forum that AI is “exciting for Europe” because the continent has an “incredibly strong manufacturing base” on which to build AI infrastructure.
Huang said that Europe now has the chance to “leapfrog” the software era, and advised the continent to “get in early now” so it can fuse its manufacturing strength with AI infrastructure.
“Robotics is a once in a lifetime opportunity for European countries,” he added.
But he said that AI needs more energy, more land and power, and more skilled trade workers, which he said Europe has in its strong working population.
He was also upbeat about how AI would affect the world of work, saying that instead of taking jobs, AI would create many more manual ones.
“It’s wonderful that the jobs are related to trade craft – we’re going to have plumbers and electricians… all of these jobs, we’re seeing quite a significant boom and salaries have gone up. Nearly double,” he said.
“Everybody should be able to make a great living; you don’t need a PhD in computer science for this,” he added.
‘Do something useful’ – Satya Nadella, Microsoft
Microsoft’s chief executive officer, Satya Nadella, stressed that AI must be put to useful ends.
“We as a global community have to get to a point where we are using [AI] to do something useful that changes the outcomes of people and communities and countries and industries,” Nadella said.
Nadella warned that AI deployment will be unevenly distributed across the globe, constrained primarily by access to capital and infrastructure.
Realising AI’s potential requires “necessary conditions”—chiefly attracting investment and building supportive infrastructure, he said. While major tech companies are “investing all over, including the global south,” success hinges on policies that attract both public and private capital.
Critical infrastructure, such as electrical grids, is “fundamentally driven by governments,” he said, and private companies can only operate effectively once basic systems such as energy and telecom networks are in place.
‘Not really human’ – Yoshua Bengio
The Canadian computer scientist Yoshua Bengio, one of the so-called ‘godfathers of AI’, warned that today’s systems are trained to be too human-like.
“Many people interact with them with the false belief that they [AI] are like us. And the smarter we make them, the more it’s going to be like this, and there are people who make them want to look like us… But it’s not clear if it’s going to be good,” he said.
“Humanity has developed norms and psychology that interact with other people. But AIs are not really human,” he added.
‘The most intelligent entities on the planet can also be the most deluded’ – Yuval Harari
The popular science writer and philosopher warned against AI superintelligence, broadly defined as AI that surpasses human cognitive capabilities, saying that we have “no experience with building a hybrid human AI society”. He called for humility and a “correction mechanism” in case things go wrong.
He also said that comparing AI to human intelligence is “a ridiculous analogy” and that AI will never be like humans, just as aeroplanes are not birds. “The most intelligent entities on the planet can also be the most deluded,” he said.
‘Not selling chips to China is one of the biggest things we can do’ – Dario Amodei, Anthropic
The CEO and co-founder of Anthropic said that AI’s development is exciting, saying that we’re “knocking on the door of incredible capabilities”, but that the next few years will be critical for how we regulate and govern the technology.
The discussion centred on what happens after artificial general intelligence (AGI), when AI matches or surpasses human cognitive capabilities and humans could lose control of it.
Amodei argued that “not selling chips to China is one of the biggest things we can do to make sure we have time to handle this,” referring to the risk of AI getting out of control. He also told Bloomberg that the United States’ decision to sell Nvidia’s H200 AI chips to China would have “grave” consequences for its AI lead.
Amodei said that if “geopolitical adversaries building at a similar pace” slowed down, then the real AI competition would be between his company and other tech firms, not a battle between the US and China.
As for the future of work, Amodei has famously said that AI could wipe out half of all entry-level white-collar jobs.
However, he said that while AI has not yet had a massive impact on the labour market, he is seeing some changes in the coding industry.
‘More meaningful jobs created’ – Demis Hassabis, Google DeepMind
The CEO of Google DeepMind was more optimistic. While on the same panel as Amodei, he said he expected “new, more meaningful jobs being created.”
Hassabis said he thinks there will be a slowdown in internship hiring, but that this would be “compensated by the amazing tools out there for everyone.”
For undergraduates, he advised that instead of doing internships, they use the time to “get proficient in learning these tools,” which he said “could be better than traditional internships as you are leapfrogging yourself for the next five years”.
But he warned that after AGI arrives, the job market would be in “uncharted territory”.
Hassabis said AGI could arrive in five to 10 years, and could mean there is not enough work for people, raising bigger questions about meaning and purpose, not just salaries.
The CEO also pointed out that geopolitical competition and rivalry between AI companies meant that safety standards were being rushed. He called for international agreement, such as a minimum safety standard, and a slightly slower pace of development so “we can get this right for society”.