October 24, 2025

Microsoft AI Boss Says Tech Must Serve People, Not Scare Them

Microsoft AI CEO Mustafa Suleyman Warns Trust in Tech Is Low — He Says AI Must Be Built for Humans, Not as Digital Humans

In the quiet hum of servers and the buzz of innovation at Microsoft, a surprisingly human message is rising. Mustafa Suleyman, the CEO of Microsoft’s AI division, is asking something simple but profound: “Can we build technology that serves people, not overwhelms them?” His words carry weight now because the world is moving fast—and not always in a way that feels safe or comfortable.

Suleyman’s view isn’t what you might expect from a tech leader chasing growth. He’s blunt about the fact that trust in technology is low. “We are creating AIs that are emotionally intelligent, that are kind and supportive, but that are fundamentally trustworthy,” he said in an interview. He argues that instead of building clever tools just because we can, we need to ask whether these tools truly help. He puts it simply: technology “should work in service of people — not the other way around.”

What does that look like in practice? For Microsoft’s flagship AI, known as Copilot, the answer lies in small but meaningful design choices. The system is being developed to be helpful, not intrusive. It’s trained to direct users toward human experts when needed—like when a medical question comes up—rather than pretend to already be the expert. Suleyman says that’s intentional: “It’s not about replacing people—it’s about amplifying what people do best.”

He also draws a firm line when it comes to limits. When asked about features like AI-generated erotica or “buddy” chatbots that mimic human intimacy, Suleyman was clear: Microsoft won’t go there. He called such directions “dangerous” and “misguided,” saying that treating machines like human companions can lead to real harm.

The reason he’s so focused on these issues isn’t showmanship—it’s rooted in experience. Suleyman co-founded DeepMind and later launched Inflection AI before joining Microsoft, so his warnings aren’t coming from the sidelines. He knows what it takes for groundbreaking tech to scale—and what happens when society isn’t ready. In a recent piece, he described a growing concern about what he calls “AI psychosis,” in which people place more trust in these systems than they were ever designed to carry.

But the tone he uses is not doom. He speaks of possibility and connection. He imagines a future where our tools are not just faster or smarter, but kinder and more aligned with our values. He talks about AI that helps students learn in ways that suit them, tools that help professionals explore creative ideas rather than automating their work, and systems that support mental health instead of making it worse. It’s a future where technology assists—not dominates.

For anyone who has ever felt uneasy with the pace of change, Suleyman’s approach offers a different path. It’s a reminder that innovation doesn’t need to outpace our humanity—it should be guided by it. In a world where headlines warn of tech run amok, his message stands out: trust is not automatic; it’s earned. And the first step is building tools that look you in the eye—and say, “I’m here to help.”