AI as a "New Digital Species": Thoughts on Microsoft AI Chief's TED Talk

In a thought-provoking talk at TED, Microsoft AI chief executive Mustafa Suleyman painted a compelling vision of artificial intelligence as a "new kind of digital species" - a framing that, while evocative, is likely premature and may not stand up to closer scrutiny. First, if you haven't yet seen it, take 18 minutes to watch it below:

Suleyman's central argument is that AI has advanced to such a degree that it can no longer be considered a mere tool, but rather a new form of being that will come to permeate every aspect of our lives. He envisions a future where AI companions are ubiquitous, representing not just individuals but organizations, cities, and objects. These AI entities, he suggests, will be infinitely knowledgeable, factually accurate, and reliable - a digital manifestation of the best qualities of human nature.

For years, we in the AI community, and I specifically, have had a tendency to refer to this as just tools. But that doesn't really capture what's actually happening here.
AIs are clearly more dynamic, more ambiguous, more integrated and more emergent than mere tools, which are entirely subject to human control.
So to contain this wave, to put human agency at its center and to mitigate the inevitable unintended consequences that are likely to arise, we should start to think about them as we might a new kind of digital species.
Now it's just an analogy, it's not a literal description, and it's not perfect.
For a start, they clearly aren't biological in any traditional sense, but just pause for a moment and really think about what they already do. They communicate in our languages. They see what we see. They consume unimaginably large amounts of information. They have memory. They have personality. They have creativity. They can even reason to some extent and formulate rudimentary plans. They can act autonomously if we allow them.
And they do all this at levels of sophistication that is far beyond anything that we've ever known from a mere tool. And so saying AI is mainly about the math or the code is like saying we humans are mainly about carbon and water. It's true, but it completely misses the point.
And yes, I get it, this is a super arresting thought but I honestly think this frame helps sharpen our focus on the critical issues.

It's a seductive vision, to be sure. The idea of having a perfectly rational, endlessly informed AI partner at our beck and call is undeniably appealing. And Suleyman is right to point out that AI has made remarkable strides in recent years, from mastering complex games to engaging in creative and conversational feats that would have seemed impossible just a decade ago.

However, the "digital species" metaphor feels overextended when applied to the current state of AI. It evokes a sense of autonomy and self-determination that today's AI lacks, and that Suleyman himself seems hesitant to encourage. Today's AI systems, no matter how advanced, operate within a framework meticulously crafted by humans. They do not possess independent desires or motivations; their "actions" are responses to vast arrays of data, processed through algorithms designed, adjusted, and fine-tuned by developers.

They are, at their core, tools: extraordinarily sophisticated and powerful tools, but tools nonetheless.

Still, Suleyman's framing is useful when considered as a provocation meant to shift our paradigm of what AI could become, rather than as a literal description. It pushes us to think more critically about the ethical frameworks and control mechanisms we will need as AI technologies grow more advanced and more integrated into everyday human activities. Is "digital species" the right metaphor? I don't know. How do you feel about limiting another species and keeping it under your control? But, I digress.

Another place where I think Suleyman's rosy vision of AI falls short is in his characterization of its foundations.

In the past, unlocking economic growth often came with huge downsides.
The economy expanded as people discovered new continents and opened up new frontiers. But they colonized populations at the same time.
We built factories, but they were grim and dangerous places to work.
We struck oil, but we polluted the planet.
Now because we are still designing and building AI, we have the potential and opportunity to do it better, radically better. And today, we're not discovering a new continent and plundering its resources. We're building one from scratch.

The problem is that Suleyman glosses over the many thorny ethical and societal issues that AI technology raises. Today's AI models have been trained on vast troves of data, including art, literature, and music, often obtained with questionable consent or compensation from the original creators. There are serious and legitimate concerns about data privacy, ownership rights, and fair compensation for those whose work has been used to train these models.

Moreover, the potential for AI to disrupt and displace the livelihoods of the very people who provided the data on which it was trained cannot be ignored. If we are to embrace AI as our future, we must also include conversations about access, equity, and ensuring that the benefits of this technology are distributed fairly. Ultimately, "unlocking economic growth" with AI is likely to be messy and fraught with difficult choices, just like previous periods in our history.

Now, none of this is to say that AI isn't a transformative technology with immense potential for good. The possible applications in areas like healthcare, education, and sustainability are indeed exciting. But we need to approach the development and deployment of AI with clear eyes and a commitment to addressing its challenges and pitfalls.

Let me wrap up by stressing that I do not think this TED talk represents Suleyman's complete perspective on AI. TED is a specific stage and there is only so much that you can cover in 18 minutes. In fact, if you haven't already done so, you should definitely check out his book, "The Coming Wave".

I do think he is absolutely right that we need a robust public dialogue about the future we want to build with AI. Suleyman's talk, and the critical responses like mine that it invites, are all part of this essential discourse.