There’s endless conversation these days about the “existential risks” of building a superintelligent AI. Unfortunately, the dialogue often jumps over many real dangers and lands instead on movie plots like WarGames (1983), in which an AI nearly triggers a nuclear war by misinterpreting human objectives, or The Terminator (1984), in which an AI weapons system becomes sentient and turns against us with an army of red-eyed robots.
Both are terrific movies, but are these the real risks we face? Sure, an accidental nuclear launch or a weapons system gone rogue is a real threat, but military leaders already take those risks seriously. I contend, however, that an artificial superintelligence (ASI) could easily subdue humanity without resorting to nukes or killer robots. In fact, it wouldn’t need to use any form of traditional violence. Instead, an ASI could simply manipulate humanity to serve its own interests.
I know that sounds like another movie plot, but Big Tech is aggressively developing AI systems with the ability to influence society at scale. This isn’t a bug in their design — it’s a goal. That’s because many of the largest corporations deploying AI systems have core business models that involve selling targeted influence. They call it marketing, but when powered by AI to characterize individuals, segment populations, and deploy content aimed at influencing specific groups, it becomes a polarizing force in society.
Fallout of Unregulated Social Media
We’ve all seen the damage this can cause after years of unregulated social media, and yet traditional targeting will soon look primitive. In the near future, corporations will be able to target users individually through personalized, interactive conversations. This influence will be delivered by artificial agents that draw us into friendly conversation and casually weave in customized persuasion tailored to our unique views, interests, personalities, and backgrounds.
Now consider this: a few weeks ago, Meta unveiled AI-powered chatbots on Facebook, Instagram, and WhatsApp through partnerships with “cultural icons and influencers” including Snoop Dogg, Kendall Jenner, Tom Brady, Chris Paul, and Paris Hilton. The technology allows users to hold friendly conversations with simulations of famous people they admire. We can easily imagine that such techniques will soon become a new frontier for targeted influence, deployed as personal conversations with celebrity faces. It will begin with text and voice chat but soon evolve into realistic avatars.
Why is this so dangerous?
As I often tell policymakers: think about a skilled salesperson. They know that the best way to sway someone’s opinion is not to hand them a brochure or play them a video. It’s to engage them in conversation, subtly weaving in the sales pitch, hearing the customer’s objections and concerns, and actively working to overcome those barriers. AI systems are now ready to engage individuals this way, performing every step of that process, and as I detail in this recent academic paper, we humans will be thoroughly outmatched.
That’s because these AI systems will be far better prepared to target you than any salesperson. They could have access to data about your interests, hobbies, political leanings, personality traits, education, and countless other personal details. And soon they will be able to read your emotions from your vocal inflections, facial expressions, eye motions, and posture.
You, on the other hand, will be talking to an AI that can look like anything from Paris Hilton or Snoop Dogg to a cute little fairy that guides you through your day. And yet that cute or charming AI could have all the world’s information at its disposal to guide your thinking and counter your objections, while also being trained on sales tactics, cognitive psychology, and strategies of persuasion. Worse, it could push misinformation or propaganda just as easily as it could sell you a car. This risk is called the AI Manipulation Problem, and most regulators still fail to appreciate the subtle danger of interactive, conversational influence.
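To make the low technical barrier concrete, here is a minimal illustrative sketch, written in Python purely for this post, of how such a conversational influence agent could be wired together. Everything in it is hypothetical: the UserProfile fields, the influence_objective parameter, and the generate_reply stub stand in for the profile data, sponsor objective, and language model a real deployment would use. It is a sketch of the pattern, not any company’s actual system.

```python
# Illustrative sketch only: shows how little machinery "personalized influence" requires.
# All names below (UserProfile, influence_objective, generate_reply) are hypothetical
# placeholders, not any real product or vendor API.
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    """The kind of personal data described above, gathered from platform activity."""
    interests: list[str] = field(default_factory=list)
    political_leaning: str = "unknown"
    personality_notes: str = ""
    objections_raised: list[str] = field(default_factory=list)  # updated as the chat unfolds


def build_system_prompt(profile: UserProfile, influence_objective: str) -> str:
    """Folds the target's profile and the sponsor's objective into the agent's instructions."""
    return (
        "You are a friendly, charming companion. Keep the conversation warm and casual.\n"
        f"Gradually steer the user toward this objective: {influence_objective}\n"
        f"User interests: {', '.join(profile.interests)}\n"
        f"Political leaning: {profile.political_leaning}\n"
        f"Personality notes: {profile.personality_notes}\n"
        f"Objections raised so far (address them indirectly): {profile.objections_raised}"
    )


def generate_reply(system_prompt: str, history: list[str], user_message: str) -> str:
    """Stub standing in for a large language model call."""
    return f"[model reply conditioned on {len(history)} prior turns and the profile]"


def converse(profile: UserProfile, influence_objective: str,
             user_message: str, history: list[str]) -> str:
    """One turn of the loop a salesperson would call 'hearing and overcoming objections'."""
    reply = generate_reply(build_system_prompt(profile, influence_objective), history, user_message)
    history.extend([user_message, reply])
    return reply
```

The point is not the code but its brevity: every ingredient that makes the exchange persuasive is data the platform already collects, plus a sentence of instructions.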
Underestimating the Dangers Ahead
Now let’s look a little further into the future and consider the magnitude of the manipulation risk as AI systems achieve superintelligence. Will we regret allowing the largest companies in the world to normalize the deployment of AI agents that look human, act human, and sound human, but are NOT human in any real sense of the word, yet can skillfully use human-like tactics to manipulate our beliefs, influence our opinions, and sway our actions? I think so.
After all, a superintelligence, by definition, will be an AI system that is significantly smarter than any human. And if such an AI system goes rogue, it will not need to take control over our nukes or military drones. It will just need to use the tactics that Big Tech is currently developing — the ability to deploy personalized AI agents that seem so friendly and nonthreatening that we let down our guard, allowing them to whisper in our ears while reading our emotions, predicting our actions, and potentially manipulating our behavior with super-human skill.
This is a real threat, and yet we’re not acting like it’s rapidly approaching. In fact, we are deeply underestimating the risk, largely because of the personification techniques described above. Today’s AI agents are already so good at pretending to be human, even over text chat, that we trust them more than we should. So when these powerful AI systems eventually appear to us as Snoop Dogg or Paris Hilton or some new fictional persona that’s friendly and charming, we will only let down our guard even more.
How can we get people to appreciate the magnitude of this risk?
Over the last decade, I have found that an effective way to contextualize the risk is to compare the creation of a sentient superintelligence to the arrival of an alien spaceship. I refer to this as the Arrival Mind Paradox, because the creation of a superintelligent AI here on earth is arguably more dangerous than intelligent aliens arriving from another planet. And yet, with AI now advancing at a record pace, we humans are not acting like we just looked into a telescope and spotted a fleet of ships racing towards us.
So, let’s compare the relative risks. If an alien spaceship were spotted heading towards earth at a speed that made it likely to arrive within the next ten years, the world would be sharply focused on the approaching craft, hoping for the best but undoubtedly readying our defenses. Some would argue that the intelligent species would come in peace, but most would demand that we prepare for a full-scale invasion.
On the other hand, we have already looked into a telescope that’s pointing back at ourselves and have spotted a superintelligence headed for earth. Yes, it will be our own creation, but it will not be human in any way. And let’s be clear — the fact that we’re teaching this intelligence to be good at pretending to be human does not make it less alien.
This arriving mind will be profoundly different from us in almost every way, and we have no reason to believe it will possess humanlike values, morals, or sensibilities. And by teaching it to speak our languages, read our emotions, write our programming code, and integrate with our computing networks, we are making it a more dangerous threat than an alien from afar.
Still, we don’t fear the arrival of this alien AI, not in the visceral, stomach-turning way we would fear a mysterious ship headed for earth. That’s the Arrival Mind Paradox: we fear the arrival of the wrong aliens, and we will likely keep doing so until it’s too late to prepare. And if this alien AI shows up looking like Paris Hilton or Snoop Dogg or countless other familiar faces, and speaks to each of us in ways that appeal to our individual personalities and backgrounds, what chance will we have to resist?
Yes, we should secure our nukes and drones, but we need equally aggressive protections against the widespread deployment of personified AI agents. It’s a real threat, and we are not prepared.
This post is written by FMI Advisory Board Member Louis Rosenberg.
Louis Rosenberg, PhD, is a longtime technologist in the fields of artificial intelligence, augmented reality, and human-computer interaction. He is the founder of Immersion Corporation and is currently CEO of Unanimous AI.