I’ve been a technologist for over thirty years and yet there have only been three times during my career when I was certain the world was undergoing a technological revolution. The first was the PC Revolution which began in the late 1970’s and took over a decade to change society. Then came the Internet Revolution, which began in the late 80’s and took a little less than a decade to transform society. And then came the Mobile Revolution which began on January 9th, 2007 – the literal “iPhone moment” when Steve Jobs first introduced the smartphone to the world. Within only six years, smartphones outnumbered flip phones marking a shockingly rapid transformation.
We are now at the start of a new computing revolution – the AI Revolution, and there are three things I know for sure. First, it will be just as significant as the prior revolutions above, transforming society at all levels. Second, It will happen significantly faster than previous revolutions. The launch of ChatGPT will go down in history as the “iPhone moment” of this revolution and the broad impacts on society will be felt in 1 to 3 years, not a full decade. And third, the world is still underestimating the impact that generative AI will have on society. Most view this technology as a means of creating human-quality content at scale. This is true, but generative AI is much more than that – it’s an entirely new form of media that we’ve never confronted before and could be far more dangerous than current media.
When it comes to the risks of generative AI, a flood of mainstream articles have been written and yet everything that I’ve seen boils down to three simple arguments, none of which reflect the biggest danger I see coming our way. Before I get into this overlooked threat of generative AI, it is helpful to summarize the most common warnings that have been raised by safety experts:
- Risk to Jobs – Generative AI can now produce human-level work products ranging from artwork and essays to scientific reports. This will greatly impact the job market but I believe it is a manageable risk as job definitions adapt to the power of AI. It will be painful for a period, but not dissimilar from how previous generations adapted to other work-saving efficiencies.
- Risk of Fake Content – Generative AI can now create human quality artifacts at scale, including fake and misleading articles, essays, papers, and video. Misinformation is not a new problem, but generative AI will allow it to be mass-produced at levels never before seen. This is a major risk, but manageable. That’s because fake content can be made identifiable by either (a) mandating watermarking technologies that identify AI content upon creation, or (b) by deploying AI-based countermeasures that are trained to identify AI content after the fact.
- Risk of Sentient Machines – Many researchers worry that AI systems will get scaled up to a level where they develop a “will of their own” and will take actions that conflict with human interests, or even threaten human existence. I believe this is a genuine long-term risk. In fact, I wrote a “picture book for adults” entitled Arrival Mind a few years ago that explores this danger in simple terms. Still, I do not believe that current AI systems will spontaneously become sentient without major structural improvements to the technology. So, while this is a real danger for the industry to focus on, it’s not the most urgent risk that I see before us.
So, what concerns me most about the rise of Generative AI?
From my perspective, the place where most safety experts go wrong, including policymakers, is that they view generative AI primarily as a tool for creating traditional content at scale. While the technology is quite skilled at cranking out articles, images, and videos, the more important issue is that generative AI will unleash an entirely new form of media that is highly personalized, fully interactive, and potentially far more manipulative than any form of targeted content we have faced to date.
Welcome to the age of Interactive Generative Media
The most dangerous feature of generative AI is not that it can crank out fake articles and videos at scale, but that it can produce interactive and adaptive content that is customized for individual users to maximize persuasive impact. In this context, Interactive Generative Media can be defined as targeted promotional material that is created or modified in real-time to maximize influence objectives based on personal data about the receiving user. This will transform “targeted influence campaigns” from buckshot aimed at broad demographic groups to heat-seeking missiles that can zero-in on individual persons for optimal effect. And as described below, this new form of media is likely to come in two powerful flavors, “Targeted Generative Advertising” and “Targeted Conversational Influence.”
Targeted Generative Advertising is the use of images, videos, and other forms of informational content that look and feel like traditional ads but are personalized in real-time for individual users. These ads will be created on the fly by Generative AI systems based on influence objectives provided by third-party sponsors in combination with personal data accessed for the specific user being targeted. The personal data may include the user’s age, gender, and education-level, combined with their interests, values, aesthetic sensibilities, purchasing tendencies, political affiliations and cultural biases.
In response to the influence objectives and targeting data, the Generative AI will customize the layout, feature images, and promotional messaging to maximize effectiveness on that user. Everything down to the colors, fonts, and punctuation could be personalized along with age, race, and clothing styles of any people shown in the imagery. Will you see video clips of urban scenes or rural scenes? Will it be set in the fall or spring? Will you see images of sports cars or family vans? Every detail could be customized in real-time by generative AI to maximize the subtle impact on you personally.
And because tech platforms can track user engagement, the system will learn which tactics work best on you over time, discovering the hair colors and facial expressions that best draw your attention. If this seems like science fiction, consider this – both Meta and Google have recently announced plans to use Generative AI in the creation of online ads. If these tactics produce more clicks for sponsors, they will become standard practice and an arms race will follow, with all major platforms competing to use generative AI to customize promotional content in the most effective ways possible.
This brings me to Targeted Conversational Influence, which is a generative technique in which influence objectives are conveyed through interactive conversation rather than traditional documents or videos1. The conversations will occur through chatbots (like ChatGPT and Bard) or through voice-based systems powered by similar Large Language Models (LLMs). Users will encounter these “Conversational Agents” many times throughout their day, as third party developers will use APIs to integrate LLMs into their websites, apps, and interactive digital assistants. For example, you might access a website looking for the latest weather forecast, engaging conversationally with an AI Agent to request the information. In the process, you could be targeted with conversational influence – subtle messaging woven into the dialog with promotional goals.
As conversational computing becomes commonplace in our lives, the risk of conversational influence will greatly expand, as paying sponsors could inject messaging into the dialog that we may not even notice. And like Targeted Generative Ads, the messaging goals requested by sponsors will be used in combination with personal data about the targeted user to optimize impact2. The data could include the user’s age, gender, and education-level combined with personal interests, hobbies, values, etc… thereby enabling real-time generative dialog that is designed to optimally appeal to that specific person.
Why use conversational influence?
If you’ve ever worked as a salesperson, you probably know that best way to persuade a customer is not to hand them a brochure, but to engage them in face-to-face dialog so you can pitch them on the product, hear their reservations, and adjust your arguments as needed. It’s a cyclic process of pitching and adjusting that can “talk them into” a purchase. While this has been a purely human skill in the past, generative AI can now perform these steps, but with greater skill and a deeper knowledge to draw upon.
And while a human salesperson has only one persona, these AI Agents will be digital chameleons that can adopt any speaking style from nerdy or folksy, to suave or hip, and can pursue any sales tactic from befriending the customer to exploiting their fear of missing out. And because these AI Agents will be armed with personal data, they could mention the right musical artists or sports teams to ease you into friendly dialog. In addition, tech platforms could document how well prior conversations worked to persuade you, learning what tactics are most effective on you personally. Do you respond to logical appeals or emotional arguments? Do you seek the biggest bargain or highest quality? Are you swayed by time-pressure discounts or free add-ons? Platforms will quickly learn to pull all your strings.
Of course, the big threat to society is not the optimized ability to sell you a pair of pants. The real danger is that the same techniques will be used to drive propaganda and misinformation, talking you into false beliefs or extreme ideologies that you might otherwise reject. A conversational agent, for example, could be directed to convince you that a perfectly safe medication is a dangerous plot against society. And because AI agents will have access to an internet full of information, it could cherry-pick evidence in ways that would overwhelm even the most knowledgeable human. This creates an asymmetric power-balance often called the AI Manipulation Problem in which we humans are at an extreme disadvantage, conversing with artificial agents that are highly skilled at appealing to us, while we have no ability “to read” the true intentions of the entities we’re talking to.
Unless regulated, Targeted Generative Ads and Targeted Conversational Influence will be powerful forms of persuasion in which users are outmatched by an opaque digital chameleon that gives off no insights into its thinking process but are armed with extensive data about our personal likes, wants and tendencies, and has access to unlimited information to fuel its arguments. For these reasons I urge regulators, policymakers, and industry leaders to focus on generative AI as a new form of media that is interactive, adaptive, personalized, and deployable at scale. Without meaningful protections, consumers could be exposed to predatory practices that range from subtle coercion to outright manipulation.
References
- Rosenberg, Louis. “The Metaverse and Conversational AI as a Threat Vector for Targeted Influence.” 2023 IEEE 13th Annual Computing and Communication Workshop and Conf. (CCWC), 2023. 10.1109/ccwc57344.2023.10099167.
- Rosenberg, Louis. The Manipulation Problem: Conversational AI as a Threat to Epistemic Agency. 2023 CHI Workshop on Generative AI and HCI (GenAICHI 2023). Association for Computing Machinery, Hamburg Germany (April 28, 2023)
Louis Rosenberg, PhD is an early pioneer in the fields of VR, AR, and AI, and the founder of Immersion Corporation (IMMR: Nasdaq), Microscribe 3D, Outland Research, and Unanimous AI. He is currently CEO of Unanimous AI, Chief Scientist of the Responsible Metaverse Alliance, and Global Technology advisor to the XR Safety Initiative. He also writes often for major publications such as VentureBeat and Big Think.