As a toddler you surely dropped a toy and watched it fall to the ground. Do that many times and your brain generalizes the phenomenon, building an accurate mental model of gravity. That model has allowed you to successfully navigate your world, not once jumping to your death off a high cliff or out an open window. Countless other mental models guide you through your day, allowing you to anticipate the outcome of your actions and interactions.
This process is called “intelligence.”
Although we take it for granted, intelligence requires each of us to continuously perform three basic steps – (1) observe our world, (2) generalize our experiences, and (3) build useful and reliable mental models. We use these mental models to make good decisions, planning our actions by skillfully predicting their most likely consequences.
Unfortunately, our mental models are being deeply distorted by social media.
That’s because media platforms have inserted themselves into this critical three-step process by feeding us curated news, targeted ads, and algorithmically shared content. While our brains expect our daily interactions to provide a representative depiction of our world, we now live in a mediated reality where each of us is delivered custom-selected content throughout our days. This causes us to incorrectly generalize our experiences, which in turn drives us to build flawed mental models.
This is especially true as we build models of our own society, for we no longer encounter ideas and perspectives based on how common they are within our communities. Instead, we are actively fed content based on whether it happens to engage our individual attention. Such targeting makes it impossible for our brains to accurately generalize our world, for we can easily be led to believe that rare beliefs are common and common beliefs are rare. And if we can’t generalize accurately, the models we form about our own society – our own communities – will be deeply distorted.
This would be like an evil scientist raising a group of toddlers in a staged world where most objects are filled with helium and only a few drop to the ground. Those children would develop highly flawed mental models of gravity in which most things fall upward. And it’s not because helium balloons are “fake news” – they really exist. It’s because our brains are built to find patterns in the world, distinguishing between common occurrences and rare events to build useful predictive models.
This is why more and more people are buying into absurd conspiracies, doubting well-proven scientific and medical facts, and losing trust in well-respected institutions. Social media is making it harder and harder for each of us to distinguish between the few rare helium balloons we encounter by chance and the world of solid objects that reflects our common reality.
So, how do we fix social media?
Lawmakers in Europe have been exploring consumer protections in their Digital Services Act (DSA), which includes proposals for restricting targeted advertising. Lawmakers in the US are looking into similar issues with the Banning Surveillance Advertising Act. These are valuable proposals, but unfortunately the problem goes far beyond targeted ads to all forms of targeted content. After all, social media platforms distort our view of society more significantly through targeted news and algorithmically shared content than through most forms of targeted advertising.
And it’s not just lawmakers who are focusing too narrowly on ads. Currently, Twitter and Facebook allow users to access a small amount of data about targeted advertisements. To get this information, you need to click multiple times, at which point you get an oddly sparse message such as – “You might be seeing this ad because Company X wants to reach people who are located here: the United States.” That’s hardly enlightening. We need real transparency in targeting, and not just for ads but for news feeds and all other content deployed through sharing and targeting algorithms.
And no, we will not solve this by simply publishing the underlying algorithms, as recently proposed by Elon Musk in his bid to acquire Twitter. Access to the algorithms is certainly useful for journalists and academics seeking to expose their effects, but publication alone leaves those effects unchanged. In fact, it’s very likely that exposing the algorithms will allow bad actors to better game the system, optimizing their efforts to push misinformation and disinformation as efficiently as possible.
The fact is, consumers don’t need to see the algorithms – we need to be given accurate contextual information that allows us to distinguish between content that is bouncing around an echo chamber and content that is broadly accepted by society at large. In other words, we need to demand changes in social media practice that will restore our human ability to build accurate mental models.
We could do this by requiring platforms to clearly disclose the context associated with every piece of targeted content, allowing users to easily distinguish between commonly shared information and fringe notions that only resonate in small pockets of society. No – I’m not suggesting that commonly shared ideas are more accurate than fringe beliefs. In fact, I’m passing no judgment on the content itself. I’m saying that in order for our brains to form effective mental models of our own society, we need accurate context so that we can correctly generalize how the information we encounter fits into our world.
And we need the context at the moment we engage. For example, users could be given simple visual cues about how large or narrow a slice of the general public is being targeted with each piece of content they encounter. And users should not have to click to get this information – it should automatically appear. It could be as simple as a pie chart showing what percent of society falls within the targeting and sharing parameters, revealing whether it’s a broad swath or a narrowly curated segment.
If I can quickly see that a piece of material is only percolating among a 0.6% sliver of the public, that should allow me to better generalize how it fits into society versus content shared within a 60% slice. Of course, I may still deeply agree with certain fringe ideas, but at least I’ll know that those perspectives are outside the mainstream and not get fooled into believing they are commonly held beliefs.
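To make this proposal concrete, here is a minimal sketch of how such a reach cue might be computed. Everything in it is an illustrative assumption: no platform currently exposes an audience size or total user count per piece of content, and the TargetingContext type, threshold bands, and function names are invented for this example.

```python
# A minimal sketch of the "targeting reach" cue described above.
# Hypothetical throughout: platforms do not currently expose these numbers.

from dataclasses import dataclass


@dataclass
class TargetingContext:
    audience_size: int  # users matching the content's targeting/sharing parameters (assumed input)
    total_users: int    # total active users on the platform (assumed input)


def reach_fraction(ctx: TargetingContext) -> float:
    """Fraction of the platform's public that falls within the targeting parameters."""
    return ctx.audience_size / ctx.total_users


def reach_label(ctx: TargetingContext) -> str:
    """Render the fraction as a simple at-a-glance cue; thresholds are arbitrary choices."""
    pct = 100 * reach_fraction(ctx)
    if pct >= 50:
        band = "broad swath of the public"
    elif pct >= 5:
        band = "sizable segment"
    else:
        band = "narrowly curated sliver"
    return f"Targeted at {pct:.1f}% of users ({band})"


# Mirroring the 0.6% vs. 60% comparison above (made-up audience counts):
print(reach_label(TargetingContext(audience_size=1_200_000, total_users=200_000_000)))
# -> Targeted at 0.6% of users (narrowly curated sliver)
print(reach_label(TargetingContext(audience_size=120_000_000, total_users=200_000_000)))
# -> Targeted at 60.0% of users (broad swath of the public)
```

The exact thresholds and wording matter far less than the principle: the figure should surface automatically at the moment of engagement, with no clicks required.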
Social media is damaging our intelligence by disrupting our natural ability to accurately model our world. Many steps are needed to fix big tech platforms, but transparency in targeting would be a major step in the right direction. If done thoughtfully, it could help ensure users aren’t fooled into believing that a story they keep encountering about lizard people meeting in the basements of bowling alleys to plan a revolution is a widely accepted notion. It is not.
Written by:
Louis Rosenberg, Ph.D.
CEO, Unanimous AI
Advisory Board Member, Future of Marketing Institute
Author Bio: Louis Rosenberg, PhD is a lifelong technologist in the field of human-computer interaction. Thirty years ago, he developed the first functional augmented reality system for the Air Force Research Laboratory (AFRL). He then founded the early virtual reality company Immersion Corporation (Nasdaq: IMMR) in 1993 and the early augmented reality company Outland Research in 2004. He is currently CEO and Chief Scientist of Unanimous AI, a company that amplifies the intelligence of human groups. Rosenberg earned his PhD from Stanford, was a professor at California State University, and has been awarded over 300 patents for his work in VR, AR, and AI.