The ‘Dead Internet Theory’ began circulating on imageboards such as 4chan and Wizardchan in the late 2010s.
On August 31, 2021, The Atlantic published an article titled “Maybe You Missed It, but the Internet ‘Died’ Five Years Ago,” bringing the ‘Dead Internet Theory’ into mainstream media.
It’s a conspiracy theory…
And it suggests that a significant portion of the content you see online is bot-dominated and algorithm-driven, sidelining genuine human engagement. In other words, automated systems, rather than real people, are increasingly shaping what we see and how we interact online.
There’s not much evidence backing the Dead Internet Theory…
But certain trends and observations support its claims. Keep in mind that these data points don’t definitively prove the theory but contribute to the ongoing debate:
Bot Traffic: A significant portion of Internet traffic is generated by bots. According to a 2023 report by Imperva, bots accounted for 37.9% of all web traffic. These bots range from search engine crawlers to malicious bots.
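To make the measurement concrete, here is a minimal sketch of how a bot share might be estimated from web server logs using user-agent strings. The marker list and sample log are invented for illustration; real measurement (such as Imperva’s) relies on behavioral and fingerprinting signals, since this keyword heuristic only catches bots that identify themselves.

```python
# Hypothetical keyword heuristic: flag requests whose user-agent
# self-identifies as automated. Markers below are illustrative only.
BOT_MARKERS = ("bot", "crawler", "spider", "curl", "python-requests")

def is_probable_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)

def bot_share(user_agents: list[str]) -> float:
    """Fraction of requests whose user-agent looks automated."""
    if not user_agents:
        return 0.0
    return sum(is_probable_bot(ua) for ua in user_agents) / len(user_agents)

# A tiny made-up access log: two browsers, one crawler, one script.
sample_log = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "python-requests/2.31.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]
print(f"Estimated bot share: {bot_share(sample_log):.0%}")  # 2 of 4 -> 50%
```

Note that this approach undercounts badly in practice: malicious bots routinely spoof browser user-agents, which is exactly why industry reports use more sophisticated detection.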
Social Media Bots: A notable share of accounts—about 45% on Instagram, 15% on Twitter, and 5% on Facebook—are thought to be automated bots pretending to be real people. Additionally, an estimated 5–30% of followers on these platforms are likely not authentic users.
The decline in organic engagement also adds to this…
Have you ever noticed that the content you see on platforms like Facebook, YouTube, and Google feels eerily tailored to your interests? That's because these platforms use sophisticated algorithms to curate content specifically for you.
While this may seem convenient, it can create what's known as an "echo chamber" or "filter bubble," where you're mainly exposed to viewpoints and content that align with your existing beliefs and interests. This can limit your exposure to diverse perspectives and potentially narrow your worldview.
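The feedback loop behind a filter bubble can be shown in a few lines. This toy simulation (topic names, scores, and the update rule are all invented, and no real platform works this simply) uses a greedy recommender that always serves the highest-affinity topic, with each view nudging that affinity up:

```python
# Toy filter-bubble simulation: engagement reinforces itself, so a tiny
# initial preference ends up dominating the entire feed.
def simulate_feed(affinity: dict[str, float], steps: int = 50) -> list[str]:
    shown = []
    scores = dict(affinity)
    for _ in range(steps):
        topic = max(scores, key=scores.get)  # greedy: most-liked topic wins
        scores[topic] += 0.1                 # each view boosts that topic
        shown.append(topic)
    return shown

# "politics" starts with only a 2% edge, yet fills the whole feed.
feed = simulate_feed({"politics": 1.02, "science": 1.0, "sports": 0.98})
print({t: feed.count(t) for t in set(feed)})  # {'politics': 50}
```

Real recommender systems add exploration and diversity terms precisely to soften this runaway effect, but the underlying incentive—serve what already engages—remains.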
Similarly, search engines like Google use algorithms to prioritize certain content over others in search results. While this aims to provide users with the most relevant information, it can also influence the visibility of user-generated content, making it harder for smaller voices to be heard.
Now, let's talk about comment sections and online communities. They're increasingly filled with spam, repetitive content, and even automated responses, making genuine human interaction harder to find.
The same goes for online forums and discussion boards. Some have experienced a decline in authentic human interaction, with bot-generated posts taking over, leaving little room for meaningful conversations.
Let’s assume it’s true. What is fueling it?
Swaying public opinion
Bots can really change the way people think by repeating certain ideas over and over. Whether it's politics, promoting a product, or spreading conspiracy theories, they create the illusion that everyone agrees with them.
The phenomenon of vanity metrics
People often measure success by the number of followers, likes, and shares someone has on social media. This leads individuals and businesses to prioritize quantity over quality, chasing more followers and engagement even when it doesn't reflect genuine interest or value.
For instance, about 40% of Kylie Jenner's followers aren't real people. Out of her massive 373 million followers, that's roughly 150 million fake accounts. Her sister Kendall isn't far behind either. About 37% of her 157.6 million followers are fake.
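The figures above are simple percentages of the quoted follower counts; a quick arithmetic check:

```python
# Sanity-check of the fake-follower figures quoted above.
def fake_followers(total: float, fake_share: float) -> float:
    """Estimated fake accounts = total followers * fake share."""
    return total * fake_share

kylie = fake_followers(373e6, 0.40)      # ~149 million of 373M
kendall = fake_followers(157.6e6, 0.37)  # ~58 million of 157.6M
print(f"Kylie: ~{kylie / 1e6:.0f}M fake, Kendall: ~{kendall / 1e6:.0f}M fake")
```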
inBeat is one tool that can estimate what share of a social media account's following is fake.
The Future of the Internet
Challenges in Detecting and Combating Bot-Driven Manipulation
Sophisticated Bots: Bots are becoming increasingly smarter, mimicking human behavior. Identifying them is challenging, especially when they blend seamlessly into online communities.
Evasive Tactics: Bot operators constantly adapt their tactics to evade detection. They use randomized posting times, varied content, and even machine learning to avoid patterns.
False Positives: Aggressive bot detection algorithms may inadvertently flag genuine users as bots, leading to false positives and unintended consequences.
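To illustrate the cat-and-mouse game described above, here is a hedged sketch of one classic timing heuristic: suspiciously regular gaps between posts suggest automation. The threshold is invented for illustration, and as noted above, bot operators defeat exactly this check by randomizing their posting times.

```python
import statistics

# Illustrative timing heuristic: near-zero variance in post intervals
# is a weak signal of automation. Threshold is a made-up example value.
def looks_automated(post_times: list[float], max_stdev_s: float = 5.0) -> bool:
    """post_times: Unix timestamps of an account's posts, in order."""
    if len(post_times) < 3:
        return False  # too few posts to judge; avoid false positives
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    return statistics.stdev(gaps) < max_stdev_s

bot_like = [0, 3600, 7200, 10800, 14400]   # posts exactly every hour
human_like = [0, 2100, 9800, 15000, 40000]  # irregular, bursty gaps
print(looks_automated(bot_like), looks_automated(human_like))  # True False
```

The false-positive problem is visible even here: a human who schedules posts with a publishing tool would trip this check, which is why real systems combine many weak signals rather than relying on any single one.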
The Erosion of Trust in Online Interactions
Fake Accounts: The prevalence of fake accounts undermines trust. When we can’t be sure whether an online entity is human or automated, skepticism grows.
Disinformation Campaigns: Bots spread fabricated stories, shaping public perception. Trust in news sources and social media platforms erodes as a result.
Social Fragmentation: When trust declines, online communities fracture. People become less willing to engage in open dialogue.
The Distortion of Public Discourse and Debate
Amplification of Extreme Views: Bots often amplify extreme opinions, drowning out moderate voices. This polarization hinders constructive discussion.
Echo Chambers: Bots contribute to echo chambers, where people are exposed only to information that confirms their existing beliefs. This stifles critical thinking and debate.
Manipulated Trends: Bots can artificially inflate trends, making certain topics appear more significant than they truly are. This skews public discourse.
The Undermining of Democratic Processes and Informed Decision-Making
Election Interference: Bots can sway public opinion during elections by spreading propaganda or discrediting candidates. This undermines the democratic process.
Misleading Information: Bots flood social media with misleading content, making it harder for voters to make informed choices.
Confirmation Bias: Bots reinforce existing biases, leading people to seek out information that aligns with their preconceptions.
The Monetization of Bot-Driven Engagement
Clickbait and Ad Revenue: Bots generate clicks, views, and engagement, which benefit advertisers. Some platforms inadvertently profit from bot-driven interactions.
Inflated Metrics: Metrics like likes, shares, and followers lose meaning when bots artificially inflate them. This affects marketing decisions and resource allocation.
Ethical Conundrums: Platforms face questions about whether to prioritize user experience or financial gain.
The Reinforcement of Biases and the Formation of Online Identities
Identity Construction: Bots shape online identities by influencing what users see and interact with. This affects how people perceive themselves and others.
Algorithmic Bias: Bots perpetuate biases present in algorithms, leading to discriminatory outcomes.
📔Glossary🔖
Echo Chamber: A situation where individuals are only exposed to information and opinions that align with their existing beliefs, further reinforced by filter bubbles.
Filter Bubbles: A situation where algorithms personalize content based on a user's preferences, limiting exposure to diverse perspectives and creating an echo chamber of similar viewpoints.
Dead Internet Theory: A conspiracy theory suggesting that much of the online content is bot-dominated and algorithm-driven, sidelining genuine human engagement.
Disclaimer: The insights provided are based on current information and may evolve due to external factors. Some figures cited are projections derived from analyzed data rather than direct measurements.