Social Media Fuels Division and Angst – But Solving the Underlying Issues at Play Is Hugely Complex

Despite various studies – and counter-studies, funded largely by the networks themselves – social media remains an extremely problematic vehicle for divisive news and harmful movements.

But its influence is often misunderstood, or its elements are conflated, for various reasons, obscuring the facts. The real influence of social media isn't necessarily down to algorithms or reinforcement as focal elements. The greatest harm comes from the connection itself – the ability to tap into the thoughts of people you know, something that simply wasn't possible in the past.

Here’s an example – let’s say you’re fully vaccinated against COVID, you have complete confidence in the science, and you did what health officials recommended, with no problems and no concerns about the process. But then you see a post from an old friend – let’s call him “Dave” – in which Dave expresses his concerns about the vaccine and why he’s reluctant to get it.

You may not have spoken to Dave in years, but you like him, you respect his opinion. Suddenly this is not a faceless, nameless activist who’s easy to dismiss – this is someone you know, and you wonder whether there’s maybe more behind the anti-vax push than you thought. Dave has never seemed stupid or gullible, so maybe you should take a closer look.

You do – you read the links Dave posted, you read posts and articles, you might even search a few groups to understand them better. You may even start commenting on anti-vax articles. All of this tells Facebook’s algorithms that you’re interested in the topic, and you find yourself engaging with similar posts more and more often. The recommendations in your feed start to change, you get deeper into the subject, and all of this pushes you further toward one side or the other of the debate that drives division.

But it didn’t start with the algorithm – which is a central element of Meta’s counter-arguments. It started with Dave, someone you know, who posted an opinion that piqued your interest.

It’s for this reason that broader campaigns aimed at manipulating public opinion are such a concern. The disruption campaigns orchestrated by Russia’s Internet Research Agency in the run-up to the 2016 US election are the most public example, but similar pushes are happening all the time. Reports surfaced last week that the Indian government has run bot-powered brute-force campaigns on social networks to “flood the zone” and drown out public debate on specific topics by trending alternative topics on Facebook and Twitter. Many NFT and crypto projects are now trying to capitalize on the broader hype by using Twitter bots to make their offerings appear more popular and reputable than they are.

Most people today are naturally more and more suspicious of such pushes, and are more likely to question what they see online. But, similar to the classic Nigerian email scams, it only takes a very small number of people to take the bait for the whole effort to be worthwhile. The labor costs are low, and the process can be largely automated. And a few Daves can have a huge impact on public discourse.

The motives for these campaigns are complex. In the Indian government’s case, it’s about controlling public discourse and quelling possible dissent, while for scammers it’s about money. There are many reasons such pushes are made, but there’s no question that social media has provided a valuable, workable vector for these efforts.

But the counter-arguments are selective. Meta says that political content makes up only a small fraction of all the material shared on Facebook. That may be true, but it only applies to shared articles, not personal posts and group discussions. Meta also says that divisive content is actually bad for business, as CEO Mark Zuckerberg explains:

We make money off of ads and advertisers keep telling us that they don’t want their ads to be next to harmful or angry content. And I don’t know of any technology company that aims to develop products that make people angry or depressed. The moral, business, and product incentives all point in the opposite direction.

At the same time, Meta’s own research has also shown the power of Facebook in influencing public opinion, particularly in a political context.

In 2010, around 340,000 additional voters took part in the US Congressional election because of a single election-day message promoted by Facebook.

According to the study:

“About 611,000 users (1%) received an ‘informational message’ at the top of their news feeds encouraging them to vote, providing a link to information about local polling stations, a clickable ‘I voted’ button, and a counter of Facebook users who had clicked it. About 60 million users (98%) received a ‘social message’ that contained the same elements, but also showed the profile pictures of up to six randomly selected Facebook friends who had clicked the ‘I voted’ button. The remaining 1% of users were assigned to a control group that did not receive a message.”
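The group split described in the study is a standard randomized assignment. As a rough illustration of the design (a hypothetical reconstruction, not the study's actual code), it can be sketched as:

```python
import random

def assign_group(user_id, seed=42):
    """Deterministically assign a user to one of the study's three arms:
    ~98% social message, ~1% informational message, ~1% control."""
    rng = random.Random(seed * 1_000_003 + user_id)  # per-user deterministic draw
    r = rng.random()
    if r < 0.98:
        return "social"         # message plus friends' "I voted" pictures
    elif r < 0.99:
        return "informational"  # message and button only, no friend photos
    return "control"            # no message at all

# Roughly reproduce the study's proportions over a simulated population.
counts = {"social": 0, "informational": 0, "control": 0}
for uid in range(100_000):
    counts[assign_group(uid)] += 1
print(counts)  # roughly 98,000 / 1,000 / 1,000
```

The key design point is that the two message variants differ only in the friend photos, so any difference in turnout between them isolates the peer effect.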

[Image: Facebook’s election day message]

The results showed that those who saw the second message, with pictures of their connections, were more likely to vote – ultimately leading to 340,000 more people voting off the back of that peer nudge. And that was a relatively small test for Facebook, at 60 million users; the platform now reaches some 3 billion monthly actives around the world.

Based on Facebook’s own evidence, it’s clear that the platform has significant influential capacity through peer insights and person-to-person sharing.

So it’s not Facebook specifically, nor the infamous News Feed algorithm, that is the main culprit in this process. It’s people, and what people share. Meta CEO Mark Zuckerberg has repeatedly pointed this out:

Yes, we have major differences of opinion, perhaps more than ever in recent history. But that’s partly because we bring our issues to the table – issues that haven’t been discussed for a long time. More people from more parts of our society have a voice than ever before, and it will take time to hear those voices and put them together into a coherent narrative.

Contrary to the claim that it creates more problems, Meta sees Facebook as a vehicle for real social change – that we can reach a better understanding through freedom of expression, and that such a platform should, theoretically, ensure better representation and connection for everyone.

Which may be true, from an optimistic standpoint. But nonetheless, the capacity for bad actors to influence these divided opinions is just as significant, and just as often it’s those voices that are amplified through your network connections.

So what can be done beyond what Meta’s enforcement and moderation teams are already working on?

Well, probably not much. Detecting repetitive text in posts could help, and platforms are already doing this in various ways. Limiting sharing on certain topics could have an impact too, but the best way forward is likely what Meta is already working on: identifying the originators of such content, and removing the networks that amplify questionable material.
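As a rough illustration of what “detecting repetitive text” can mean in practice, here’s a minimal sketch (hypothetical function names and thresholds, not any platform’s actual system) that flags near-duplicate posts by comparing overlapping word “shingles”:

```python
def shingles(text, k=3):
    """Break a post into overlapping k-word shingles (lowercased)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_near_duplicates(posts, threshold=0.8):
    """Return index pairs of posts whose shingle overlap exceeds the threshold."""
    sets = [shingles(p) for p in posts]
    pairs = []
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            if jaccard(sets[i], sets[j]) >= threshold:
                pairs.append((i, j))
    return pairs

posts = [
    "The vaccine is dangerous and the media is hiding it from you",
    "The vaccine is dangerous and the media is hiding it from us",
    "I got my second dose today and feel fine",
]
print(flag_near_duplicates(posts))  # → [(0, 1)] – the first two posts are near-identical
```

Real systems operate at a vastly different scale (techniques like MinHash and locality-sensitive hashing replace the pairwise comparison here), but the underlying idea – scoring textual overlap between posts to spot coordinated reposting – is the same.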

Would removing the algorithm work?

Could be. Whistleblower Frances Haugen has pointed to the News Feed algorithm, and its primary focus on maximizing engagement, as a key issue, because the system is effectively designed to amplify content that provokes argument.

This is definitely problematic in some applications, but would it stop Dave from sharing his thoughts on a topic? No, it wouldn’t – and there’s also nothing to stop the Daves of the world from getting their information from questionable sources in the first place, as highlighted above. But social media platforms and their algorithms facilitate both: they accelerate the process and offer entirely new ways of sharing.

There are various measures that could be taken, but the effectiveness of each is highly questionable. Because much of this is not a social media problem, but a people problem, as Meta says. The issue is that we now have access to everyone else’s thoughts – and with some of them, we will disagree.

In the past, we could carry on without ever being aware of those differences. In the age of social media, that’s no longer an option.

Will that, ultimately, lead us to the more understanding, integrated, and civil society Zuckerberg describes? The results so far suggest we have a way to go.
