It's interesting to see how each social app is approaching political content as we enter what promises to be a very raucous election season.
In addition to the US election on November 5th, India is voting this year in May, the European Union in June, the UK, South Africa, and dozens more.
And with all of this happening, among nations that see very high social media usage, each platform is looking to either reduce or expand the presence of political debate within their apps. Which could have a pretty significant impact on voting outcomes, depending on how things actually play out.
First off, you have Meta, which is now taking definitive steps to reduce the presence of political content in its apps.
Last week, Meta said it would shut down the Facebook News project and terminate all of its agreements with local news publishers, in its latest effort to reduce news content, and the conversation around it, on Facebook and Instagram.
That effort has actually been going on for a while. In 2021, in the wake of the Capitol riots, Meta CEO Mark Zuckerberg noted that one of the most common pieces of feedback the company had been getting from its users was that they no longer wanted to see divisive political content in their feeds.
Only after years of angst-ridden posts, and the tension they raised between friends and family, did Meta conclude that the negative associations weren't worth the engagement benefits. That set the Meta team on a mission to reduce political content however it could, which has since seen the company lean further into AI-recommended content, mostly short-form video clips, which have become a far bigger element of user feeds.
In fact, in Q4 last year, AI-based recommendations made up 40% of the content that people saw on Instagram, with that figure coming in slightly lower, at 35%, on Facebook. The result? Over the past year, Meta has seen a 7% increase in time spent on Facebook, and a 6% increase on IG.
News content, by comparison, now makes up less than 3% of what people see in their Facebook feeds, and that share is still shrinking.
Given that, plus the reputational harm that news and its related political debate has caused Meta in the past, the company is now stepping away from news entirely, in favor of a lighter, entertainment-based content approach.
And that's before you consider the huge losses that news and political debate have cost Meta over recent years.
Cumulatively, and in particular due to the Cambridge Analytica scandal, in which, according to media reports, political operatives harvested Facebook user data to fuel voter influence operations, Meta has paid nearly $6 billion in direct costs, between an FTC fine, a settlement over the "data breach", and fines from the US Securities and Exchange Commission.
But the reputational hit may have been even deeper. Last year, Meta said that Apple's new opt-in data tracking prompts would cost its ad business over $10 billion in that year alone, and that too can be traced back to the erosion of trust in the company following the Cambridge Analytica fiasco.
On balance, then, it makes sense for Meta to get out of news and politics where it can. But what will that ultimately mean for voters?
According to Pew Research, 30% of Americans regularly get at least some of their news from Facebook, and with the platform deliberately moving away from news, that has to have an impact.
The end result is that Meta can hold its hands high and proclaim that it played no role in the outcome of any election, whatever that outcome may be, which may well help it avoid the same kind of negative headlines. But is that good for democracy, and will it leave the public less informed?
The platform once known as Twitter, meanwhile, is doubling down on political discourse, with X owner Elon Musk using the service as his personal bullhorn to sound alarms on whatever issue concerns him day-to-day.
Musk frequently posts about immigration into the U.S., the war in Ukraine, drug policy, corporate governance concerns, and the perceived decline of various U.S. cities. He also constantly points the finger at politicians, from the President down, and as the most followed person in the app, who has also reportedly tilted the algorithm in favor of showing his posts to more people, he alone holds significant influence over the general discussion among X users.
At the same time, X's more free-speech-aligned content moderation approach, under which it relies ever more heavily on its crowd-sourced "Community Notes" to police misinformation, has left it open to greater manipulation, which may further skew political discussion in the app.
But that, seemingly, is the way Elon wants it, given his declared view that people should see all opinions, no matter how wrong or uninformed, in order to decide for themselves what they do and don't believe.
Which seems to ignore the harms that such an approach has caused in the past. Nevertheless, Musk believes that no one should be censored, and that all opinions should be examined on their merits.
Which then ratchets up political debate within the app. And with Meta looking to reduce political and news content, X may well end up becoming the social app of choice for political debate.
Is that a good thing?
Theoretically, as noted, X's Community Notes system ought to let "the people" vote on which claims deserve correction, or a "note", in the app. But Community Notes are only displayed on posts after contributors of opposing political persuasions agree that a note is necessary, which, for the most contentious, ideologically divided debates, is never going to happen. For a lot of claims, then, X is effectively enabling the dissemination of misinformation.
Indeed, studies so far suggest that Community Notes have done little to reduce engagement with misinformation in the app.
And with reports that coordinated groups are already active within the Community Notes system, working to amplify or quash notes that run against their agendas, and that some of those groups may be operating on behalf of foreign governments, X appears to offer little protection against voter manipulation heading into the election period.
Which could distort political debate and, subsequently, voter outcomes, even as Musk himself tries to steer voters toward Republican candidates.
X has far less direct reach than Facebook: Pew Research data indicates that 12% of U.S. adults regularly get news content in the app. But X/Twitter has always had an outsized impact on the broader discussion, given its usage among the most fervent newshounds and reporters, who draw much of their information from the app, then distribute it elsewhere.
That's why Donald Trump was able to use Twitter to such great effect, and likely why Elon was so attracted to it.
The likely result is that more voters will be influenced by misinformation via the app, with some of the most divisive, angst-inducing claims already originating from X posts.
Will that influence the outcomes of elections? Probably, yes, and with nothing from Meta to counter it, that's a pretty big deal.
In the final reckoning, then, Meta seems more concerned with the interests of its business than with its role, or lack thereof, in politics. Which again makes sense when weighed on a cost-benefit basis for the company. The concern, however, is that unfounded, X-sourced conspiracies will infect the minds of enough voters to sway the outcome of each poll, which could cause considerably more damage in the long term.
Of course, it's not on Meta to act as the arbiter here, and it's worth noting, too, that TikTok is in an equally vexed position, given its alleged ties to the Chinese Government, and how those may connect back to what users see in the app.
But it is a potentially ominous situation heading into the various polls, with X's "free for all" approach looking a lot more like the run-up to the 2016 election than the lessons learned from it.
The worst part is that, seemingly, nothing can be done about this, and all of the analysis and attribution will happen only in retrospect.
And by then, for too many people, it will already be too late.