X is rolling out some new tweaks to its Community Notes user moderation process, including an updated alerts feed designed to prompt more note ratings, and changes to the scoring algorithm that aim to improve note stability.
First off, on note rating. The Community Notes system relies on input from contributors for every note that's appended to a post in the app. For each note, X invites Community Notes participants to rate the information presented, and only once a note has received enough positive ratings, from people on both sides of the political spectrum, is it displayed to all users in the app.
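To illustrate that consensus requirement, here's a minimal sketch of how a "bridging" check might work. The group labels, thresholds, and function below are invented for illustration; X's actual open-source scorer infers rater viewpoints statistically (via matrix factorization) rather than using explicit labels like these.

```python
from collections import defaultdict

# Toy "bridging" consensus check: a note is shown only if raters from
# *each* viewpoint group found it helpful often enough. All names and
# thresholds here are hypothetical, not X's published algorithm.

MIN_RATINGS = 5          # minimum total ratings before a note is eligible
MIN_GROUP_HELPFUL = 0.6  # helpful ratio required within every group

def should_display(ratings):
    """ratings: list of (group, helpful) tuples, e.g. ("left", True)."""
    if len(ratings) < MIN_RATINGS:
        return False
    tallies = defaultdict(lambda: [0, 0])  # group -> [helpful, total]
    for group, helpful in ratings:
        tallies[group][0] += int(helpful)
        tallies[group][1] += 1
    if len(tallies) < 2:  # need raters from more than one viewpoint
        return False
    return all(h / t >= MIN_GROUP_HELPFUL for h, t in tallies.values())

print(should_display([("left", True), ("left", True), ("right", True),
                      ("right", True), ("right", False)]))  # True
```

The key property is that a flood of helpful ratings from one side alone can never get a note shown; agreement has to span the divide.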
As part of this update, X is looking to surface more of the notes that still need to be rated, in a concerted effort to gather more contributor feedback on them.
As such, Community Notes contributors will now see an updated "Needs Your Help" timeline in the Community Notes section, highlighting newly added notes that need more ratings.
That should see more notes checked faster, potentially leading to more of them being shown in the app.
X has also improved its scoring algorithm so that fewer notes are displayed and then subsequently removed from the app.
Because Community Notes are displayed based on consensus (or the lack of it), the ratings for and against each note can shift substantially over time, so some posts get "noted", only for that note to disappear as more ratings change the balance.
This was especially evident on Elon Musk's recent post about potentially removing the blocking option in the app.
X says that this new approach will reduce the number of notes that are shown in the app but increase the likelihood that displayed notes will stick, even as more ratings come in.
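X hasn't detailed the exact scoring change, but one common way to make displayed notes "stickier" is hysteresis: requiring a higher score to start showing a note than to keep showing it. The thresholds and function below are purely hypothetical, a sketch of the general idea rather than X's implementation.

```python
# Hypothetical hysteresis on a note's helpfulness score: a high bar to
# *start* displaying a note, a lower bar to *keep* displaying it.
# Both thresholds are invented for illustration.

SHOW_THRESHOLD = 0.70  # score needed to begin displaying a note
HIDE_THRESHOLD = 0.55  # score below which a displayed note is removed

def next_state(currently_shown: bool, score: float) -> bool:
    if currently_shown:
        return score >= HIDE_THRESHOLD  # keep unless it drops well down
    return score >= SHOW_THRESHOLD      # high bar to start showing

# A score wobbling between the two thresholds no longer flip-flops:
shown = False
for score in [0.72, 0.64, 0.58, 0.68]:
    shown = next_state(shown, score)
    print(score, shown)  # stays True after the first crossing
```

Under a scheme like this, borderline notes are shown less often in the first place, but the ones that do appear survive the normal back-and-forth of incoming ratings.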
Amongst all the changes Elon Musk has been making at X, the app formerly known as Twitter, Community Notes seems to be the one with the most hope for real adoption, and for providing an alternative solution to the platform's moderation concerns.
For years, social apps have been trying to work out how to better align with community expectations around moderation, while limiting how often they need to step in to make the rules and decisions themselves. Despite some of the more recent accusations, none of them actually want to interfere, because more speech, and more divisive speech in particular, is good for business: it drives more intense discussion, more debate, and more engagement in their apps.
Meta CEO Mark Zuckerberg has repeatedly highlighted the importance of free expression (whether or not we want to hear what's being said), while Elon Musk has long been a free speech advocate, at least in the forms that don't affect him personally.
Free speech isn't just a convenient moral foundation for these platforms to take a stand on; it's also better for their bottom line. Logically, then, the platforms will gravitate towards any solution that lets them minimize their interference, which Community Notes could facilitate.
The other viable community-led moderation approach is up and downvotes, which virtually every platform has experimented with.
This model has been a great success for Reddit, yet at the same time, Reddit is also struggling with the problems of overreliance on user moderation as it strives to maximize its business opportunities.
Eventually, Reddit may be forced to hire more moderators to get the job done, as volunteers turn against the platform's decisions, while also seeking their cut of ad revenue, given that they're the ones doing the moderation work.
But on a very basic level, up and downvotes do create an instantaneous audience reaction to content on the platform, which can also serve as a tool of moderation, however imperfect that might be.
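As a simple illustration of votes doubling as moderation, a platform can collapse or hide content once its net score falls below some floor, similar in spirit to how downvoted Reddit comments get collapsed. The threshold here is hypothetical.

```python
# Hypothetical vote-based moderation: content whose net score drops
# below a floor is collapsed rather than removed outright.
COLLAPSE_BELOW = -5  # invented threshold for illustration

def is_collapsed(upvotes: int, downvotes: int) -> bool:
    return (upvotes - downvotes) < COLLAPSE_BELOW

print(is_collapsed(upvotes=2, downvotes=10))  # True: hidden by the crowd
print(is_collapsed(upvotes=8, downvotes=6))   # False: still visible
```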
Community Notes is even more directly aligned with moderation calls, ensuring a good spread of user viewpoints is reflected in the results. But it has its limitations too, most notably around how divisive issues are addressed, given that broad agreement must be reached before a Community Note appears.
On some of the most controversial topics, that agreement will never be reached, which means Community Notes will never be shown on those posts, putting the onus back on X's human moderation team once again. But more broadly accepted violations, like fake images, scam ads, and false representation, are already being debunked and highlighted by Community Notes.
That shows promise, and it could be that X will eventually refine the system to a point where it's a genuine solution for many content concerns.
It's not there yet, but it's interesting to see how Community Notes is evolving and where it can be helpful in addressing these elements.