In theory, this is a good move; in practice, it may end up doing the opposite.
Today, X owner Elon Musk announced a change to X's ad revenue share program that's designed to stop users from sharing sensationalized, divisive, and even fake reports that aim to provoke as many reactions as possible, in the hope of boosting their monetization potential.
X's creator ad revenue share program enables X Premium subscribers to monetize their posts, with each receiving a share of the revenue from ads displayed in the reply streams of their posts. However, only ads served to other X Premium subscribers count toward that total.
And because relatively few X users actually subscribe to X Premium, and those who do largely align with Elon Musk's ideological position, the best way to maximize your income from the program is to align your posts with the issues that matter most to that audience, which, to a large degree, can be gleaned from Musk's own posts.
If Elon says it's important, his many supporters will pay attention, which means that posting about, say, the war in Iran will increase your chances of sparking more replies, and thus boost your monetization potential.
Ironically, Elon himself has provided an example of the type of post that would now be ineligible for monetization, with that post having since been labeled as misinformation. Or it had been, via a Community Note, but that note has since been removed, likely because Elon's supporters voted it down within the Notes system.
Which points to a fuzzy aspect of this new amendment: does it apply to all posts that receive a Community Note, or only to posts where a note actually reaches public display, based on consensus within the Notes community?
And what if a Note does get approved, and is displayed in the app, but then gets voted down again, as per the above example?
It's pretty vague, but Elon did note that "any attempts to weaponize @CommunityNotes to demonetize people will be immediately obvious, because all code and data is open source."
Which is a pretty standard refrain from Musk: the system will simply correct itself through transparency.
Which probably won't work in practice, because the Community Notes system itself has already been manipulated by groups that coordinate to approve or reject notes in line with their own interests.
Community Notes are approved or rejected based on consensus among Community Notes contributors, with an emphasis on notes gaining approval from raters who hold opposing political perspectives, as inferred via a somewhat opaque formula tied to their in-app activity.
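For context, the open-source approach behind Community Notes scores each note by how much "helpful" agreement it attracts beyond what raters' shared viewpoints would predict, rather than by a simple vote count. The sketch below is a deliberately simplified, illustrative model of that bridging idea; the simulated data, model dimensions, learning rate, and variable names are all assumptions made for demonstration, not X's production algorithm or values.

```python
import numpy as np

# A toy, illustrative sketch of "bridging-based" note scoring in the spirit of
# the open-source Community Notes approach: each rating is modeled as
#   rating ≈ global_mean + rater_intercept + note_intercept + rater_factor * note_factor
# so that agreement explained by shared viewpoint is absorbed by the factor term,
# and only the note intercept (agreement across viewpoints) counts in the
# note's favor. All data here is simulated, and the dimensions, learning rate,
# and regularization are arbitrary choices for demonstration, not X's values.

rng = np.random.default_rng(42)
n_raters, n_notes = 300, 40

# Two simulated viewpoint camps, and two kinds of notes: "bridging" notes that
# both camps tend to rate helpful, and "partisan" notes that only one camp does.
rater_view = rng.choice([-1.0, 1.0], size=n_raters)
note_kind = rng.choice(["partisan", "bridging"], size=n_notes)
note_side = rng.choice([-1.0, 1.0], size=n_notes)

rows, cols, vals = [], [], []
for u in range(n_raters):
    for n in rng.choice(n_notes, size=10, replace=False):  # each rater rates 10 notes
        if note_kind[n] == "bridging":
            helpful = rng.random() < 0.8                    # broad agreement
        else:
            same_side = rater_view[u] == note_side[n]
            helpful = rng.random() < (0.9 if same_side else 0.1)
        rows.append(u); cols.append(int(n)); vals.append(1.0 if helpful else 0.0)

# Fit the factor model with plain stochastic gradient descent.
mu = float(np.mean(vals))
b_u, b_n = np.zeros(n_raters), np.zeros(n_notes)
f_u, f_n = rng.normal(0, 0.1, n_raters), rng.normal(0, 0.1, n_notes)
lr, reg = 0.05, 0.03

for _ in range(60):
    for u, n, r in zip(rows, cols, vals):
        err = r - (mu + b_u[u] + b_n[n] + f_u[u] * f_n[n])
        b_u[u] += lr * (err - reg * b_u[u])
        b_n[n] += lr * (err - reg * b_n[n])
        fu, fn = f_u[u], f_n[n]
        f_u[u] += lr * (err * fn - reg * fu)
        f_n[n] += lr * (err * fu - reg * fn)

# Notes that draw "helpful" ratings from both camps should end up with higher
# intercepts than notes rated helpful by only one side.
for kind in ("bridging", "partisan"):
    mask = note_kind == kind
    print(f"mean note intercept, {kind} notes: {b_n[mask].mean():+.3f}")
```

As the public documentation describes it, a threshold on this kind of note intercept, rather than a raw vote tally, determines whether a note is displayed, which is why coordinated same-side voting is supposed to be less effective than it would be in a straight poll.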
Though as Wired recently reported, some Community Notes contributors have been approved across multiple accounts, effectively enabling them to double- or triple-vote in support of their own notes, while groups of contributors have also banded together to "actively coordinate on a daily basis to upvote or downvote particular notes."
We don't know the full scope of such manipulation yet, but given how low the bar is to become a Community Notes contributor, and given that notes genuinely influence how information is perceived in the app, you can bet that all sorts of groups are targeting the tool as part of their broader communication and propaganda efforts.
While this new amendment might have some impact in reducing the spread of misinformation posted to maximize creator ad revenue potential, it's unlikely to stop the manipulation of the Community Notes system itself. And it could, conversely, be used as a means to conduct coordinated attacks against opposing perspectives, despite Musk's assurance that such attempts would be "immediately obvious."
X has placed a lot of faith in the Community Notes system as a means of combating misinformation in the app, though various reports have suggested that it's not up to the task, and will never eradicate lies and abuse in X posts, no matter how much Musk and Co. wish that were the case.
But it basically has to now, because as part of its cost-cutting efforts, X has effectively disbanded, or dramatically reduced, the internal teams tasked with content moderation, which means the platform is now largely relying on Community Notes as the key filter on the content that users see in the app.
And with the vast majority of Community Notes never being displayed in the app, partly due to internal disputes over what is and isn't true, it doesn't seem like a system that's capable of slowing the spread of misinformation to any great degree.
But maybe this new proviso will disincentivize at least some X users from posting sensationalized content for engagement.
That is to say, it hasn't stopped the app's most prominent user, but sure, maybe it'll work for everybody else.