That may be something to consider for anyone looking to get the most mileage out of Threads.
After multiple attempts, though, many have learned that posting questions tends to be the most effective way to spark engagement on Threads, and with it, reach. That shouldn't come as a surprise, as this type of interaction is exactly what social platform algorithms reward. But many Threads users have taken this a step further, intentionally posing inflammatory or divisive questions with the goal of provoking a response.
Business Insider reporter Katie Notopoulos conducted a whole experiment in "rage baiting," posting these kinds of questions to see which would get the most responses.
And it worked. As you can see, this post alone generated over 3,000 replies, and over time, Notopoulos built a fairly strong presence on Threads off the back of people replying to her engineered posts.
With politics largely ruled out, provocative prompts like this are the next best way to stir emotional responses, the new holy grail for maximizing comments. Indeed, research has shown that posts that incite anger, fear and/or joy are the strongest drivers of user engagement.
So it stands to reason that engagement farmers on Threads would lean into this. But today, Threads chief Adam Mosseri said that the team is aware that this is a problem, and is working to fix it.
But the creator isn't really asking a question; it's an engagement tactic, and now, Mosseri and Co. are going to try to re-jig the Threads algorithm to penalize such posts.
Which will be hard.
Because Threads, of course, wants comments and interaction, and it's good for the platform to facilitate them. It just needs to ensure that the engagement is genuine, or it risks flooding people's feeds with junk posts that turn them off.
But how do you separate the wheat from the chaff in this process, and identify which posts are "rage bait" and which are genuine queries?
Detecting AI-generated images is one avenue, but again, Meta is nudging users toward greater use of generative AI, so cracking down there doesn't align with its broader strategy.
Meta's AI systems are constantly evolving, and Meta wants more natural questions being asked within its apps, because it can then use those answers to build more human-like responses to common queries in its chatbot.
So more questions is a good thing, but Meta somehow wants to dilute the bait, while still hooking the fish.
Without manual intervention, that's going to be a pretty tough problem to solve, and maybe that will end up being the answer: having Meta's moderators check in on rapidly trending posts and downgrade them if they're obvious junk.
But it's also worth noting: if you're looking for ways to get more value out of your Threads presence, Meta will, in some form, be penalizing engagement bait like this.