Meta is looking to give creators more leeway on penalties, via a new system in which creators who commit a first-time violation on Facebook will be able to complete an educational process about the specific policy in question in order to have the warning removed from their record.
As explained by Meta:
"Now, when a creator violates our Community Standards for the first time, they will receive a notification to complete an in-app educational training about the policy they violated. Upon completion, their warning will be removed from their record and if they avoid another violation for one year, they can participate in the "remove your warning" experience again."
That's essentially the same process that YouTube rolled out last year, whereby first-time violators of its Community Guidelines can complete a training course to avoid a channel strike.
Egregious violations, however, will still result in an immediate penalty in all cases.
"Posting content that includes sexual exploitation, the sale of high-risk drugs, or glorification of dangerous organizations and individuals are ineligible for warning removal. We will still remove content when it violates our policies."
So it's not a policy change, per se, but a change in enforcement, giving those who commit lesser rule violations a means to learn from what could be an honest mistake, as opposed to punishing them with restrictions.
Though if you incur successive violations within a 12-month period, you'll still face account penalties, even if you complete these courses.
The alternative affords more freedom for creators, and puts the focus on education, clarity, and understanding, rather than heavy-handed enforcement. Indeed, one of the key recommendations from Meta's independent Oversight Board has been that Meta provide more clarity and explanation around why profile penalties are being enacted.
Much of the time, penalties come down to misunderstandings, especially where the rules themselves are less clear-cut. As the Oversight Board describes it:
"People often say that Meta removes posts that highlight hate speech for purposes of condemnation, mockery or warning because automated systems – and sometimes human reviewers – cannot differentiate these types of posts from hate speech itself. To deal with the situation, we sought Meta to create a user-friendly means for users to declare in their appeal that their post fell into one of those categories.".
In some respects, you can see how Facebook's more binary judgments of content could get it wrong. That's especially true as Meta becomes increasingly reliant on automated systems to help with detection.
Now, at least, you'll have some recourse if you cop a Facebook penalty, though you only get one per year. It's not a major change, but it could prove helpful in certain contexts.