The importance of this pivotal team, in three stages
In the last two decades, online platforms that let users interact and upload content for others to view have become integral to many people's lives and a benefit to society. In recent years, moderation of social and communication products has become highly relevant and in high demand. Our own services (iFunny, iDaPrikol, WHLSM, and ABPV) were created to bring people closer and let them share content and make new friends.
However, companies did not design these platforms with full content pre-moderation, news verification, or the removal of potentially malicious humor in mind. But that is beside the point here: we do not want to raise the question of how others cope with moderation. Instead, we want to dig a little deeper into how the process is organized, using our products as an example, and, most importantly, to talk about the moderator's profession, how it can be dangerous for the employees who hold it, and why we need a corporate psychologist.
In our company, the moderation of user-generated content consists of three stages.
Stage 1. Pre-moderation with ML and AI algorithms.
All content goes through this automated scanning, and at this stage the simplest violations with obvious signs are caught and excluded: pornography, injuries, deaths, certain words and phrases, and so on.
Here everything seems simple and straightforward, but in reality it is not. To make the algorithms work and learn correctly, you need to feed them examples that are already labeled according to the rule being enforced. There are paid and free ready-made solutions, but most of them are hard to improve and retrain, so their success rate stays low, and then this stage makes no sense.
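The label-and-train loop described above can be sketched as a tiny naive Bayes text filter. Everything in this example is an illustrative assumption (the captions, the labels, the two-class split), not our actual pipeline, which works on far richer image and text signals:

```python
# A minimal sketch of stage-1 pre-moderation: a naive Bayes classifier
# trained from scratch on labeled captions. All training data below is
# hypothetical, chosen only to show why labeled examples are needed.
from collections import Counter
import math

def train(examples):
    """examples: iterable of (text, label) pairs, label in {"ok", "banned"}."""
    counts = {"ok": Counter(), "banned": Counter()}
    docs = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        docs[label] += 1
    return counts, docs

def classify(model, text):
    counts, docs = model
    vocab = len(set(counts["ok"]) | set(counts["banned"]))
    # log prior odds plus per-word log likelihood ratio (Laplace smoothing)
    score = math.log(docs["banned"] / docs["ok"])
    for word in text.lower().split():
        p_banned = (counts["banned"][word] + 1) / (sum(counts["banned"].values()) + vocab)
        p_ok = (counts["ok"][word] + 1) / (sum(counts["ok"].values()) + vocab)
        score += math.log(p_banned / p_ok)
    return "banned" if score > 0 else "ok"

# Hypothetical labeled examples of the kind employees have to mark up.
training = [
    ("funny cat meme", "ok"),
    ("wholesome dog picture", "ok"),
    ("graphic injury photo", "banned"),
    ("explicit adult content", "banned"),
]
model = train(training)
print(classify(model, "cute injury content"))  # → banned
```

The sketch also shows why an off-the-shelf model with a frozen training set stalls: without a steady stream of freshly labeled violations, the word counts never reflect new kinds of forbidden content.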
That is why employees still have to review thousands, even tens of thousands, of examples of forbidden content to do this job correctly. Unfortunately, companies often do not consider the impact of such work on employees' mental health. It is the same as with police officers, doctors, and forensic experts: not everyone is ready to do this kind of work, or to watch others do it. A candidate's willingness alone is not enough to hire them for the position, so we ask our corporate psychologist to talk to candidates and decide whether to recommend them for the job.
At this stage the risk is less critical, because interaction with such content is not systematic and cannot seriously harm the employee. Still, we give extra days off to compensate for the specifics of work in this area.
(“AI content moderation,” Credit: FunCorp)
Stage 2. Human pre-moderation of all uploaded content, performed by the "Customs" team.
This team does 99% of the manual work, interacts with the content systematically, and keeps inappropriate content out of the feeds.
The team has been with us since 2012 and has undergone many changes: from a group of two it has grown to 100 people, and the number of rules has increased from 24 in 2012 to 136 in 2020.
Today, we run three shifts rotating every 8 hours, every day. Over the last two years, we have developed working rules for this team that balance results against the mental health of employees:
- No more than four work shifts per week at the same wage. This means the team's employees get an extra day off each week, and they can arrange the schedule so that this day falls in the middle of the week: it is essential to take a break and rest.
- Mandatory clearance for work by the corporate psychologist after the interview.
- A monthly review with the manager and the corporate psychologist. If burnout is detected, the person gets unscheduled days off or a holiday.
(“Customs team,” Credit: FunCorp)
Thus, the moderation team has a social package comparable to, and in some respects better than, what our engineers have. For us as an employer, maintaining this balance is a priority.
The company's products are growing, and the moderation team is growing with them. Our goal is for employees to take away only positive experiences, and no mental health problems, from working with us.
Stage 3. Post-moderation and the "NSA".
A small team of 5–7 people reviews all user complaints, responds to them within 24 hours, and also searches the application for violations that were not caught in stages 1 and 2.
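The 24-hour response window can be sketched as a simple SLA check over a complaint queue. The field names and queue shape here are hypothetical, for illustration only:

```python
# A minimal sketch of tracking the stage-3 complaint SLA. The queue
# structure and field names are assumptions, not our real system.
from datetime import datetime, timedelta

SLA = timedelta(hours=24)  # response window stated in the text

def overdue(complaints, now):
    """Return ids of complaints whose 24-hour response window has lapsed."""
    return [c["id"] for c in complaints if now - c["filed_at"] > SLA]

queue = [
    {"id": 1, "filed_at": datetime(2020, 1, 1, 8, 0)},   # 26 hours old
    {"id": 2, "filed_at": datetime(2020, 1, 2, 9, 0)},   # 1 hour old
]
print(overdue(queue, datetime(2020, 1, 2, 10, 0)))  # → [1]
```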
For years, many have demanded that various internet platforms "do more" about content moderation. In response, large tech companies have hired thousands of content moderators. These moderators must perform a complex balancing act: follow policy, keep users safe, protect free speech online, and ensure the product still thrives in the marketplace. Doing so requires them to spend many hours immersed in that content.
Many assume that large tech companies can easily hide the worst parts of humanity that find their way onto the internet. But there is no easy solution to what is happening online.
What is happening online is a reflection of our society. Tech companies — and content moderators in particular — cannot magically fix the evil found within humanity, nor can we prevent it from finding its way online.
Can improvements be made? Certainly. But decision makers and the public need to understand what content moderation is, and the consequences of tinkering with it, before drawing conclusions or making demands.