> The problem is, manual moderating systems are also a no-go at scale. So what do you do?
Is it? I'd have imagined there's some ratio of 1 mod per N people for N not too small that would make things work. It might be more like 1:1K instead of the 1:1M (or whatever) that companies prefer, but is that genuinely unaffordable or is it just that companies don't like to pay for moderators?
I think your points are valid. But I'm not sure I see them as moderation issues. As I see it, there are two separate questions: what should be allowed, and how to enforce it adequately. The former is a policy issue (and likely the tougher one), whereas the latter is a moderation issue.
I 100% agree that moderation could be lightyears better.
However, I see this as being a bit like the BBC problem: believing there is such a thing as being objective, when there just isn't.
You can have a group of like-minded people who mostly agree and need a bit of moderation for when someone's external bad day leaks into their interactions on the platform. However, if there is an underlying fundamental difference of opinion or goals, then the moderators become just another weapon.
We need leaders, not moderators. When there's a genuine difference of opinion, moderators just end up punishing the weaker side one way or another. Leaders show by example what we can aspire to, which in turn guides moderators in how to apply the rules.
If you have global platforms, who are the leaders? What direction should they lead in that works for both China and the US?
Edit: to be clear, if you cannot afford enough moderators to stop child abuse or murder videos being posted, then shut your platform down, as it's not viable. I am more talking about how moderation gets weaponised, etc.
Because to truly moderate YouTube, you have to actually watch the videos, and YouTube says they get 500 hours of video uploaded every minute[1], so it's pretty easy to estimate the staffing it would take to watch each video once. They're basically getting 30,000 minutes of video every minute, so if they have 30,000 people watching, 24/7, it basically works. You need 4.2 forty-hour weeks to cover a whole week, so 126,000 people will do it. They also report 2 billion monthly users, so around a 1:16k moderator:user ratio would get full coverage of videos; plus another team for comments. And of course a team of an eighth of a million won't manage itself, you need an escalation path, and human moderators are not robots, so they won't (and shouldn't) spend their entire working time watching.
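The staffing arithmetic above can be sketched out explicitly. This assumes YouTube's published figures (500 hours uploaded per minute, 2 billion monthly users) and real-time 1:1 watching; everything else is back-of-envelope:

```python
# Back-of-envelope estimate: how many people to watch every YouTube upload once.

UPLOAD_HOURS_PER_MINUTE = 500                       # YouTube's reported upload rate
upload_min_per_min = UPLOAD_HOURS_PER_MINUTE * 60   # 30,000 minutes of video per minute

# One moderator watching in real time covers one minute of video per minute,
# so round-the-clock coverage needs 30,000 "seats" occupied at all times.
seats = upload_min_per_min

# A seat staffed 24/7 is 168 hours/week; a 40-hour work week covers 40 of
# them, so each seat needs 168 / 40 = 4.2 workers.
workers_per_seat = 168 / 40
total_workers = seats * workers_per_seat            # 126,000

monthly_users = 2_000_000_000
users_per_mod = monthly_users / total_workers       # ~15,873, i.e. roughly 1:16k

print(f"{total_workers:,.0f} moderators, about 1:{users_per_mod:,.0f}")
```

Note this is a floor: it ignores comments, management, escalation paths, and the fact that nobody can watch flagged content for a full shift.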
126,000 workers actually seems reasonable to find; the 2 million you'd need at a 1:1k ratio would be hard, I think. The pay is going to be bad, and the garbage you have to watch won't be worth it for most, but you still need a reasonably analytical person to decide whether things definitely follow the rules, definitely don't, or need to be escalated. And they need to be attentive, because videos can be totally normal and fine and then get abusive later.
The notion that adequate moderation requires watching every second of video at 1:1 speed is such a strawman on so many levels I don't even know how to respond.
Exactly what I was thinking. There are obviously significantly more effective ways to do things. You could probably just keep the automated systems in place, but have human moderators verify every video that the automated moderators flag (as you stated, obviously not by watching the entire video at 1:1 speed). That alone would be strictly better than how they are currently doing things, and probably not too difficult to pull off.
I was under the impression that bans were automated, and appeals were handled by humans. I think it would be better if ban “suggestions” were automated and actual bans always came from a human.
$20k annually is below the poverty line. I'm sure Google would love to get people to solve their moderation problem for so little money they can't actually afford to both eat and pay rent, but if we're making hypotheticals here, we should at least hypothesize 126,000 people who get to live like real human beings.
I'm sorry, I actually had £20k in mind and just changed it to $ without converting; £20k is about what an 'Accounts Administrator' earns (https://uk.jobted.com/salary). That's about $28k, which works out to about $3.5 billion for 126,000 people, or about 18% of their 2020 profits. Still very reasonable; most businesses have to put more than half their turnover into wages.
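Checking that wage-bill figure, assuming the 126,000 headcount from upthread and a rough 1.4 USD/GBP conversion (an assumption; fringe costs and management overhead ignored):

```python
workers = 126_000        # headcount estimated upthread
salary_gbp = 20_000      # typical 'Accounts Administrator' salary cited above
gbp_to_usd = 1.4         # assumed rough exchange rate

salary_usd = salary_gbp * gbp_to_usd   # about $28k
wage_bill = workers * salary_usd       # about $3.5 billion

print(f"${wage_bill / 1e9:.2f}B per year in salaries")
```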
The other thing I'd say is that this isn't low-skill work. If you get thousands of people, train them for a couple days, pay poverty wages and tell them to make each decision in less than 30 seconds, you're not going to get good results.
Facebook would need to hire around 2.489 million moderators to meet a 1:1k ratio. Even at 1:10k, you’re looking at increasing the size of the company six-fold.
I gave 1:1K as just a random number, I don't know how much it would actually need. I'd expect it to be much lower. Maybe like 1:50k or less.
What exactly is the problem you're pointing to? Are you saying Facebook couldn't afford to pay that many mods? Note that they don't have to be employees or in Facebook offices...