I define a radical group as any collection of people that consistently peddles false material for its own ends, with a provocative tilt that makes it easy to go viral. Some specific examples from the past year: vaccine misinformation, prejudice that incited racial violence in Myanmar, and organizing the occupation of the Capitol in the US. Shouldn't we put a stop to these communities if they sway people into peddling this information themselves?
At least in the US, free speech is most free within public forums. But even there we already define some speech as too dangerous when it's known to lead to poor outcomes. You can't yell fire in a crowded room, not because yelling fire is inherently a crime, but because it's going to incite people to detrimental ends.
Plus, social networks have no constitutional obligation to be town squares where free speech can spread unfettered. You have to draw a line somewhere. And as the people who write the algorithms that can amplify or bury content on these networks, I think it _is_ our obligation to at least set parameters on what constitutes healthy content interactions on these platforms and what doesn't. The ML algorithms have to optimize over some loss function.
Not that I disagree with you, but "can't yell fire in a crowded room" is slightly misconstrued, as those aren't the original words from the U.S. Supreme Court case. [0]
Additionally, the 'clear and present danger' standard has been modified in the 100 years since that case. The Supreme Court has since held:
"The government cannot punish inflammatory speech unless that speech is 'directed to inciting or producing imminent lawless action and is likely to incite or produce such action'". Whether the standard applies depends on the situation, namely whether the lawless action is imminent or merely "at some ambiguous future date".[1]