> if this level of moral policing we see from hysterical do-gooders in tech were around when the internet was first emerging.
Speaking as someone who was there: It was around, it’s just that it was social consequences that were the method of controlling bad actors.
The designers, and the mentality in general, were foolishly optimistic and utopian in their sensibilities.
It didn’t take long for abuse, spam, and bad actors to ruin so much. We lost more than a decade of tech ideas & communication due to those attitudes.
You still see it today in terrible UGC moderation policies that deter participation by those who are not bad actors.
So while I have sympathy for your view, and I do think there’s something to be said about black-box gatekeeping of AI, I’ve seen what happens when we do it your way: it leads to massive drains on productivity, and in many cases simply failure.
I don't think they were foolishly optimistic. Society was just literally higher trust back then, and various factors have eroded that over the decades in a way they probably wouldn't have predicted.
The very small, and reasonably tight-knit, communities that were online were higher trust.
And part of that was because there were far more potential real-world consequences. The networks were small enough that even by the time I got online in '93 or so, if you did something serious I'd be able to find a sysadmin at your school, workplace, or one of the few commercial ISPs, and get someone to take it on themselves to get personally involved in rectifying the issue.
That doesn't scale very well.
By the time I co-founded my first company - an ISP - in '95, it was already rapidly starting to break down, as more and more people came online with only a vague, impersonal commercial relationship to their network providers, and with enough options that consequences were rapidly diminishing.
High trust societies are so much more enjoyable and carefree. It’s too bad the culture now is to exploit everything so such places are going the way of the dodo.
It wasn’t so much higher trust as naïveté: the lack of direct experience & exposure to that world, the lack of understanding of just how easy it was to fool people, and the lack of widespread exposure & attack surfaces for bad actors.
Computing changed the scale. The problems were preexisting though.