Hacker News

Anyone else starting to feel uncomfortable with the rate of progress?


I'm not worried about unemployment, although that is a problem. I'm more worried about bad actors being able to flood the web (even more than it already is) with realistic-enough content that makes it utterly unusable and unreliable.

Imagine entire subreddits consisting of posts, comments, memes, and photos, and 100% of it is pro-[insert authoritarian regime] and it essentially only cost $1M to do it.


I can do all of this right now with a handful of FREE Google accounts and Colab.

The only part that could be improved with money is getting dedicated mobile modems for unique IP addresses, to evade spam detection.


Honestly, I’m worried that shocking pornographic depictions of every woman who has ever posted her face online are coming. AI’s first big splash in our society is going to be a traumatic sexual assault on all women.


How is drawing imaginary pictures sexual assault?


That's not assault, but it can definitely be used for harassment and other forms of social damage. I posted an excerpt from an article (on anonymous Telegram groups) a few days ago:

Filing charges is pointless, says Ezra. For two years she has been harassed on Telegram. It started when she was sixteen: photoshopped nudes made using photos from her Snapchat account were circulated. They had taken selfies from her social media, and those of her family, and combined them with porn fragments. She doesn't know the perpetrator, but that person goes to great lengths to ruin her. "Nowadays, the boys have so many ways to make it look real." Source: de Groene Amsterdammer, 146/33, p. 21.


If anything, widespread use and understanding of this technology will help with situations like these. Teenagers in 10 years would absolutely not be impressed with a nude picture that has not been somehow verified as legitimate.


That's entirely unfounded optimism, or, less politely put, sticking your head in the sand. Hasn't the printing press shown how easy it is to slander? Has the internet taught you nothing about misinformation?


And do you propose to stop either to deter misinformation? Or maybe the pros outweigh the cons?


In the same way that speech is now considered violence by an unfortunately large portion of the population, I guess.


Many seem to forget that Photoshop exists. People have been taking others' faces and overlaying them onto all sorts of images for years. Nothing about this is new, and it hasn't set society on fire.



Yep, this technology is super impressive, and there's a chance some tweak could turn it into something scary.

* Train a network on thousands of assembly instructions.

* Prompt: ‘some bad weapon of this size and material’.

* Result: simple instructions for how to build it.


It’s actually incapable of doing any of that, for now. Deep learning can’t generalize and it doesn’t understand plans or schematics.

Lots of smart people have been trying for years to give it capabilities anywhere close to what you’re describing, with no luck.

We’re safe for now.


Neural networks generalize, otherwise they would not be as powerful as they are today (and I don't know how you can deny that). If your neural network does not generalize, then the model is overfitted.


They don't generalize *well*. "Deep learning," as currently practiced, is very limited in its ability to generalize to out-of-distribution tasks. This is a major area of discussion in research.

The type of generalization necessary to do what the parent was describing (synthesizing schematics), for instance, is not currently possible.
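To make the in-distribution vs. out-of-distribution point concrete, here's a toy sketch in NumPy. It uses a high-degree polynomial as a stand-in for a flexible model (not deep learning itself): the fit looks great on held-out points inside the training range, then falls apart outside it. All names and the choice of sin as the target are just for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: noisy samples of sin(x) on [0, pi]
x_train = np.linspace(0, np.pi, 30)
y_train = np.sin(x_train) + rng.normal(0, 0.05, size=x_train.shape)

# Fit a degree-9 polynomial -- a stand-in for a flexible model
coeffs = np.polyfit(x_train, y_train, deg=9)
model = np.poly1d(coeffs)

# In-distribution: held-out points inside the training range
x_in = np.linspace(0.1, np.pi - 0.1, 20)
err_in = np.max(np.abs(model(x_in) - np.sin(x_in)))

# Out-of-distribution: points on [pi, 2*pi], outside the training range
x_out = np.linspace(np.pi, 2 * np.pi, 20)
err_out = np.max(np.abs(model(x_out) - np.sin(x_out)))

print(err_in, err_out)  # err_out is far larger than err_in
```

The model hasn't learned "sine"; it has learned a curve that matches the training region, which is roughly the sense in which current systems interpolate within their training distribution rather than extrapolating to genuinely new tasks.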




