
I never understood why it was any different, or why they attempted to compel others to 'dump everything they had publicly'. No one who comes across something as powerful as AGI is going to think: "Yeah, why don't I just publish everything about it?" A maturity step has seemingly occurred post-funding.


> To "avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power" what does one do with an idea or line of research that could potentially harm humanity or unduly concentrate power?

You don't publish it: see Pandora's box. You secure it. You demonstrate its capability, and from then on you focus public critical discussion with the creators of it, along with a panel of individuals who can probe areas of concern. Public discussion and welcomed inquiry; private and protected research and IP.

> The manipulation of social media by foreign actors armed with dumb-AI / automation was an obvious conclusion to many of us well before the Snowden leaks, but what could we do exactly?

Before foreign, there was domestic, and manipulation was a central concept at the inception of these platforms. Data collection and selling for profit is aimed at manipulation. What human beings could do is be honest with themselves and others for once and stop using their intellectual capacity to manipulate, dumb down, and screw others over for profit. You can't engage in negative foundations and expect positive outcomes. You can't manipulate the truth and information and pretend like it's going to benefit society. https://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt is a manipulative business tactic, and it remains at the heart of the 'safety' problem and the nonsensical doomsday AI scenarios. Certain well-funded groups and deep pockets have invested heavily in weak AI; it sits nicely with their legacy business models, and they're scared of a truly disruptive technology eating their lunch. So they concoct disinformation campaigns to steer resources/attention away from such dev groups and back into their coffers.

> I was privately concerned about the mass weaponization of autonomous devices via cyber attack for over a year and a half and got nowhere just emailing politicians or public safety departments.

Money and power are strong motivators for some. Everything else is secondary. Many human beings like to craftily use marketing/politics to convince people otherwise, but it's just a mask over their true intent. Engage your intellect and you can quickly filter through the b.s. to one's true motivations/intent... Hint: their actions, not their words, will be aligned accordingly.

> I've been told almost a dozen times that I should join a military or IR think tank but I don't want to do that. I just want someone else to vet the idea or research and pass it on to policy makers that will actually do something proactively.

Money & power. This is what dominates the world. The suggestion to join the appropriate groups, which recognize this and attempt to mitigate negative effects, is well founded.

> What is the responsible disclosure process for ideas and research around AI?

You don't publish it: see Pandora's box. You secure it. You demonstrate its capability, and from then on you focus public critical discussion with the creators of it, along with a panel of individuals who can probe areas of concern. Public discussion and welcomed inquiry; private and protected research and IP.

So, you make the world aware that something indeed exists because you created it. You open things up for public discussion so people can work through all the issues/concerns/etc. with the most knowledgeable group, the creators of it. The tech is privately secured, and development goes forward with the public commentary having been received. The End.

This is possibly the best format for things. Other approaches lend themselves to politics, b.s., and manipulation.


Learn to quote properly, please. Only include what is relevant to your reply - anyone can access the whole post if they want more context.


This is what always amazed me about the AI safety groups... and the mission statements for funding therein: there is nothing to suggest the minds and efforts capable of producing AGI will not have solved this at a fundamental level. Absolutely nothing... and after all, something of high intelligence operates within controlled and reasoned states. If someone declared they had created AGI, and when the door was opened you saw a robot manically trashing the place, you'd clearly laugh.


Agreed, but it is such lines that bring in the $$$ in the valley. At some point, humanity has to cut the fluff and games and get to the fundamentals so we can all rise to much greater heights of collective intelligence. It's very hard to do so when you're constantly bombarded with crafty, agenda-based language on a day-to-day basis.

> there is absolutely no indication whatsoever that OpenAI would credibly reach this (vague, underspecified) goal before any of the other serious contenders. Nor would competitors have any requirement to include OpenAI if and when they were getting close.

This is the clear reality. So why do certain human beings pretend otherwise? Why does this pretentious game garner the most funding? Why do human beings spend so much time manipulating and hyping things into over-stated valuations when it ultimately results in wasted resources, time, and potential for the collective?


No offense intended, but I don't think you and I are actually agreeing on anything. I can't really take your comments in this thread seriously.


That's fine. I don't hide who I am. I've discussed things on various accounts/mediums over the years and increasingly have no concern about concealing my identity. Maybe you can make accurate assumptions as to why this is the case.

In any event, speaking freely and openly in this manner helps put me at ease, knowing I at least got the information out in the open in its raw form. Whether or not you believe my framing, arguments, and commentary is up to you. Whether or not anyone inquires is up to them. The information was put out there, and that clears my conscience in a manner about the times ahead.


They [each individual development group] have power over their own funded development and work.

Anyone working on this problem sincerely values AI safety; it's a component of developing and securing the foundations of AGI. An out-of-control, unpredictable, and sloppy system is not intelligent or desired. Such a system would not be considered AGI or an achievement. So it is natural for any developer to identify issues and bring them under control early in development.

Suggestions that a consortium not centered on, or understanding of, the fundamental development occurring at another entity should have control/influence could possibly serve as the very danger that safety groups claim they are trying to avoid. On this matter, I suggest people stick to the experts/developers/scientists/engineers who've developed such a system and produce a comfortable, non-coercive environment for them to express and detail their safety mechanisms.

This is not a conversation for technologists, YouTube celebrities, futurists, business types talking up their books, etc. This is a conversation that should ultimately be centered on the creators of the technology and the advanced thinking and framing that allowed them to birth it. No one with such a mind is aiming for unsafe forms of this technology. It is disingenuous to frame them as such so as to necessitate some external paid body's outside work.


Could you summarize your point more concisely? As written this seems to be a stream of disconnected thoughts that are basically entirely unsubstantiated.


You stated it yourself in your post:

> there is absolutely no indication whatsoever that OpenAI would credibly reach this (vague, underspecified) goal before any of the other serious contenders.

> Nor would competitors have any requirement to include OpenAI if and when they were getting close.

In summary:

> No one of the intelligence capable of producing AGI is going to publish the full details

> People who claim they would [publish] have to engage in vague mental gymnastics and mission statements to try to convince people of the illogical.

> Those who develop AGI will of course address the safety problem internally to ensure their product is a success.

> They won't include outside competitors/consortiums, who will of course exploit and use the intellectual property they are exposed to for their competitive advantage.

The software industry is the software industry. Intellectual property is paramount. Nothing has changed. Google isn't giving 100% access to their source code or data sets. Microsoft isn't open sourcing all of their code, etc., etc. Suggesting that a newcomer should for 'safety' reasons is a manipulative 'think of the kids' FUD argument.


> No one of the intelligence capable of producing AGI is going to publish the full details

This is what I'm talking about when I say "unsubstantiated." Do you recognize that this claim isn't true a priori?


You're welcome to contact me when it occurs. I think I identified who I was in an earlier comment, against the advice of someone who claimed it might impact my ability to get capital in the future.


The true nature of AGI research has always been heavy restrictions on the core aspects of the technology. This is where true safety and sensibility are achieved. Those who've stated otherwise, or with much verbiage, eventually arrive at this obvious state. Therefore, publications up until now under the banner of 'AGI' have largely been insignificant in terms of their capability to achieve the core technological aspects of AGI. No one in their right mind would ever publish significant details about AGI technology. This can easily be proved by sound logic and reasoning. There was a commercial step to possibly tease others into revealing heavily valuable/powerful technological underpinnings. It failed, no one took the bait, and no one ever likely will. This has resulted in revised and more mature statements.


> No one in their right mind would ever publish significant details about AGI technology.

Are you sure? I'd publish technical research details about strong AI. I'd probably even open source one with the papers. I think I'm in my right mind; I guess that depends on definition, doesn't it?


Wow; I would strongly recommend you re-think your position! Think of it in terms of, e.g., gain-of-function research in virology (cf. [1]).

[1] https://www.ncbi.nlm.nih.gov/books/NBK285579/


Indeed, no one with a mind capable of grasping the foundations of AGI would take the intellectually incompatible step of publishing the full details of how to construct it. Such an unaware mind would simply fall short long before gaining a glimpse of the underpinnings.

In the first moments of realization, one would more likely be scared into silence for some time. Upon gaining their bearings, they'd probably be drawn into even more silence and careful-minded reflection on the implications and assessment of how to move forward more publicly. Thanks for the parallel framing in yet another powerful research field, gone35.


I'm sorry, I'm not following. Are you saying publishing novel research about strong AI is analogous to releasing a virus, or not taking antibiotics for their full cycle?


No, not quite. I strongly suggest you familiarize yourself with the gain-of-function bioethics literature and recent debates, to get a better sense of what I'm trying to convey.


Why don't you just summarize your actual point or at least provide further guidance? You literally posted a link without any further clarification about its relevance.

As it stands, you're not giving me any incentive to "strongly reconsider" my position.


[flagged]


I sound uncharitable because you posted a (from my perspective, completely random) link to a research article in a different field and implored me to change my thinking, without any other commentary. That's not a substantive addition to the conversation, because on my end I have no idea whether to take your link seriously or how much time or effort to invest in learning about it.

After reading it for two minutes, it's not obvious how to take a productive insight about artificial intelligence from what seems to be an article about mutations. I offered my sincere first thought about what you might have meant and asked for further clarification, and you shot that down without clarifying.

Now after I've twice asked you for clarification and you haven't provided it, you're telling me you're wasting your time. Do you see how this is unhelpful? It's borderline Kafkaesque.


He's probably suggesting that in virology it's generally frowned upon to publish research on how to, for instance, make smallpox airborne and as virulent as the flu. So it could be wise to leave out the exact details of how to make an AI recursively self-improving when publishing. If you've spent time in research, you'll know many groups already do this by leaving out small but critical details that make replication difficult or impossible. This is across fields, not just CS/AI.


*She --- but yeah, as a first-order approximation, that's in the vicinity of it. Thanks!


Ok, so this would lead to restricting initial access to strong AI to a few well-funded corporate and government groups.

Very briefly though, because it's way easier to leak code than to obtain source material and tools to weaponize a deadly virus.


The exact details of how to make a thermonuclear weapon are classified; do you think that is improper?


Are you comparing building a nuclear bomb to running a piece of code?


Well, it depends on the piece of code, doesn't it? How many people can actually run Google's search engine on their own hardware? Assuming they had access to the code.


See my response above.


We are talking about an actual AGI, right? Which certainly has the potential to do damage comparable to nuclear weapons.


> Very briefly though, because it's way easier to leak code than to obtain source material and tools to weaponize a deadly virus.

True, but perhaps AGI would require substantial, non-trivial supercomputing resources as well...


Let's say you need 1,000 p3.16xlarge instances to run it (that's 8,000 V100 GPUs); that's about $25,000/hour. So the launch price is within reach of most US programmers. After launch, a true AGI can probably find a way to stay "alive".
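
A back-of-the-envelope check in Python (the ~$24.48/hour on-demand rate per p3.16xlarge is my assumption from AWS's published pricing; the instance count is the hypothetical above):

  # Rough cost math for the scenario above.
  instances = 1000
  rate_per_hour = 24.48              # assumed on-demand $/hr per p3.16xlarge
  gpus = instances * 8               # 8 V100s per instance -> 8,000 GPUs
  cost_per_hour = instances * rate_per_hour
  print(gpus, round(cost_per_hour))  # -> 8000, ~24480 (roughly $25k/hour)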


It also depends on your capability to discover the underpinnings of strong AI... Certain mindsets preclude one from doing so. If you truly understood the underpinnings of strong AI and weren't moved to heavily restrict publicized details, I'd strongly question whether what you had discovered/come to understand truly was the 'underpinnings' of the technology.

As such, for those quick to publish or say they would publish and share details, one is able to quickly ascertain the possible strength of what they have discovered or feel they have the possibility of discovering. That being said, it's interesting that several proponents of 'everyone share what they find' have moved to more maturely state that 'restrictions may apply'. Sensibly signaling that they indeed wouldn't reveal certain details upon discovering their true nature and capability, for obvious reasons (a danger to society to allow such power/capability to be detailed and therefore used in negative ways).

Human history has myriad examples of powerful technology being misused and abused. The current age of disinformation is one of our more modern ones. The way in which social media has been weaponized is yet another. Weak AI has already been used for destructive means and in manipulative manners for profit. A good deal of unsafe end products which utilize weak AI are already fielded. The fielding of which was made possible by deep pockets manipulating policymakers and regulation.

A sensible mind capable of producing strong AI will have observed and digested these clearly visible truths, and it would move them to restrict access to and publication of their works. Doing otherwise would highlight a level of ignorance, immaturity, and denial when it comes to grasping the current state of humanity. It is this which would preclude one from grasping the nature and foundations of strong AI in the first place.

[Nature's lockout/safety mechanism for such a stage in Human development/capability]


This is performance art, right? Your comments in this thread strain the limits of credulity. Do you actually believe there is an intellectual barrier that will helpfully ensure only the virtuous are capable of doing artificial intelligence research?


[flagged]


I don't really understand what you mean.


I was referring to the parent of your post.


It could be many things. There is a spectrum of human intelligence. There is a spectrum of capability of artificial intelligence that can be produced by a spectrum of individual human intelligence. Do you believe, given many years of historical evidence and demonstration, that one's own personal limitations don't preclude them from certain discoveries? Do you think that money can solve every problem? Do you think if you throw enough PhDs in a room that you can solve any problem in the world? What is intelligence? What is its fundamental nature, especially at higher orders? Does a truly and deeply intelligent individual focus on trivialities? Is money/fame their primary motivator? What would such an individual sacrifice for ultimate answers? What catches their eye? What keeps them up at night? If not virtuous, how do they not self-destruct at the higher tiers of capability/intelligence? What keeps them together at the limits of comprehension/understanding? Why don't other paths appeal to them? Why don't they cut short and cash in on incremental achievements? What effect does this have on long-term progress? Why/how do they think differently than everyone else doing research? What is their failure rate? Why are they okay with possibly researching this problem their whole life and not getting an answer or not making money from their efforts? What then are they capable of doing that others are not? Lots of questions... Lots of answers... Ultimately.


Weak AI is dangerous because it has no intelligence. It is fundamentally structured as a dumb/blind optimization process. The effort necessary to prove safety/security for such a system could very well outweigh the amount of development that was needed to bring the technology to bear.
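
To make "dumb/blind optimization" concrete, here is a minimal toy sketch (the objective and numbers are invented purely for illustration):

  # A blind optimizer maximizes whatever proxy it is handed; it has no
  # notion of whether the proxy matches the designer's actual intent.
  def gradient_ascent(grad, x, lr=0.1, steps=100):
      for _ in range(steps):
          x += lr * grad(x)
      return x

  # Proxy objective f(x) = -(x - 3)^2, so grad f(x) = -2(x - 3).
  x_opt = gradient_ascent(lambda x: -2 * (x - 3), x=0.0)
  print(round(x_opt, 3))  # -> ~3.0, regardless of whether x = 3 is 'safe'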

AGI/Real Intelligence are far different animals than Weak AI and would require far less "safety" and policing. Real Intelligence is a phenomenon that exists on a scale of sorts that many never achieve in its higher forms. It is in lower forms that intelligence lends itself to destructive ends via ignorance.

Attack vectors on a formalized intelligence/AGI system can be severely restricted using very sensible/affordable approaches. The overcomplication and framing of this as a theoretical problem centers on a number of people's desire to profit immensely from FUD.

Overall, AGI exists in a functional form today and has been executed in an online environment. It is secured via physically restricted in-band and out-of-band channels.


> Overall, AGI exists in a functional form today and has been executed in an online environment. It is secured via physically restricted in-band and out-of-band channels.

I'm pretty sure this is false.


Check my comment history. I can assure you it's true, as I will demonstrate in the near future. As for the security, you'd have no ability to penetrate internal aspects of it without physical and detectable access patterns. This is achieved using common-sense design methodologies that are already proven industry standards. Behaving as though securing it is merely theoretical smacks of a cash grab to me. If you have something valuable that you want to secure, magically you come up with ways to safely secure it.


To be frank, your comment history has all the hallmarks of a crank [1]. Specifically, points 10, 9, 7 and 6, although there's also evidence of 2 and 8. Now I could be wrong, but convincing me of that would take a demonstration, or at least an explicit description of the capabilities of your AGI.

[1]: https://www.scottaaronson.com/blog/?p=304


After the constant invitations to check that poster’s comment history, I did, and you are correct in your assessment.

For example, they claim to have invented an AGI themselves. https://news.ycombinator.com/item?id=16461258

It’s unfortunate that in this field it’s possible to write so much before people realise.


Old foundations are meant to be redefined/invalidated by new ones. Complexity theory, computational theory, and graph theory are all subsets of information theory. They're approaches/frames. New ones can be created that invalidate the established limits imposed by others.

Everything is possible until proven impossible. Given how little attribution is paid to people who break through fundamental aspects of understanding, and given how much politics and favoritism is played in publications/academic circles, one who doesn't have standing in such circles would be a fool to openly resolve some of the most outstanding and fundamental problems that plague them. I've read about and watched a number of individuals with proven track records and contributions to science/technology be marginalized, exploited, and written off. I've watched a number of corporations exploit such individuals' works with no attribution or established recognition beyond a footnote. I've watched the world attempt to suggest such inventions/establishments come via mechanisms and institutions that they do not. So, I know better this time around as to what to do with my works.

Just about every person who contributes fundamentally to the world is called a crank at some point in time. It conveys the huge disconnect the average and even prestigious individual has with reality, and/or the attempts they make to reframe it to fit their purpose, narrative, standing...

My comment history has yet to receive any remarks that refute its establishments beyond downvotes. It stands alone in this manner, as will the foundational establishment of AGI.

http://nautil.us/issue/21/information/the-man-who-tried-to-r...


Your comments don't receive any refutation because they make vague, unfalsifiable claims.

You claim you have invented an AGI, but won't show anyone.

I say you are making it up. Falsify that.


> Check my comment history. I can assure you it's true, as I will demonstrate in the near future.

Oh, why didn't you just say that in the first place? Now that I have your assurance I can obviously agree with you that strong AI is a thing that currently exists. I concede to your clear and inarguable expertise and proof on the matter; best of luck with your demonstration!


> I concede to your clear and inarguable expertise and proof on the matter; best of luck with your demonstration!

There's a reason why this technology ultimately 'comes out of nowhere'. It is not that it will come out of nowhere... It will be that those having the capability of developing it, who have detailed it to a degree which should yield interesting questions, were largely ignored for the many years of development leading up until it is proven beyond a shadow of a doubt. I relied not on luck but on diligence and persistence to seek the answers necessary, no matter where they resided. In many people's minds, only millions of dollars of funding, prominent names, and companies can produce the technology. Such people ignore the history of technology that proves the contrary.

I rely not on name but on sound commentary. AGI could exist and could be functional at this very moment. It could be very safely secured. There's nothing to suggest otherwise beyond the limits of one's own understanding. All of the hand waving, safety propaganda, and doomsday FUD disappears in such a scenario: blink, we're all still here.


> AGI could exist and could be functional at this very moment. It could be very safely secured. There's nothing to suggest otherwise beyond the limits of one's own understanding.

Wait, is this your argument?

  - AGI could be safely secured given current industry standards
  - Creators would likely not publish their success
  - Therefore AGI currently exists


My framing is that it could exist and the broad majority would be none the wiser. My framing also is that a number of groups/individuals have likely exposed enough to cause people to question what stage they are at in their development, but instead receive the common: "yeah sure buddy, let me know when you have a demo." So, ultimately it will indeed 'come out of nowhere', because society and many individuals aren't conditioned to, or even have their 'hearing' tuned to, be aware of its coming. The safety discussion is a moot topic in this context, as it's baked into development and intelligence.

There are discussion boards all over the internet, TED talks, economic forums, AI panels, etc. It's all the same song and dance, save for the many groups that get no mention. Even as the same voices and idols of note continue to center on the same fundamental approaches that don't seem to be going anywhere fast, no one listens to groups/individuals thinking differently or trying a fundamentally new approach. Hinton and other prominent figures have even come to state that the real amazing development will come from someone who scraps everything, starts from scratch, and approaches things in a fundamentally new way. No media outlets. No funding groups (even though they say they're looking for something amazing/new). No commentators. No lay person looks to see if there are any such people.

In such an environment, AGI could very well already be established, and the reason is that no one, by and large, is looking for its establishment. The focus instead is on a handful of prominent names that are well capitalized. So, the majority, and anyone of such thinking, indeed misses the event.


Safety has become a convoluted term for pseudo control over unintelligent and unpredictable Weak AI. The safety problem as it is framed in its current state centers on principal ideology for Weak AI and has, from what I can see, nothing to do w/ AGI nor are the approaches compatible. I seriously question what is the true motivation behind this over-stated agenda and have many answers as to why it exists and why it is so heavily funded/spotlighted.


> I seriously question what is the true motivation behind this over-stated agenda and have many answers as to why it exists and why it is so heavily funded/spotlighted.

First, you could say the same thing for all AI research at the moment! Grandiosity is perhaps even more common in subcommunities of AI that aren't safety focused.

Aside from grandiosity (either opportunistic or sincere), I don't think there's any sinister motivation.

More importantly, I don't think the safety push is misplaced. Even if the current round of progress on deep (reinforcement) learning stays sufficiently "weak", the safety question for resulting systems is still extremely important. Advanced driver assist/self-driving, advanced manufacturing automation, crime prediction for everything from law enforcement to auto insurance... these are all domains where 1) modern AI algorithms are likely to be deployed in the coming decade, and 2) where some notion of safety or value alignment is an extremely important functional requirement.

> ...and has, from what I can see, nothing to do w/ AGI nor are the approaches compatible

In terms of characterizing current AI safety research as AGI safety research? Well, there is a fundamental assumption that AGI will be born out of the current hot topics in AI research (ML and especially RL). IMO that's a bit over-optimistic. But I tend to be a pessimist.

> ...principal ideology...

As an aside, I'm not sure what this means.


Profit seeking. Career building. Fame and prominence aren't sinister. Instead they are common human motivations. Common enough to easily group a significant portion of the grandiosity centered around 'AI'.

What easily breaks this down is the depth and breadth of the research effort vs. that of the productization and commercialization effort. As for research, the only thing that is required is a computer, power, and an internet connection. Again, this breaks down the vast majority of the grandiosity and carves out one's true motivations.

> More importantly, I don't think the safety push is misplaced.

Here's how I saw it some years ago... You can beat your head against the wall and create Frankenstein amalgamations of ever-evolving puzzle pieces that require expensive and highly skilled labor to make sense of, with the end product being an overhyped optimization algo with programmatic policy/steering/safety mechanisms. Or you can clearly recognize and admit that the foundation of it is possibly flawed, start from scratch, and work towards what intelligence is and how to craft it into a computational system the right way. The former gets you millions if not billions of dollars, a career, recognition, and a cushy job in the near term, but will slowly lock you out from the fundamental stuff in the long term. The latter pursuit could possibly result in nothing, but if uncovered could change the world, including nullifying the need for tons of highly paid labor to do development for it. Everyone in the industry wants to convince their investors the former approach can iterate to the latter, but they know in their hearts it can't (Shhh! don't tell anyone). So, the question for an individual is how aware and honest they are with themselves and what their true motivation is. You can put on a show and fool lots of people, but you ultimately know what games you're playing and what shortfalls will result.

> Well, there is a fundamental assumption that AGI will be born out of the current hot topics in AI research (ML and especially RL).

Quite convenient for those cashing in on the low-hanging fruit who would like investors to extend their present success into far-off horizons.

> As an aside, I'm not sure what this means.

It means the thinking that weak AI is centered on could cause one to be locked out from perceiving that of AGI. It means: https://www.axios.com/artificial-intelligence-pioneer-says-w... But everyone is convinced they don't have to, and can extend/pretend their way into AGI.


I don't think the tenor of your post is very fair.

> Again, this breaks down the vast majority of the grandiosity and carves out one's true motivations... Everyone in the industry wants to convince their investors the former approach can iterate to the latter, but they know in their hearts it can't (Shhh! don't tell anyone). So, the question for an individual is how aware and honest they are with themselves and what their true motivation is. You can put on a show and fool lots of people, but you ultimately know what games you're playing and what shortfalls will result.

The rest of my post is a response to this sentiment.

> As for research, the only thing that is required is a computer, power, and an internet connection.

All that's necessary for world-shattering mathematics research is a pen and paper. But still, most of the best mathematicians work hard to surround themselves by other brilliant people. Which, in practice, means taking "cushy" positions in the labs/universities/companies where brilliant people tend to congregate.

Maybe most great mathematicians don't purely maximize for income. But then, I doubt OpenAI is paying as well as the hedge funds that would love to slurp up this talent! So people working on safe AI at places like OpenAI cannot be fairly criticized. They're comfortable but clearly value working on interesting problems and are motivated by something other than (or in addition to) pure greed/comfort.

> Profit seeking. Career building. Fame and prominence aren't sinister. Instead they are common human motivations. Common enough to easily group a significant portion of the grandiosity centered around 'AI'.

So what? None of these motivations necessarily preclude doing good science. Some of those are even strong motivators for great science! The history of science contains a diverse pantheon of personality types. Not every great scientist/mathematician was a lone genius pure in heart. In fact, most were far more pedestrian personalities.

The "pious monk of science" mythology is actively harmful toward young scientists for two reasons.

First, the ethos tends to drive students away from practical problems. Sometimes that's ok, but it's just as often harmful (from a purely scientific perspective).

Second, this mythology has significant personal cost. More young scientists must realize that it is possible to make significant contributions toward human knowledge while making good money, building a strong reputation, and having a healthy personal life. Maybe then we'd have more people doing science for a lifetime instead of flaming out after 5-10 years.

> It means the thinking that weak AI is centered on could cause one to be locked out from perceiving that of AGI.

Thanks for the clarification!


I think what I have stated is quite fair and established at this point in documented human history... There's no reason to play games and shy away from the truth and reality anymore. These continued games we play with each other by masking our true selves and intentions are what lead to the bulk of suffering and to what people claim 'we didn't see coming'. The vast potential of the information age has devolved into a game of disinformation, manipulation, and exploitation, and the underpinnings of such were clear to anyone being honest with themselves as it began to set in. The Facebook revelations were stated years in advance, before we reached this juncture. Academics/psychologists conducted research and published reports on observations any honest person could make about what the platforms functioned on and what they were doing to society.

> All that is required is pen/paper/computer/internet connection

Then why do we play the game of unfounded popularity? Why isn't there a more equal spotlight? Why do the most uninformed on a topic acclaim the most prominent voice? In these groupings you mention are hidden and implied establishments of power/capability. A grouping of PhDs, regardless of their works, is considered more valuable than an individual with no such ranking who has established far more (as shown by history). The forgotten heroes, contributors, etc. is a common observation of history. It's not that they're 'forgotten'; it's that the social psyche chooses not to spotlight or highlight them because they don't fit certain molds. An established/name personality asks for funding and gets it regardless of whether or not they have a cohesive plan for achieving something. Convince enough people of a doomsday destructive scenario and you'll get more funding than someone who is trying to honestly create something. Of course, you can then edit mission statements post-funding. What of the lost potential opportunity? What of the current state of academia?

> https://www.nature.com/news/young-talented-and-fed-up-scient...
> https://www.nature.com/news/let-researchers-try-new-paths-1....
> https://www.nature.com/news/fewer-numbers-better-science-1.2...

The articles do get published, long after a trend has been operating... Nothing changes. It then takes someone who truly wants to implement change for the better, with no other influence or goal in mind, to fundamentally change something. This happens time and time again throughout history, but institutions and power structures marginalize such occurrences to rebuff them and necessitate their own standing.

You don't need people in the same physical location in 2018 to conduct collaborative work, yet the physical institution model remains ingrained in people's heads. Money could go further, reach more developers, and provide for more discovery if it were spread out more and concentrated in lower-cost areas, yet the elite circles continue to congregate in the valley.

The ethos of Type A extroverts being the movers/shakers of the world has been proven a lie in recent times. So, what results in fundamental change/discovery isn't a collective of well-known individuals at grand institutions. It is indeed the introvert at a lesser-known university who publishes a world-changing idea and paper, and who only then becomes a blurred footnote in a more prominent institution's and individual's paper. The world does function on populism and fanfare.

> Second, this mythology has significant personal cost.

It indeed does. It causes the true innovators and discoverers a world of pain and suffering throughout their lives, as they are crushed underneath the weight of bureaucratic and procedural lies the broader world tells itself to preserve antiquated structures.

> More young scientists must realize that it is possible to make significant contributions toward human knowledge while making good money, building a strong reputation, and having a healthy personal life. Maybe then we'd have more people doing science for a lifetime instead of flaming out after 5-10 years.

More young scientists must be given the chance to pursue REAL research and be empowered to do so. They must be empowered to think differently. They must be emboldened to leapfrog their predecessors and encouraged to do so without becoming some head honcho's footnote. Their contributions must be recognized. They must be funded at a high level without bureaucratic nonsense and favoritism. A PhD should not undergo an impoverished hell of subservience to an institution, resulting in them subjecting others to nonsensical white papers and overcomplexity. A lot of things should change that haven't, even as prominent publications and figures have themselves admitted: https://www.nature.com/collections/bfgpmvrtjy/

I've walked the halls of academia and industry... I've seen the threads and publications in which everyone complains about the elusive problems, but no one has the will or the desire to be honest about their root causes or to commit to the personal sacrifices it will take to see solutions through.

I'll probably have the most negative score on Ycombinator by the end of my commentary in this thread, yet I will be saying the most truthful things... This is the inverted state of things.

So, mankind has had a long time to break the loops it seems stuck in. Now is the time for a fundamental leap to that next thing, beyond the localized foolishness, lies, disinformation, and games we play with each other.


You are correct in many ways, namely on a technical/compatibility level. Having no fundamental understanding of how AGI is structured or operates on a technical level renders most efforts & policies on safety moot. If more effort were focused on the fundamental underpinnings of AGI, and a more broad-based funding mechanism were established for those doing so, there would have been the possibility for steering all along development. Having not done so, in order to capture lower-hanging fruit and funding for oneself, now leaves many scrambling to align themselves towards work that will no doubt be unveiled suddenly (as no spotlight or funding is giving any notice to it in the short/medium term).

Also, safety is an easily addressable issue when the system is truly intelligent. When the systems are dumb and statistical in nature, a lot of work is done on 'safety' as a pseudo-intelligent control system for an otherwise dumb black box.
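
A minimal sketch of what that "pseudo-intelligent control system" pattern looks like in practice (the policy and bounds are invented stand-ins, not any real system):

  # The learned model is an opaque black box; 'safety' is bolted on as
  # an external clamp/veto layer rather than built-in understanding.
  def black_box_policy(obs):
      return obs * 10.0                # stand-in for an opaque learned mapping

  def safety_shield(action, lo=-1.0, hi=1.0):
      return max(lo, min(hi, action))  # a hard-coded bound, not intelligence

  print(safety_shield(black_box_policy(0.7)))  # raw 7.0 is clamped to 1.0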


Having not defined what intelligence is, and having not declared the nature of General Intelligence, you're sure an aspect of clearly defined weak AI is a significant contribution to 'AGI'... Interesting.

> It's true that AlphaZero's knowledge is unable to be generalized for other systems

Interesting admission.

> This is the first piece of the puzzle of more general AI.

The first piece is generalized intelligence. Architecturally, it looks nothing like AlphaZero. However, you feel:

> Generalization, I would consider as the second piece of the puzzle to more general AI.

How is that the second piece? It's the piece.

> Generalization may be achieved with potential research in transfer learning, model based RL, symbolic network. But without a stable RL algorithm such as DQN as foundation, generalization has nothing to stand on.

So you're of the belief that current approaches are compatible with, and are the underpinning of, Artificial General Intelligence, while Hinton is convinced one needs to scrap it all and start over. Sound advice is being ignored, and there is a clearly entrenched decision to continue pushing along with iterating weak AI. I came here to test the waters... The commentary and the K-value feedback I've received so far inform me quite profitably.


Hinton's criticism is very valid, but it's not quite about AlphaGo and its branch of ML. His criticism revolves around supervised learning and backprop, which cannot be used to achieve the so-called AGI, because an ANN is nothing like our brain's real NN. When Hinton gave the speech back in 2014, NNs were having a huge explosion of hype and were mostly used for supervised learning, which is really only good at classification and regression problems; it cannot make decisions outside of its training.

The famous DeepMind DQN paper (a precursor to AlphaGo) was published after Hinton's talk. The DQN paper practically opened a new chapter in the reinforcement learning field. I am not sure if you are familiar with reinforcement learning. RL is learning by trial and error, model-free and largely non-Bayesian, similar to how humans learn. Up until AlphaGo, the RL field was stuck in limbo because it was having a very hard time learning non-linear problems (which are the majority of problems in nature).

When I say generalization, I mean generalization of knowledge. Generalization is the second piece of the puzzle because, even as humans, we learn from experience; after enough examples we begin to generalize. Up until DQN came out, we couldn't even learn effectively. It's the equivalent of a human baby with a severe memory problem. With DeepMind's DQN, we can achieve much more stable learning on non-linear systems, and we can begin to add components such as generalization (e.g., transfer learning), intuitions (e.g., intuitive physics), and symbolic networks.
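
For readers unfamiliar with DQN, here is a minimal sketch of the two ideas behind its stable learning, experience replay and a periodically synced target network. This is an illustrative tabular toy on an invented 5-state chain, not DeepMind's implementation:

  import random
  import numpy as np

  n_states, n_actions, gamma, lr = 5, 2, 0.9, 0.1
  q = np.zeros((n_states, n_actions))   # tabular stand-in for the deep net
  q_target = q.copy()                   # frozen copy used for bootstrapping
  replay = []                           # experience replay buffer

  def step(s, a):
      # Move left/right along a chain; only the right end pays reward.
      s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
      return s2, (1.0 if s2 == n_states - 1 else 0.0)

  for t in range(2000):
      s, a = random.randrange(n_states), random.randrange(n_actions)
      s2, r = step(s, a)
      replay.append((s, a, r, s2))                # store transition
      s_, a_, r_, s2_ = random.choice(replay)     # replay de-correlates updates
      target = r_ + gamma * q_target[s2_].max()   # bootstrap off the frozen copy
      q[s_, a_] += lr * (target - q[s_, a_])
      if t % 100 == 0:
          q_target = q.copy()                     # periodic target-network sync

  print(q.round(2))  # Q-values grow toward the rewarding right end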

I am not too sure what you mean by weak AI and general AI. For me, an AI which can learn similarly to how humans learn, is able to apply generalized knowledge when facing a brand new problem, and can independently think and make decisions without human assistance, is general enough.

Yes, much work needs to be done, but I don't believe we are going in the wrong direction. Though I'd be glad to be proven wrong, and I am very fascinated by this debate; if you would like, we could continue discussing it over email/chat?

