
I'm reposting this comment I made a couple of months ago. He's no John McCarthy, but he was a true pioneer in the commercial applications of AI:

==================================

I don't think that's a very fair assessment of Kurzweil's role in technology.

He was on the ground, getting his hands dirty with the first commercial applications of AI. He made quite a bit of money selling his various companies and technologies, and was awarded the National Medal of Technology by President Clinton.

As I was growing up, there was a series of "Oh wow!" moments I had, associated with computers and the seemingly sci-fi things they were now capable of.

"Oh wow, computers can read printed documents and recognize the characters!"

"Oh wow, computers can read written text aloud!"

"Oh wow, computers can recognize speech!"

"Oh wow, computer synthesizers can sound just like pianos now!"

I didn't realize until much later that Kurzweil was heavily involved with all of those breakthroughs.



He's also an ACM Fellow, from its first class - along with people like Knuth, Cerf, Rivest, Codd, etc.


In addition, I'd rank Minsky, Larry Page, Bill Gates, Dean Kamen, Rafael Reif, Tomaso Poggio, Dileep George, and Kurzweil's other supporters as much more qualified to judge the merits of his ideas than Kurzweil's detractors like Hofstadter, Kevin Kelly, Mitch Kapor, and Gary Marcus. It seems that Hofstadter is the only one of that group who is really qualified to render a verdict.

http://howtocreateamind.com/


To put it another way - if a visionary isn't controversial, she's probably not a visionary.


I believe your summary suffers from selection bias.

I think most people have views that are controversial. Only when one is famous do others hear about those controversial views. Furthermore, just about every famous project has its detractors, leading to controversy.

Take Stephen Hawking as an example. He doesn't believe there was a god who created the universe. That's a controversial view to many. But when a non-famous atheist says exactly the same thing, few take notice, so you don't hear about those people.

Take Alan Kay as another example. He's one of the key people behind OLPC, which given its criticism could be considered controversial. But name any big project which has neither criticism nor controversy.

Ole Kirk Christiansen founded Lego. He was visionary in that he saw a future in plastic bricks as toys for children. What were his controversial views? I have no clue. But he probably had some. Perhaps his motto "Only the Best is Good Enough" is controversial to someone who believes that second best may be good enough for some cases.


Controversial views are a necessary but not sufficient condition for visionary status.


I believe that everyone has controversial views. Do you think god exists? Do you think there should be more gun ownership? Or less? Do you think abortion should be banned? Only allowed in a few cases? Up to the mother to decide? But only until the third trimester?

Should there be public drinking of beer? Public nudity? Public urination? Public displays of affection?

Should women always have their heads covered while in public? What about men? Should we ban male circumcision until the male is old enough to make the decision for himself? Should we have the draft? What about mandatory civil service?

Do you believe in mandatory busing? Separate but equal? Co-ed schools or sex-segregated schools? State income tax or not? Legalized gambling? What if it's only controlled by the state? Should alcohol sales only be done by the state, or can any place sell vodka? What about beer? Should alcohol sales be prohibited within a certain distance of schools? Are exceptions allowed?

Do vaccines cause autism? Was the Earth created less than 10,000 years ago? Can you petition the Lord with prayer? Is the Pope God's representative on Earth, or the anti-Christ? Should non-believers be taxed at a higher rate than believers?

Should I go on? All of these are controversial. If you have views one way or the other, then your views will be controversial at least to some, if not to most. And if you have no views on a topic, then that itself can be controversial. As the morbid joke from the height of the Troubles in Northern Ireland goes: "yes, but are you a Catholic Jew or a Protestant Jew?"

If everyone has controversial views, then of course they are a precondition for being a visionary. They are also a precondition for not being a visionary. Name one famous (so I have a chance of knowing something about that person) non-visionary who did not have controversial views.

But first, name a controversial view of the founder of Lego ... who is definitely described as a visionary.


The degree of controversy is obviously diminished when someone has been successful. But I'll take a stab at the last question...

As for controversy with LEGO's founder: 1. Structuring the company around "doing good" instead of profitability and other, more "corporate", values. Google gets flak for this to this day, and LEGO almost went broke following this tenet until they revamped the corporate structure to pursue profitability instead. 2. Use of plastics instead of wood, deviating from the company's original product base. Surely, that's what paved the way for LEGO, but I'm sure it was a somewhat controversial switch in some circles, not least among carpenters and some employees. 3. LEGO's many legal battles and use of patents might be construed as controversial in some circles.

More to the point of the OP, it's hard to be a visionary if your view does not in any way, shape, or form deviate from the norm. Deviation from the norm is what sets the visionary apart; hence it's sometimes said that visionaries are controversial, because this deviation from the norm more often than not causes controversy in the areas in which they are deviating.


"Doing good" is a standard crafter/engineer approach, so it's not like that alone is visionary. My Dad's phrase "do your best or don't do it at all." Was he a visionary?

He wanted the whole world to be Christian, and went to Ecuador to work as a missionary. Did that make him and my Mom (and my dad's parents (also missionaries) and various others in my extended family) visionaries? What about all of the Mormons who do their two years of missionary work?

Consider also all the people who were visionary, tried something, and failed. In part, perhaps, because their vision wasn't tenable. You don't hear about all of those visionary chefs who had a new idea for a restaurant, only to find out that it wasn't profitable.

Add all those up, and there are a lot of visionaries in the world. Enough that the non-visionaries are the exception.


Not to be a downer, but text-to-speech, speech recognition, music synthesis, and so forth are all fairly obvious applications of computer science that anyone could have pioneered without being a genius. Likewise, predicting self-driving cars is nothing science fiction has not already done.

I'm sure he is a smart guy, but I think we have put him on a pedestal when he probably is not as remarkable as we want him to be.


I don't mean to pick on you (and I certainly didn't downvote you), but you seem like a poster boy for just how easy it is to take inventions and innovation for granted after the fact.

I find it instructive to occasionally go to Youtube and load up commercials for Windows 95, 3.1, the first Mac, etc., or even to dust off and boot up an old computer I haven't touched for decades. Not to get too pretentious, but it's a bit like Proust writing about memories of his childhood coming flooding back to him just from the taste of a cake he ate as a child.

When you really make a concerted effort to remember just how primitive previous generations of computing were, I think it puts Kurzweil's predictions and accomplishments in a much more impressive context.

This was the state-of-the-art PC back when Ray was forming his first companies: https://www.youtube.com/watch?v=vAhp_LzvSWk

I posted some other thoughts about Ray's track record a while back:

==========================================

I read his predictions for 2009 (which he wrote in the late 90s) only a couple of years before they were supposed to come about, and many seemed kind of far-fetched - and then all of a sudden the iPhone, iPad, Google self-driving car, Siri, Google Glass, and Watson come out, and he's pretty much batting a thousand.

Some of those predictions were a year or two late, in 2010 or 2011, but do a couple of years really matter in the grand scheme of things?

Predicting that self-driving cars would occur in ten years in the late 90s is pretty extraordinary, especially if you go to youtube and load up a commercial for Windows 98 and get a flashback of how primitive the tech environment actually was back then.

Kurzweil seems to always get technological capabilities right. Where he sometimes falls flat is in technological adoption - how actual consumers are willing to interact with technology, especially where bureaucracies are involved - see his predictions on the adoption of elearning in the classroom, or using speech recognition as an interface in an office environment.

Even if a few of his more outlandish predictions like immortality are a few decades - or even generations - off, I think the road map of technological progress he outlines seems pretty inevitable, yet still awe inspiring.


"Predicting that self-driving cars would occur in ten years in the late 90s is pretty extraordinary"

There have been predictions of self-driving cars for more than half a century. It's in Disney's "Magic Highway" from 1958, for example. There was an episode of Nova from the 1980s showing CMU's work in making a self-driving van.

Researching now, Wikipedia claims: "In 1995, Dickmanns' re-engineered autonomous S-Class Mercedes-Benz took a 1600 km trip from Munich in Bavaria to Copenhagen in Denmark and back, using saccadic computer vision and transputers to react in real time. The robot achieved speeds exceeding 175 km/h on the German Autobahn, with a mean time between human interventions of 9 km, or 95% autonomous driving. Again it drove in traffic, executing manoeuvres to pass other cars. Despite being a research system without emphasis on long distance reliability, it drove up to 158 km without human intervention."

You'll note that 1995 is before "the late 90s." It's not much of a jump to think that a working research system of 1995 could be turned into something production ready within 20 years. And you say "a year or two late", but how have you decided that something passes the test?

For example, Google Glass is the continuation of decades of research in augmented reality displays going back to the 1960s. I read about some of the research in the 1993 Communications of the ACM "Special issue on computer augmented environments."

Gibson said "The future is already here — it's just not very evenly distributed." I look at your statement of batting a thousand and can't help but wonder if that's because Kurzweil was batting a thousand when the book was written. It's no special trick to say that neat research projects of now will be commercial products in a decade or two.

Here's the list of 15 predictions for 2009 from "The Age of Spiritual Machines (1999)", copied from Wikipedia and with my commentary:

* Most books will be read on screens rather than paper -- still hasn't happened. In terms of published books, a Sept. 2012 article says "The overall growth of 89.1 per cent in digital sales went from £77m to £145m, while physical book sales fell from £985m to £982m - and 3.8 per cent by volume from £260m to £251m." That puts digital at roughly £145m of a £1,127m total by value, about 13 per cent, nowhere near "most". I'm using sales as a proxy for reads, and while e-books are generally cheaper than physical ones, there's a huge number of physical used books, and library books, which aren't on this list.

* Most text will be created using speech recognition technology. -- entirely wrong (there goes your 'batting 1000')

* Intelligent roads and driverless cars will be in use, mostly on highways. -- See above. This is little more common now than it was when the prediction was made.

* People use personal computers the size of rings, pins, credit cards and books. -- The "ring" must surely be an allusion to the JavaRing, which Jakob Nielsen had, and talked about, in 1998, so in that respect, these already existed when Kurzweil made the prediction. Tandy sold pocket computers during the 1980s. These were calculator-sized portable computers smaller than a book, and they even ran BASIC. So this prediction was true when it was made.

* Personal worn computers provide monitoring of body functions, automated identity and directions for navigation. -- Again, this was true when it was made. The JavaRing would do automated identity. The Benefon Esc! was the first "mobile phone and GPS navigator integrated in one product", and it came out in late 1999.

* Cables are disappearing. Computer peripherals use wireless communication. -- I'm mixed about this. I look around and see several USB cables and power chargers. Few wire their house for ethernet these days, but some do for gigabit. Wi-fi is a great thing, but the term Wi-Fi was "first used commercially in August 1999", so it's not like it was an amazing prediction. There are bluetooth mice and other peripherals, but there were also infra-red versions of the same a decade previous.

* People can talk to their computer to give commands. -- You mention Siri, but Macs have had built-in speech control since the 1990s, with PlainTalk. Looking now, it was first added in 1993, and is on every OS X installation. So this capability already existed when the prediction was made. That's to say nothing of assistive technologies like Dragon, which supported dictation and spoken commands in the 1990s.

* Computer displays built into eyeglasses for augmented reality are used. -- "are used" is such a wishy-washy term. Steve Mann has been using wearable computers (the EyeTap) since at least 1981. Originally it was quite large. By the late 1990s it was eyeglasses and a small device on the belt. It's no surprise that in 10 years there would be at least one person - Steve Mann - using a system where the computer was built into the eyeglasses. Which he does. A better prediction would have been "are used by over 100,000 people."

* Computers can recognize their owner's face from a picture or video. -- What's this supposed to mean? There was computer facial recognition already when the prediction was made.

* Three-dimensional chips are commonly used. -- No. Well, perhaps, depending on your definition of "3D." Says Wikipedia, "The semiconductor industry is pursuing this promising technology in many different forms, but it is not yet widely used; consequently, the definition is still somewhat fluid."

* Sound producing speakers are being replaced with very small chip-based devices that can place high resolution sound anywhere in three-dimensional space. -- No.

* A 1000 dollar pc can perform about a trillion calculations per second. -- This happened. This is also an extension based on Moore's law and so in some sense predicted a decade previous. PS3s came out in 2006 with a peak performance estimated at 2 teraflops, giving the hardware industry several years of buffer to achieve Kurzweil's goal.

* There is increasing interest in massively parallel neural nets, genetic algorithms and other forms of "chaotic" or complexity theory computing. -- Meh? The late 1990s and early 2000s were a heyday for that field. Now it's quieted down. I know 'complexity'-based companies in town that went bust after the dot-com collapse cut off their funding.

* Research has been initiated on reverse engineering the brain through both destructive and non-invasive scans. -- Was already being done long before then, so I don't know what "initiated" means.

* Autonomous nanoengineered machines have been demonstrated and include their own computational controls. -- Ah-ha-ha-ha! Yes, Drexler's dream of a nanotech world. Hasn't happened. Still a long way from happening.

So several of these outright did not happen. Many of the rest were already true when they were made, so weren't really predictions. How do you draw the conclusion that these are impressive for their insight into what the future would bring?



First of all, I would strongly encourage anyone who is interested to check out Kurzweil's 2009 predictions in his 1999 book Age of Spiritual Machines, rather than this Wikipedia synopsis. It puts his predictions in a much more accurate context. You can view much of it here: http://books.google.com/books?id=ldAGcyh0bkUC&pg=PA789&#...

Kurzweil also does a reasonably unbiased job of grading his own predictions here: http://www.kurzweilai.net/images/How-My-Predictions-Are-Fari...

Quite a few of your statements relate to technological adoption vs. technological capability, such as everyday use of speech recognition and ebooks. I clearly stated that Kurzweil is not perfect at predicting what technologies will catch on with consumers and organizations, nor is anyone for that matter. To me, and to most of the people reading this, the most interesting aspect of Kurzweil's predictions is always what technological capabilities will be possible, rather than the rate of technological adoption.

Some of your other statements conflate science fiction with what Kurzweil does: "There have been predictions of self-driving cars for more than half a century. It's in Disney's 'Magic Highway' from 1958, for example." Similarly, most of your other points attempt to make the case that because nascent research projects existed, all of his predictions should have been readily apparent. I'm sorry, but this is pretty much the same hindsight bias displayed by gavanwoolery and Kurzweil's worst critics. Basically, Kurzweil's predictions are absurd to you, right up until they become blindingly obvious.

You can point to obscure German R&D projects all you want (and who knows how advanced that prototype was, or how controlled the tests were), but I was blown away by the Google self-driving car, as were most of the people on HN based on the enthusiasm it received here. I thought it would take at least a decade or so before people took it for granted, but you've set a Wow-to-Meh record in under a year.

Once again, I strongly encourage you to fire up Youtube or dust off an old computer, and really try to remember exactly what the tech environment was like in previous decades for the average consumer. Zip drives, massive boot times, 5.25" floppy disks, EGA, 20MB external hard drives the size of a shoe box, 30-minute downloads for a single mp3 file, $2,000 brick phones, jpgs loading up one line of pixels a second, etc.

To be clear, I'm not a Kurzweil fanboy. He's not some omniscient oracle, bringing down the future on stone tablets from the mount. What he is is a meticulous, thoughtful, and voracious researcher of technological R&D and trends, and a reasonably competent communicator of his findings. I'm very familiar with the track records of others who try and pull off a similar feat, and he's not perfect, but he's far and away the best barometer out there for the macro trends of the tech industry. If his findings were so obvious, why is everyone else so miserable at it? Furthermore, his 1999 book was greeted with the same skepticism and incredulity that all of his later books were.

For some of your other points, I've included links below:

Research has been initiated on reverse engineering the brain - Kurzweil was clearly talking about an undertaking like the Blue Brain project. Henry Markram, the head of the project, is predicting that around 2020 they will have reverse engineered and simulated the human brain down to the molecular level:

http://www.ted.com/talks/henry_markram_supercomputing_the_br...

http://en.wikipedia.org/wiki/Blue_Brain_Project

"A 1000 dollar pc can perform about a trillion calculations per second. -- This happened. This is also an extension based on Moore's law and so in some sense predicted a decade previous." - Pretty much all of Kurzweil's predictions boil down to Moore's law, which he would be the first to admit. I'm not sure what you're trying to say.
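To make the Moore's law point concrete, here's a back-of-the-envelope sketch (the ~1 GFLOPS figure for a $1000 PC in 1999 is my own rough guess, not a sourced number): at Kurzweil's claimed one price-performance doubling per year, the prediction falls out almost mechanically, while slower doubling periods miss it.

    # Compound a guessed 1999 baseline forward to 2009 under different
    # doubling periods for $1000-PC price-performance.
    baseline_gflops = 1.0   # assumed $1000 PC, 1999 (rough guess)
    years = 10              # 1999 -> 2009

    for doubling_months in (12, 18, 24):
        doublings = years * 12 / doubling_months
        gflops = baseline_gflops * 2 ** doublings
        print(f"doubling every {doubling_months} months: {gflops:,.0f} GFLOPS")

    # doubling every 12 months: 1,024 GFLOPS (~1 TFLOPS - the prediction)
    # doubling every 18 months: 102 GFLOPS
    # doubling every 24 months: 32 GFLOPS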

Autonomous nanoengineered machines have been demonstrated and include their own computational controls. - If you read his prediction in context, he's clearly talking about very primitive and experimental efforts in the lab, which we are certainly closing in on:

http://www.kurzweilai.net/automated-drug-design-using-synthe...

http://en.wikipedia.org/wiki/Nadrian_Seeman - If you've been following any of Nadrian Seeman's work on nanobots constructed with DNA, Kurzweil's predictions seem pretty close

http://wyss.harvard.edu/viewpressrelease/101/researchers-cre...

http://www.kurzweilai.net/a-step-toward-creating-a-bio-robot...

http://www.aalto.fi/en/current//news/view/2012-10-18/

Three-dimensional chips are commonly used. - I guess you could quibble over them being a few years late:

http://www.bbc.co.uk/news/technology-17785464

http://www.pcmag.com/article2/0,2817,2384897,00.asp


Okay, I looked at some of the "reasonably unbiased job of grading his own predictions." I'll pick one, for lack of interest in expanding upon everything.

He writes: “Personal computers are available in a wide range of sizes and shapes, and are commonly embedded in clothing and jewelry,” and grades himself: “When I wrote this prediction in the 1990s, portable computers were large heavy devices carried under your arm.”

But I gave two specific counter-examples. The JavaRing from 1998 is a personal computer in a ring, and the TRS-80 pocket computer is a book-sized personal computer from the 1980s, which included BASIC. So the first part, "are available", was true already in 1999. Because "are available" can mean anything from a handful to being in everyone's hand.

Kurzweil then redefines or expands what "personal computer" means, so that modern iPods and smart phones are included. Except that with that widened definition, the cell phone and the beeper are two personal computers which many already had in 1999, yet were not "large heavy devices carried under your arm", and which some used as fashion statements. I considered then rejected the argument that a cell phone which isn't a smart phone doesn't count as a personal computer, because he says that computers in hearing aids and health monitors woven into undergarments are also personal computers, so I see no logic for excluding 1990s-era non-smart phones which are more powerful and capable than a modern hearing aid.

There were something like 750 million cell phone subscribers in the world in 2000, each corresponding to a "personal computer" by this expanded definition of personal computer. By this expanded definition, the 100 million Nintendo Game Boys sold in the 1990s are also personal computers, and the Tamagotchi and other virtual pets of the same era are not only personal computers, but also used as jewelry similar to how some might use an iPod now.

He can't have it both ways. Either a cell phone (and Game Boy and Tamagotchi) from 1999 is a personal computer or a hearing aid from now is not. And if the cell phone, Game Boy, etc. count as personal computers, then they were already "common" by 1999.

Of course, what does "common" mean? In the 1999 Python conference presentation which included the phrase "batteries included", http://www.cl.cam.ac.uk/~fms27/ipc7/ipc7-slides.pdf , the presenter points out that a "regular human being" carries a cell phone "spontaneously." I bought my own cell phone by 1999, and I was about the middle of the pack. That's common. (Compare that to the Wikipedia article on "Three-dimensional integrated circuit" which comments "it is not yet widely used." Just what does 'common' mean?)

Ha-ha! And those slides show that I had forgotten about the whole computer "smartwatch" field, including a programmable Z-80 computer in the 1980s and a smartwatch/cellphone "watch phone" by 1999!

I therefore conclude, without a doubt, that the reason why the prediction that "Personal computers are available in a wide range of sizes and shapes, ... " was true by 2009 was because it was already true in 1999.

As regards the "obscure German R&D project", that's not my point. A Nova-watching geek of the 1980s would have seen the episode about the autonomous car project at CMU. And Kurzweil himself says that the prediction was wrong because he predicted 10 years when he should have said 20 years. But my comment was responding to the enthusiasm of rpm4321 who wrote "Predicting that self-driving cars would occur in ten years in the late 90s is pretty extraordinary, especially if you go to youtube and load up a commercial for Windows 98 and get a flashback of how primitive the tech environment actually was back then."

I don't understand that enthusiasm when 1) the prediction is acknowledged as being wrong, 2) autonomous cars already existed using the 'primitive tech environment' of the 1990s, and 3) the general prediction that it would happen, and was more than science fiction, was widely accepted, at least among those who followed the popular science press.

"I strongly encourage you to fire up Youtube or dust off an old computer, and really try and remember exactly what the tech environment was really like in previous decades for the average consumer"

I started working with computers in 1983. I have rather vivid memories still of using 1200 baud modems, TV screens as monitors, and cassette tapes for data storage. I even wrote code using printer-based teletypes and not glass ttys. My complaint here is that comments like "primitive" denigrate the excellent research which had already been done by 1999, and the effusive admiration for the predictions of 1999 diminishes the extent to which those predictions were already true when they were made.


Man, those are actually some pretty tame predictions, especially if most of them were already in some form of production. I guess the future ain't all it's cracked up to be.


>Kurzweil seems to always get technological capabilities right. Where he sometimes falls flat is in technological adoption - how actual consumers are willing to interact with technology, especially where bureaucracies are involved - see his predictions on the adoption of elearning in the classroom, or using speech recognition as an interface in an office environment.

This is a problem common to other AI pioneers, including Norvig.


TLDR, he exhibits hindsight bias


That's the DR bit.

The TL-Did-actually-read-and-am-summarising-this-for-people-who-won't is exactly the opposite of what you've just stated.

He exhibits foresight bias.


I'm not sure whether you two are referring to the same 'he'.


Text to speech is easy, right? You just get a bunch of white noise and squirt it out of a speaker with a bit of envelope shaping. A short rapid burst is a 'tuh'. Or maybe a 'kuh'. Or a 'puh' or 'duh' or 'buh'. More gentle with a bit of sustain is a 'luh' or 'muh' sound. I had a speech synth on a CP/M computer that did this. You might understand what was being said, if you knew what was being said.
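Here's a minimal sketch of that trick - modern Python and numpy rather than anything a CP/M box could run, and every parameter an illustrative guess rather than a value from any real synth. A white-noise burst with a fast decay envelope passes, barely, for an unvoiced stop consonant; a longer, gentler decay gets you a hiss:

    import wave
    import numpy as np

    RATE = 8000  # 8 kHz, in the spirit of early hardware

    def noise_burst(duration_s, decay):
        # White noise shaped by an exponential decay envelope.
        n = int(RATE * duration_s)
        noise = np.random.uniform(-1.0, 1.0, n)
        envelope = np.exp(-decay * np.arange(n) / RATE)
        return noise * envelope

    tuh = noise_burst(0.05, decay=80.0)   # short, sharp: 'tuh'-ish
    sss = noise_burst(0.30, decay=10.0)   # longer, gentler: a hiss

    samples = np.concatenate([tuh, np.zeros(RATE // 4), sss])
    pcm = (np.clip(samples, -1.0, 1.0) * 32767).astype(np.int16)

    with wave.open("burst.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)    # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(pcm.tobytes())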

People had lists of phonemes and improved those.

Then people experimented with different waveforms.

Here's a collection of different voices. (Poor quality sound, unfortunately.) (http://www.youtube.com/watch?v=aFQOYBNAMHg)

Why did all those people take so long to make the jump to diphones, to smoothing out the joins between individual phonemes?
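For what it's worth, the smoothing itself is tiny in code. A toy version, assuming two phoneme waveforms already in hand as numpy arrays, just crossfades over the seam; real diphone synthesis records the transitions themselves, so this only shows why a smoothed join beats a hard cut:

    import numpy as np

    def hard_join(a, b):
        # Butting two waveforms together leaves a discontinuity at the
        # seam, heard as a click.
        return np.concatenate([a, b])

    def crossfade_join(a, b, overlap):
        # Fade a out while fading b in over `overlap` samples, so the
        # seam stays continuous.
        fade = np.linspace(1.0, 0.0, overlap)
        seam = a[-overlap:] * fade + b[:overlap] * (1.0 - fade)
        return np.concatenate([a[:-overlap], seam, b[overlap:]])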

You had the Japanese with their '5th generation' research who were physically modelling the human mouth, tongue, and larynx, and blowing air through it. (You don't hear much about the Japanese 5th generation stuff nowadays. I'd be interested if there's a list of things that come from that research anywhere.)

Saying "talking computers" is easy; doing it is tricky.

EDIT: (http://www.japan-101.com/business/fifth_generation_computer....)

> By any measure the project was an abject failure. At the end of the ten year period they had burned through over 50 billion yen and the program was terminated without having met its goals. The workstations had no appeal in a market where single-CPU systems could outrun them, the software systems never worked, and the entire concept was then made obsolete by the internet.


This has been posted before and it goes a long way toward explaining why your reaction may not be the most appropriate one:

http://lesswrong.com/lw/im/hindsight_devalues_science/


I'm probably preaching to the wrong crowd here, but I'm not speaking from hindsight bias. I mean, even in the early days others were making similar predictions - not all of them were as vocal though. Also, I'm pretty sure he did not single-handedly invent all the listed things before anyone else had even thought of them -- as is the case with any invention, you probably have a few thousand people thinking about the idea or researching it before one person steps forward with a good implementation -- and I'm sure Kurzweil found inspiration in his colleagues' work, and there were probably earlier implementations of his ideas.

This does not mean he was not smart; I am simply stating a general truth: there are few "original" inventions, and many "obvious" inventions. If you do not think these things were obvious, how long do you think it would have taken for the next implementation to appear? I would bet 1-3 years at most. No single human is that extraordinary -- some just work harder than others at becoming visible.


>Not to be a downer, but text-to-speech, speech recognition, music synthesis, and so forth are all fairly obvious applications of computer science that anyone could have pioneered without being a genius. Likewise, predicting self-driving cars is nothing science fiction has not already done.

What does "predicting" a thing has with actually IMPLEMENTING it? Here, I predict "1000 days runtime per charge laptop batteries". Should I get a patent for this "prediction"?

No, text-to-speech, speech recognition and synthesis are not "fairly obvious applications of computer science that anyone could have pioneered". And even if it was so, to be involved in the pioneering of ALL three takes some kind of genius.

Not only that, but all three fields are quite open today, and far from complete. Speech recognition in particular is extremely limited even today.

Plus, you'd be surprised how many "anyone" scientists failed to pioneer such (or even more) "obvious" applications. Heck, the Incas didn't even have wheels.

(That said, I don't consider Kurzweil's current ideas re: Singularity and "immortality" impressive. He sounds more like the archetypal rich guy (from the Pharaohs to Howard Hughes) trying to cheat death (which is a valid pursuit, I guess) than a scientist.)



