Isn't making a robot that looks like a human to replace a human's job a bit like making a mechanical horse to fulfill our transportation needs?
Having worked in robotics for years, I can say the amount of setup work that goes into installing fully functional hardware and software and getting a robotic process running smoothly is enormous. The basic idea is that everything (the robot, the hardware, the firmware/software, the end-effector, the workpieces, the sensors, and so on) is rigidly defined and over-spec'd, so that after all the tolerance stackup and all the integration and process-debugging work, you set it up and DON'T CHANGE IT for as long as possible. The chaos that ensues from one little component changing its behavior can be enormous.
The notion of "smart" robots that you can just drop in place, that handle all sorts of unknowns and adjust themselves to change, has always seemed to me like a really, really big challenge: maybe not as hard as a driverless car, but definitely more of a "general AI" problem.
I'm sure someone has coined the term, but there must be some kind of "uncanny valley" of intelligence: a little intelligence (e.g. the PID controllers that actually run robots) is great, a lot of intelligence (full-blown general AI) is great if you can get it, but what's in the middle may not be worthwhile. Getting the answer correct 99% of the time doesn't work if you need a 99.9% success rate.
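To see why that last gap matters, it helps to remember that per-cycle error rates compound. A quick back-of-the-envelope sketch (the 1,000-cycles-per-shift figure is just an illustrative assumption, and it assumes cycles fail independently):

```python
# Sketch: if each cycle of a robotic task succeeds independently with
# probability p, the chance of an error-free run of n cycles is p**n.
n = 1000  # hypothetical cycles per shift

p_clean_99 = 0.99 ** n    # flawless shift at 99% per-cycle success
p_clean_999 = 0.999 ** n  # flawless shift at 99.9% per-cycle success

print(f"99%   per cycle -> {p_clean_99:.5f} chance of a clean shift")
print(f"99.9% per cycle -> {p_clean_999:.3f} chance of a clean shift")
```

At 99% per cycle a clean shift is essentially impossible (odds of a few in 100,000), while at 99.9% it happens better than a third of the time. A tenth of a percent per step is an order of magnitude at the end of the line.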
From an investment standpoint, I would be looking for companies with a REALLY specific, well-defined problem that "medium AI" could solve, rather than someone claiming to take medium AI and apply it vaguely and generally.
I guess that's my takeaway from this: work on specifying the problem before you work on the solution.
I see current AI to be like donkeys, and the middling AI you speak of as chimps. There's a reason we domesticated donkeys and not chimps.
The Autopilot feature in Teslas is a lot like a donkey. It mostly handles itself, but it's stupid and needs a lot of monitoring. Using Autopilot feels a lot like sitting on a cart and tugging the donkey's reins every once in a while.
"Medium AI" is definitely useful, as long as you have the correct interfacing and apply it to the correct problems.
Obviously you can't just slap it onto an AI-complete problem and have medium AI perform well enough to ship.
We are probably roughly in agreement here, but I disagree that there's a significant uncanny-valley effect in practice. It's just hard for those outside the field to intuit the capabilities of medium AI.