Why Google Wants to Sell Its Robots: Reality Is Hard (bloomberg.com)
168 points by antman on March 19, 2016 | 118 comments


Out of all the reasons I've read in the media, I really only see one legitimate (non-FUD) possible reason for Google selling Boston Dynamics: they didn't integrate into the company well. Not building the products Google envisions, and not working with Google's other robotic divisions, both seem like really glaring issues that would motivate splitting off Boston Dynamics.

Google has many divisions working on unprofitable long term research projects, so clearly that's not the reason. I don't think it's fear of humanoid AI - this doesn't seem like something the founders of Google would fear.


This. Another article that was more in depth said that the other robotics companies Google bought were being integrated into other divisions. Boston Dynamics is the only one being sold off. For whatever reasons they didn't mesh well with the rest of Google.


It is possible that Boston Dynamics is still very tight with military contractors and technologies which may prevent separation. Google made it clear it does not want these contracts and associations.

What if a chip/software chunk in Atlas was made by Raytheon and any modification needs to be shared back? Or access is not even possible?


> Google made it clear it does not want these contracts and associations.

They did? Google still does DoD contracts for supplying Google Maps among other things. I figured they didn't care.


Just my personal opinion, but I think Boston Dynamics robots show a lot of potential in the near future for military uses. However, seeing BigDogs on the battlefield could generate pretty bad PR for Google, a company whose motto is, still, do no evil.


I'm inclined to agree. Google probably does not want to be the company that creates Terminators for the Pentagon. Watching the last BD demo on YouTube, seeing that humanoid stomp through the forest, I couldn't help but joke how terrifying it would be to see one of those things holding a machine gun.


> I couldn't help but joke how terrifying it would be to see one of those things holding a machine gun.

Funny thing is I would be far less terrified of it than a human if it were holding a machine gun because its human-like movement is so incredibly slow I would feel like I actually had a chance (even if I didn't).

However, if they embedded the machine gun into the bot itself (which I think is far more likely) then you could have instant targeting which shifts the terrifying back to the bot. It would also not look like it even has a gun depending on how they incorporated it and it could fire in multiple directions at once.

Ah, the future of war. So efficient and terrifying.


There are already automated "click to kill" turrets. They have a thermal camera and a gun, and do automated target detection and automated killing. It's only due to customer demand that they keep a human in the loop clicking the kill button. Having been within the firing range of one of those, I can tell you it's a really terrifying feeling.


Which company makes these?


I think Samsung has a division that makes them for the ROK military (not being facetious, but possibly being incorrect)



> Google, a company whose motto is, still, do no evil.

Seems to have been changed to "Do the right thing" to some extent when they changed to Alphabet [1].

Interestingly enough, I think that restructure also helps obfuscate where Google products are used. It's unlikely that any military robots would be Google branded. Instead they would probably fall under a relatively obscure Alphabet company that the average person would never relate back to Google.

---

[1] http://www.engadget.com/2015/10/02/alphabet-do-the-right-thi...


> a company whose motto is, do no evil

Actually, it was 'don't be evil'. They've emphasized the difference in the past; but in any case, Alphabet replaced it with "Do the Right Thing"[1].

But yeah; Alphabet is probably unlikely to want to be associated with the US military.

[1] https://en.wikipedia.org/wiki/Don%27t_be_evil


They're already associated with the US government, it makes little sense for them to want to distance themselves from the military.


That's not their motto anymore, now it's "Do the right thing" which is more ambiguous in my opinion...

https://en.wikipedia.org/wiki/Don%27t_be_evil#The_End_of_.22...


When they bought BD, they said they would "honor existing military contracts," but that Google did not plan to become "a military contractor on its own."

http://www.nytimes.com/2013/12/14/technology/google-adds-to-...

http://www.pcworld.com/article/2456240/under-google-robot-ma...

http://www.extremetech.com/extreme/185570-google-finally-pro...


Yeah, I agree. From the title, "Reality is hard," it seems like the media is portraying this as a failure for the advancement of robotics, whereas I suspect Google is becoming more confident about how robotics and AI will best be implemented, and Boston Dynamics is failing to follow that vision.

> executives discussed the viability of AI techniques like teaching robots to do physical tasks, and how the Boston Dynamics group needed to collaborate more with other Google teams.


Seems to me Boston Dynamics was at the forefront of AI/robotics progress, too. Reality might be hard, but Boston Dynamics was definitely going somewhere.


Was it really, though? What I've read seems to indicate a lot of the algorithms were focused more on the typical top-down planning/searching paradigm, rather than the approach that is in vogue in AI today, which is learning algorithms for robots to learn behaviors.

I could easily see a scenario in which researchers at Google HQ tried to shift the research focus of Boston Dynamics and were told off by the researchers at Boston, though this is just wild speculation.


IMO you need both traditional top-down search and planning as well as deep neural networks. AlphaGo is a good example of that kind of mix.

Furthermore, the mechanical engineering problems in robotics are important too. We currently have relatively bad robot platforms. Boston Dynamics has made important progress in this area as well.


IMO, Boston Dynamics is working on hard problems that have small mid-term value. If your options are $100 billion in R&D or using wheels, the obvious choice 98% of the time is to use wheels.

Wheels are quiet, low-maintenance, lower-energy, well understood, etc. Stairs seem like a big deal, but the Roomba demonstrates that if robots are cheap you can just have another one upstairs.


I thought the primary focus for Boston Dynamics was military applications. I can see Google wanting to distance themselves from that industry.


If that was the case then why buy them in the first place?

Change of heart?


maybe, or maybe this, Dec 2015, when the US military scrapped plans to invest in Big Dog http://www.headlines-news.com/2015/12/30/698664/defence-chie...


Irrational exuberance and/or lack of proper due diligence as to the extent of the ties. It must be great to have so much spare cash that you can do a transaction like this the way I might buy a TV and sell it on eBay if it doesn't do what I thought it would.


Access to all of BD's secrets.


Agreed. I'm just glad some of the knowledge and know-how was absorbed by Google before BD ends up as part of one of the military behemoths.


Yes, the whole smokescreen about "All our divisions need to have a near-term profitability plan" is just completely unbelievable. For one thing, Alphabet can certainly afford a few pie-in-the-sky long shot projects, and for another, they don't seem to have a problem funding Go AIs with an even farther payoff horizon.

My guess is that the Boston Dynamics folks wanted to work on bipedal robots, and the execs back on the West Coast were worried about PR fallout.

That, and it's not that easy to sell "Don't be evil" as your company motto when you're building unstoppable steampunk automatons for the DoD.


Erm, AlphaGo's payoff in technical terms is maybe far off, yes, but the PR was huge!


That's sort of my point... the PR they got from the recent Boston Dynamics video was significantly less favorable. So DeepMind stays in the family, while Boston Dynamics is out.

According to some sources, they paid around $400 million for DeepMind. Boston Dynamics only cost them a bit more than that. They could have kept both of them running indefinitely.


It's pretty easy to imagine conflict arising from how Google wants the software to evolve versus how BD has been doing it up to now. And that there could be huge resistance inside BD to disregarding their IPR or even practical/talent implications that were not obvious at the start.


Remember Google Glass. Google is worried about public rejection of uncanny-valley-scary robotics/AI. That could undermine acceptance of its AI projects.


On the other hand, it might help their efforts of making people skeptical of the big bad Singularity in which AIs make us their pets. But maybe commercial interests superseded their Singularity misgivings.


That's one good reason. Another is that walking isn't as valuable as it is amazing. Walking robots were one of the first dreams ARPA was created to pursue, because then you could make walking tanks that could walk through forests in Europe and fight Soviet Bloc tanks.

That mission isn't as important now, and it's hard to come up with a market for walking robots.


It could be because they didn't see much value in their technology...


"To develop robots, you have two options: You can either simulate an environment and robot with software and hope the results are accurate enough that you can load it into a machine and watch it walk."

This is much harder than you'd think.

Here's a fun story from 15 years ago when a friend of mine tried to do some simple AI:

His goal was to have a humanoid shape created in software learn how to walk using simple AI. The idea was that it would obey some basic laws (gravity, the limits of its joints, etc.), do something random, check whether or not it was closer to the goal of walking, tweak its parameters, and iterate. The chosen goal was not to fall over.

He set up the program and let it run over the weekend, doing millions of tries. Hopefully, when he came back to the office, it would have learned how to walk, or at least to stand up without falling.

His disappointment was huge when he came into the office: The simulated robot was sitting down with its knees bent, thus having achieved the goal of not falling over.
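The anecdote is easy to reproduce. Here's a toy sketch of my own (not the friend's actual program): a hill-climbing loop whose only fitness signal is "don't fall over", with the body reduced to a torso height and a stride length. Standing tall while striding is wobbly; crouching still is perfectly stable, so the search converges on sitting, not walking.

```python
import random

# Toy reconstruction of the anecdote (my own sketch, not the original
# program): hill-climbing with a fitness of "don't fall over". Standing
# tall while striding is wobbly; crouching motionless is stable, so the
# search happily converges on "sitting down" rather than walking.

def stability(torso, stride):
    wobble = torso * abs(stride)   # tall and moving => likely to fall
    return -wobble                 # fitness to maximize: stability

def hill_climb(steps=10_000, seed=0):
    rng = random.Random(seed)
    torso, stride = 1.0, 0.5       # start upright, mid-stride
    best = stability(torso, stride)
    for _ in range(steps):
        t = min(1.0, max(0.0, torso + rng.uniform(-0.05, 0.05)))
        s = min(1.0, max(-1.0, stride + rng.uniform(-0.05, 0.05)))
        f = stability(t, s)
        if f >= best:              # keep any tweak that's no worse
            torso, stride, best = t, s, f
    return torso, stride

torso, stride = hill_climb()
# The optimizer ends up crouched and/or motionless: the goal of not
# falling over, achieved without ever walking.
print(round(torso * abs(stride), 3))
```

Nothing about the fitness function rewards forward motion, so the degenerate optimum is inevitable; the bug is in the objective, not the search.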


I've heard a similar story about small self driving cars.

The cars would drive around using a random algorithm, then copy and tweak the algorithm of the longest running car when they crashed. The researcher left the room to let the cars work, only to come back and find that each of the cars had deduced that the perfect solution was to remain perfectly still. After all, if they didn't move, they couldn't crash!


The only winning move is not to play - WOPR A.k.a. Joshua


That reminds me of the story of a game-learning program that found the best way to "not lose" at Tetris was to pause right before a brick extended above the top of the level, and leave it paused indefinitely.


I don't know if that's the best illustration - it seems like he forgot to include forward motion in his fitness function.


Yes...it's not that hard. There are literally dozens of genetic algorithm simulators available to run in your browser which do exactly that.


15 years ago it was rather hard....


I mean - not that hard to reason about. And it wasn't rather hard 15 years ago either.


Funny anecdote, but "artificial intelligence" (which people misuse as a fancy term when what they really mean is task optimization) requires setting the right goals. You know, like moving from point A to point B.


Yes, seems like more anecdote than reality. In reality, you wouldn't just 'code an AI', leave it running for the weekend, and then act all surprised when it has bugs in it. You'd work your way through e.g. Sutton's tasks, like driving a car up a hill https://en.wikipedia.org/wiki/Mountain_Car , trying not to fall off a cliff http://webdocs.cs.ualberta.ca/~sutton/book/ebook/node65.html , balancing an arm http://webdocs.cs.ualberta.ca/~sutton/book/ebook/node110.htm... , and so on. And since these mostly learn really quickly, and one's initial implementation will be buggy, you wouldn't go away for a weekend and leave it running; you'd just sit there running it for a minute or two, fixing bugs, running again, and so on.
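For a sense of the scale involved: a task in that spirit fits in a page and trains in well under a second. This is a generic tabular Q-learning sketch on a made-up one-dimensional corridor (my own toy, not one of Sutton's exact tasks):

```python
import random

# Generic tabular Q-learning on a made-up 1-D corridor: reward 1 for
# reaching the right end, epsilon-greedy exploration with random
# tie-breaking so the untrained agent random-walks instead of stalling.

N = 10                        # cells 0..N-1; the goal is cell N-1
MOVES = (-1, +1)              # action 0: step left, action 1: step right

def pick(Q, s, eps, rng):
    if rng.random() < eps or Q[s][0] == Q[s][1]:
        return rng.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N)]
    for _ in range(episodes):
        s = N // 2
        for _ in range(1000):            # step cap per episode
            a = pick(Q, s, eps, rng)
            s2 = min(N - 1, max(0, s + MOVES[a]))
            r = 1.0 if s2 == N - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == N - 1:
                break
    return Q

Q = train()
# After training, the greedy action at the start cell points right,
# toward the goal.
print(Q[N // 2][1] > Q[N // 2][0])
```

Because each run takes milliseconds, the fix-run-fix loop the parent describes is the natural workflow; leaving it unattended for a weekend buys you nothing.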


It is actually possible to do this (the gaits are discovered, not programmed):

https://www.youtube.com/watch?v=yci5FuI1ovk


Wow, that's a great video.

The generation-comparison section (0:55) and the outtakes (4:50) were my favorites.


Reaching zen in a weekend. Overachiever.


Hmm.... I now want to see if I can train a neural net to play QWOP.


That would be incredible, and incredibly popular


Fortunately, this is not an issue in reinforcement learning any more. It's quite simple to train humanoids in simulators to walk. But that knowledge is completely untransferable to the real world.


I don't know if it's because the author is trying to be accessible or genuinely doesn't understand AI, but the writing egregiously understates the difficulty of human-level machines.

If I had to summarize what the article said: "Google tried these robots but are selling them because it turns out to be hard to make robots that are as capable as humans." Which is like saying "it turns out that making humans immortal is really hard."

I doubt that is why Google is selling BD. My guess is that it was a combination of things: BD was a cost center with no commercialization roadmap, someone on the board got spooked about stupid AI risks, and with the AlphaGo wins it might start looking too scary to the public.

I think it was a terrible idea for Google, but it's great for the robotics world that the behemoth is scaling back from totally taking over the world.


Hi there, author here -- mostly trying to be accessible. And I think in the tech community it is well understood that robotics is hugely difficult, but it seems like the general public pretty much equates progress in software AI with progress in robots, which is clearly not the case. We thought it might be helpful to highlight this to people who aren't hugely technical.


Thanks and it makes good sense in that context.

That said, I do fundamentally believe that there are some things that the "general public" will never wrap their heads around. Feynman had a great take on this when trying to explain why he can't just describe magnetic forces [1]. I'd be curious to hear your take on that.

[1] https://www.youtube.com/watch?v=qjmtJpzoW0o


I'm an optimist in this area - people like Wait But Why and XKCD (eg - https://xkcd.com/thing-explainer/) show that most complex subjects can be explained in simple language that most people with an average education can understand. (An aside: I recently wrote an article about the use of D-Wave quantum computers on Wall Street, and that experience gave me a sense of how tremendously difficult it can be to simply summarize an inherently technical topic. I estimate it took about five hours of study to be able to write a couple of accurate sentences I felt comfortable with.) Whether the general public has the inclination to take the time to understand this stuff is another question entirely!


I think Google is selling BD (and is putting all of the BD propaganda out) because the BD robots are 1) fairly useless outside of military applications and 2) BD is a ghost town because Andy left and took a lot of the brainpower with him.

For Google, selling BD now is probably the last possible time BD will be worth money.


They also admitted autonomous cars are going to take much longer than anticipated. Up to 30 years.

Perhaps the car situation and this are forcing them to rethink some of their plans.


Obviously. People who think general purpose self driving cars are just a few years out ignore the basic tenant of engineering: the last 10% is 90% of the work.

And with stuff like supersonic planes or Mars travel, the last 10% may take an indefinite amount of time.


I'm pretty optimistic that supersonic planes are possible.


Engineering has tenants? That I did not know.


That's twisting what they said. Google (and everyone else working on self-driving) is still targeting a 2020 timeframe for autonomous vehicles to be ready for the public [1]. However, the first cars that hit the road will likely be somewhat limited to certain geographies and weather conditions until all the corner cases are figured out and the sensor tech gets a bit better.

I think it's more likely that BD's robotics plans just don't mesh with Google's goals since BD is mostly focused on military tech, and Google is not. It just doesn't fit in well with the kinds of things X and the other Alphabet hardware teams are working on.

[1] http://www.ibtimes.com/google-inc-says-self-driving-car-will...


That older link looks outdated based on this newer information.

Google admitted now that it will take 5-30 years to achieve the full goals here. Also, that this technology will show up in pieces - cars will become more and more autonomous. And that makes sense, we already have cars from multiple manufacturers that can drive autonomously on highways, for example.

The new information Google provided is that the full "no gas pedal, no brake pedal, can drive anywhere" dream of a fully autonomous car will take up to 30 years to arrive.


> The new information Google provided is that the full "no gas pedal, no brake pedal, can drive anywhere" dream of a fully autonomous car will take up to 30 years to arrive.

The thing is, it doesn't really matter what Google thinks. There are six billion people on the planet, and it only takes one governor or president to decide that some local company's autonomous cars are "good enough" to serve as driverless cabs and they decide they want to reap the economic benefit of being a first mover and all of the sudden it's a geopolitical arms race.

We're already past the point where autonomous cars can drive many routes more safely than a typical human. That's just a fact of the universe. You can speculate that the entire world's political forces will form a perfect coalition to prevent the removal of steering wheels from those cars for 30 years, but I don't think that's how politics works.


What economic benefits? Seriously, I don't see any. I mean, I see some for the producers of said cars, but not for a city that mandates only self-driving taxis.


Fewer workers dying. Death is reverse immigration. Also getting two additional hours of productivity for commuters.


A former Google AI researcher speculated on my Facebook feed that Google might want to use their DeepMind tech across all their AI ventures, and that Boston Dynamics, having taken a very different path, might not be in an easy position to switch over.


I always assumed practical non-trundling robotics would have a "Don't fall over" layer, a "walk this way" layer, and a "Let's see what's over there" layer - much like the layers in a human brain.

Deepmind would be good for strategy and goal setting, but maybe not so great at the millisecond-to-millisecond control needed to not fall over - which is the layer BD seem to have solved.

So I would have expected a natural synergy.


That's about right. Getting through the next few seconds in life is mostly about not falling over and not bumping into stuff. Once you have that, you can back-seat drive it with goal oriented systems.

On the other hand, using machine learning to tune your lower level control loops is extremely useful. (I used to think this was a path to AI, and went on a long detour through adaptive feedforward control and system identification. But in the end, machine learning did more for control theory than control theory did for machine learning.)


I can't find a source for this, where did they say that?


http://spectrum.ieee.org/cars-that-think/transportation/self...

What they said is that it may be restricted to certain scenarios initially. A go anywhere self driving car may take longer.


A lot of people here jump on the robo-taxi idea whenever the topic of autonomous vehicles comes up. I think it's fair to say that most of the people deeply familiar with the issues peg that as being decades out. On the other hand, we seem to be very close to very good assistive driving systems, and pretty close to (at least the potential of) turning over control to those systems under at least certain types of highway driving.

That's actually a very desirable thing given the number of highway accidents caused by driver fatigue and inattentively plowing into the car ahead. It just doesn't lead to all the visions of cars-on-demand etc. that get people all excited.


Not so. I work in the industry and most people agree that first-gen robotaxis will hit the streets of LA and SV around 2020. Obviously these locations are ideal for the use case since they have nice weather, flat terrain, and generally decent roads. It will be years later that a robotaxi picks you up on a snowy street in Manhattan, but this is a situation where incremental launches and improvements make sense - some markets are easier than others, so you can launch to those markets where autonomous is good enough.


As someone who works in the industry, can you explain the good reasons for choosing 2020 as the estimate?

(As opposed to this: https://xkcd.com/678/ or just hopeful thinking)


As someone who works in a big company, for some reason every long-term plan is touted this-2020 and that-2020. It's just easier to sell as a slogan and may influence real deadlines. That is until probably 2018, when everything will get named those-2025.


Five years out is roughly the confluence between "close enough to be interesting" with "of course, it's not quite ready yet."


That would work out nicely for the current generation of professional drivers, and there would be enough warning time for future drivers to be OK when the moment comes.


I think Google simply concluded that BD's hydraulics-based approach was a technological dead end, and that there was little to be gained from trying to retool the existing, highly-specialized team. And once the DoD said they wouldn't use it, there was no customer on the horizon to offset the ongoing cost. So there was no short or long term reason to keep it.


The DARPA robots are fully autonomous in 2.1D. Cars have far fewer degrees of freedom, and their "environment" is heavily subsidized.

Why are drones (a 3D evolution) and electric trains (a 1.xD evolution) easy to build?

Well, in the case of drones, makers don't care about limiting the movements, so the feedback rules are easy. In the case of trains, it is even easier.

It is all about the size of the decision tree and the number of coupled inputs (sensors) and outputs (effectors) you need to control. There probably is a metric that gives you the domain of "accessible" low-hanging fruit of automation, set according to the domain.

General-purpose automatons are at best expensive, at worst a scam (see the Mechanical Turk).

One way to make bots efficient is to specialize them. Hence the Jacquard loom, which helped create the industrial revolution of the 1830s, set workers rioting, and created some of the conditions for WWI.

I guess no one saw that the problem of efficiency still exists even with an infinite R&D budget.

The problem with robots is that, by requiring quite a lot of investment for their deployment, they set up an unfair competition between people backed by capital and innovative self-made men without capital.

That was the raison d'être of the Luddites.


I haven't bought completely into all the AI hype, so take this with a grain of salt, but I feel like on a scale of 1-10, with 10 being human, we are at a 3 or 4. Boston Dynamics is trying to do things at level 8 with tools still stuck at the lower levels of AI.

If there is a massive economic opportunity for AI tech that's available now, it's figuring out how to get it out of SV and making it useful for small businesses across the country. There are literally thousands of tasks that can be done faster and better using AI, but the problem is that to use AI, you need someone skilled in basic programming to spend a week teaching themselves the basics before they can even begin to think about solving real business problems. Simplifying AI tools to the point where someone with basic Excel/web browser skills can learn and apply this tech to their business operations in a few hours' time will be key to unlocking this opportunity.


I'm sure "machine learning for MBAs" will be a thing very soon if the current trajectory holds.

Using a scale of 1-10 to rate the "human-ness" of AI belies the complexity of the situation. Computers are already better than humans at a huge number of tasks. In other areas, they haven't reached the level of an infant, or even a mouse.


Already a thing :) Actually, Chicago Booth's MBA program has always been very heavy on statistics and now they have an ML class as well.


I'm not talking about teaching ML or AI to MBAs; I'm talking about making the technology easy enough to use that it is accessible to practically anyone. ML can have lots of benefits for many small businesses where parts of their operations are highly repeatable, but variable enough that they can't necessarily be automated away. Figuring out how to use ML in cases like this can improve quality control, efficiency, and a host of other factors, but owners of many small businesses don't have the technical skills or time to invest in learning AI. What they do have is a few thousand dollars to invest in something that could make their business better, but there needs to be a WordPress or Excel of AI to bring the technology to them.


I'm curious. Would you explain why we are at 3-4? And, using that scale, how much more do we need for a commercially available autonomous car, and why?


Well, a car operates in only two dimensions and has one given goal of transport from point A to B, so a car would maybe be a 5 or a 6. Keep in mind this scale is how close we are to actual human simulation. Human brains deal with setting their own goals, dealing with conflicting information in pursuit of those goals, making trade-offs, a constant influx of new information that affects variables in the decision-making process, etc., all of which have yet to be adequately solved by AI. As the article points out, there are lots of mechanical/hardware complexities that further complicate the software with humanoid robotics, and don't forget the average worker has 30-40 years of life experience (i.e. learning) that a robot would need to somehow replicate. I'm not saying these problems won't eventually be solved, but AI tech has some tools that are really useful right now, yet most people don't know how to use them. Instead of trying to skip up the ladder from step 4 to step 8, let's figure out ways to introduce step 4 to more people, and then we'll have more brains trying to solve the problems at steps 5, 6, and 7 to help us get to 8. Everyone will also make more money along the way :-)


I have a hard time believing that Google sold Boston Dynamics because they didn't want to be associated with military applications or with "evil" AI. Boston Dynamics had deep relationships with the government and clear military applications when Google decided to acquire them. I don't see how the PR situation would have changed between now and then.

This seems like a case of a merger/acquisition that just didn't work out. Hopefully it's for the best. The military robots that BD makes have the potential to save a lot of lives and do good, in combat and non-combat situations. Ultimately, Google/Alphabet is an advertising company. Maybe it's better for Boston Dynamics to go their own way. They can still maintain a relationship with Google and other companies to share AI knowledge and technology.


> The military robots that BD makes have the potential to save a lot of lives and do good, in combat and non-combat situations.

This statement somehow felt more scary than all the terminator jokes...


"Google’s decision to try to shed its Boston Dynamics robotics group highlights a fundamental research problem: software is far easier to develop and test than hardware. "

This is an interesting statement, because I assumed that it was the software that ran the robots that was hard to get right.


I could see a market for a Google Hydraulic Robodog that follows me when jogging, fending off and biting aggressive real dogs until I call it with "Ok Google, don't be evil!".


Except you would have to scream "OK GOOGLE" at it three or four times before it realized you were speaking. Then it would misinterpret "don't be evil" as "donut, be evil" after which it would kindly grab some donuts for you and then enslave humanity.


Yeah, but would there be a market for it when the robots cost as much as a car?


Having a self driving car follow me to fend off the dogs doesn't work on my running path though.


From the article:

"But Boston Dynamics’s creations were not quite as advanced as people assumed. The main problem the company had solved was getting its machines to move in a realistic manner, said a person familiar with the company’s technology, but full autonomy is far away. Marc Raibert, the founder of Boston Dynamics, said as much in an interview with IEEE Spectrum in February, when he acknowledged that in the videos, a human steered the robot via radio during its outside strolls."

So basically what they created was fancy R/C toys, not real autonomous and potentially beneficial robots that could do real work? Creating realistic movement is a hard problem, as are the associated electronics and algorithms to move from point A to point B, but it's hardly novel.

Maybe this, coupled with the fact a few others have mentioned that the company was poor at working with other divisions, is the reason why they want to sell it?


Have you seen the robots they developed? Reducing it to "moving from point A to B" is misleading, in my opinion, even if technically true. They essentially developed machines that could move in difficult terrains, often where humans themselves couldn't.

My favorite is the Sand Flea, a small car which can jump 9 meters (30 feet) into the air: https://www.youtube.com/watch?v=6b4ZZQkcNEo


"X is hard" is the new Considered Harmful. The newest thought-terminating cliché.


Not using clichés is hard.


Also, just as when BMW bought Land Rover and then sold it a few years later: Google has learned what they needed to learn from Boston Dynamics, and, I assume, had a chance to siphon off any talent they wanted too. Time to move on with the key assets.


Similar to what Google did with Motorola Mobility.


"...but full autonomy is far away. Marc Raibert, the founder of Boston Dynamics, said as much in an interview with IEEE Spectrum in February, when he acknowledged that in the videos, a human steered the robot via radio during its outside strolls."

This seems kinda sketchy to me. It's possible we'll fall into a Robotics Winter, similar to the AI Winter due to these over-inflated claims of advancement.


I looked up the quote: "Raibert said that for the outdoor scenes, a human provides general steering via radio while the robot uses its stereo and LIDAR sensors to adjust to terrain variations. ATLAS also does its own balance and motion control. "

This kind of navigation-level steering is an easy problem compared to stable walking over rough terrain and recovery from kicks and shoves.
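To make the split concrete, here's a toy sketch (illustrative made-up numbers, nothing from Atlas): a fast PD loop keeps a linearized inverted pendulum upright at 1 kHz, while a slow 10 Hz "steering" channel merely nudges the target lean. The inner loop is where all the hard real-time work lives.

```python
# Toy sketch of the architecture described above: a 1 kHz inner PD loop
# balances a linearized inverted pendulum ("don't fall over"), while a
# 10 Hz outer command nudges the target lean ("steer this way"). Gains
# and dynamics are made-up illustrative numbers, not Atlas's.

DT = 0.001                    # inner balance loop runs at 1 kHz
G_OVER_L = 9.81               # gravity / pendulum length (l = 1 m)

def simulate(seconds=5.0, kp=60.0, kd=12.0):
    theta, omega = 0.05, 0.0  # initial lean angle (rad) and lean rate
    target = 0.0
    for i in range(int(seconds / DT)):
        if i % 100 == 0:      # slow "human steering" channel (10 Hz)
            target = 0.02 if i < 2500 else 0.0   # lean forward, then stop
        u = kp * (target - theta) - kd * omega   # fast PD balance torque
        alpha = G_OVER_L * theta + u  # gravity destabilizes, u corrects
        omega += alpha * DT
        theta += omega * DT
    return theta

# The pendulum tracks the coarse steering and never falls; the inner
# loop does all the millisecond-scale work.
print(abs(simulate()) < 1e-3)
```

Swapping the 10 Hz channel between a human on a radio and a planner is trivial; building an inner loop that survives rough terrain and shoves is the part BD actually solved.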


Maybe, but the videos (and surrounding media) were presented in a way which implied the robot was autonomous, finding its own path through the terrain.

It may not be intentionally misleading, but still misleading.


Compare it to that Watson/Carrie Fisher commercial, where Watson is presented as being able to carry on a conversation. It can't really do that, which is also misleading.


I wonder what Google's thinking was when they bought Boston Dynamics, if they're now selling because of the long timeframe. Accelerating that timeframe, helping make the robots autonomous using their other AI groups, and being ready to launch when the technology matured: those are the reasons I assumed they made the acquisition.


They may just have wanted a good, clear look at the state of the technology and then, having gotten it, decided it wasn't interesting.

They could also be keeping the interesting bits for themselves.


The mention of groups of robots learning together by exposing themselves to the real world and sharing lessons is very interesting to consider. I wonder if this is inspired by ants searching for food and leaving scent trails. Are there success stories of this approach in robotics already?
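The ant analogy is stigmergy: agents coordinate indirectly by modifying a shared environment rather than talking to each other. As a toy illustration (nothing from the article; all names and parameters here are invented), here is a one-dimensional sketch where agents random-walk from a nest toward food, biased by a shared "pheromone" map that successful trips reinforce and that slowly evaporates:

```python
import random

GOAL = 10                        # position of the food; nest is at 0
pheromone = [1.0] * (GOAL + 1)   # shared scent trail, index = position

def walk(rng, max_steps=200):
    """One agent's trip; returns the path if it reaches the food, else None."""
    pos, path = 0, [0]
    for _ in range(max_steps):
        fwd, back = pos + 1, max(pos - 1, 0)
        # Step forward with probability proportional to the forward pheromone.
        w_fwd, w_back = pheromone[fwd], pheromone[back]
        pos = fwd if rng.random() < w_fwd / (w_fwd + w_back) else back
        path.append(pos)
        if pos == GOAL:
            return path
    return None

def run_colony(n_agents=300, seed=1):
    rng = random.Random(seed)
    lengths = []
    for _ in range(n_agents):
        path = walk(rng)
        if path:
            lengths.append(len(path))
            for p in set(path):          # successful trails get reinforced,
                pheromone[p] += 1.0 / len(path)  # shorter trails more strongly
        for i in range(GOAL + 1):        # evaporation lets stale trails fade
            pheromone[i] *= 0.99
    return lengths

lengths = run_colony()
```

No agent knows anything beyond its own position, yet later agents tend to find the food faster because earlier successes are written into the shared map. Robots sharing learned experience would be the same pattern with a model in place of the pheromone array.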


The comment "the division’s leader, Jonathan Rosenberg, said the company needed 'to have a debate on hydraulics.'" is significant. Google also owns Schaft, which has an all-electric humanoid with water-cooled motors that are heavily overdriven for brief periods using ultracapacitors to get a power boost.

Boston Dynamics' robots are hydraulic, using proportional valves controlled by high speed servoloops. This works, but it has lots of disadvantages. The energy efficiency is poor when you don't need full power. There's no energy recovery. The system is bulky for a humanoid. For the pony-sized BigDog, it made sense. Early industrial robots were hydraulic, but that's now rarely seen except in very large robots. Electric motors and their controls have improved enormously in recent years.
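For readers unfamiliar with the servoloop idea: each joint runs a fast feedback loop that turns position error into a valve (or motor) command. A minimal PD-control sketch, with an invented toy plant model and made-up gains, just to show the structure:

```python
def run_servo(target, steps=2000, dt=0.001, kp=80.0, kd=12.0):
    """Drive a joint from rest at 0 toward `target` with a PD servo loop."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        error = target - pos
        command = kp * error - kd * vel           # PD law: valve/motor effort
        command = max(-10.0, min(10.0, command))  # real actuators saturate
        accel = command - 2.0 * vel               # toy plant: force minus damping
        vel += accel * dt                         # Euler integration at 1 kHz
        pos += vel * dt
    return pos

final_pos = run_servo(1.0)   # settles near the 1.0 target after ~2 s
```

The "high speed" part matters: the loop has to run fast (kilohertz-range here) relative to the joint dynamics, and with hydraulics the valve itself adds lag and wastes energy whenever full power isn't needed, which is the tradeoff the comment describes.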

I was expecting, after the Google acquisition, to see a new humanoid robot about now with Schaft's drive system, Boston Dynamics' balance system, and Google's image understanding system. That was the good outcome. Apparently Boston Dynamics does not play well with others, and that didn't happen.

Notice that Google isn't selling Schaft. I hope that they're doing OK. They're people from Tokyo University and from Honda's ASIMO project who felt things were moving too slowly. But they're in Tokyo, and Google may have problems managing remote teams.

As for "reality is hard", a good humanoid robot is mechanically at least as complicated as a car. Look how much engineering effort it took to develop good cars. Today, small teams can build a car, but that's because the problem and technology are well understood and you can buy many parts off the shelf.

Google has no track record in hardware with moving parts. Their autonomous vehicles have great software, but the hardware is purchased and bolted on. (Really bolted on; they do not bother to integrate the sensors into the vehicle shell, unlike every auto manufacturer that's done self-driving.) They're still using those rotating Velodyne scanners, which are a mechanical system that should have been replaced years ago. Flash LIDARs and MEMS LIDARs exist. Even the Google StreetView cars look clunky, and their backpack StreetView thing needs a redesign from GoPro.

I can see the cultural problems between Google, with no track record in mechanical engineering and a very young workforce, and Boston Dynamics, with good mechanical engineers and a 67 year old CEO. On the other hand, Google should not have bought all those robotics companies and expected them to make money Real Soon Now. Look at automatic driving. It's been 11 years since the DARPA Grand Challenge, when we first saw that it could really work. Nobody has a production vehicle on the road yet. It's a long haul with a big payoff. This isn't like the ad business.

If anybody from Google is reading this: you still have Schaft. Don't fuck that up. Thank you.


So it's even more far-fetched than a "Moonshot" project? Will Google even get to keep the technology it has created so far, or will it all be sold off as IP, leaving them unable to reuse any of it for other projects?


It could be that the standard for moonshots changed with the formation of Alphabet. That's the impression I got from yesterday's press coverage.


So Google quits because the problem is too hard? Clearly this is not the reason.


Google doesn't deserve this kind of deference. As a corporation, they will abandon a problem if predicted expenditures outweigh predicted returns.


> The main problem the company had solved was getting its machines to move in a realistic manner

Why? Who cares about the aesthetics? The only thing that matters is if they can get the job done while being safe.


Style of movement is functional as well as aesthetic. If the robots are to work around or with people their movements are important. People are very good at predicting and understanding each other's behaviour, which is useful for safety and coordination. This may be secondary to just getting around but it's a serious consideration and area of research in interactive robots.



I think it's a shrewd logical decision. Google wouldn't sell robots just like they won't sell cars, because licensing the software is more lucrative. Same old MS-IBM story really.


I tweeted Elon Musk asking if he wants to buy Boston Dynamics.


Watch out guys, we've got a stalking horse over here.


Weird title. It should be "Why Google decided not to sell robots" rather than "Why Google wants to sell its robots."


> "In order to make AI work in the real world and handle all the diversity and complexity of realistic environments, we will need to think about how to get robots to learn continuously and for a long time, perhaps in cooperation with other robots," said Levine.

SkyNet is coming.



