Dr. Rodney Brooks, of MIT robotics fame, came up with a really elegant solution for the complexity of robots, which he called subsumption. As an architecture it worked by allowing all of the small parts of a robot to be independently functional but merely "influenced" by a higher authority. It worked through a series of built-in behaviors that could be overridden by external input.
Consider the case of a 'leg'. A robot needs to lift its legs and move them forward in order to achieve forward locomotion. This means it needs a gait where it steps while keeping itself balanced. Brooks created a system where the highest ranked function that could run, would. So if the robot was stable, it unlocked the 'lift function' and if the leg was lifted it unlocked the higher priority forward function, and if the leg was forward it unlocked the higher priority balance function. I really liked it a lot and have used it in other embedded systems since then.
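To make the leg example concrete, here's a minimal sketch of that kind of priority arbitration (the names and state flags are my own, not Brooks' actual design): each behavior has a guard ("can it run?") and an action, and the highest-priority behavior whose guard passes is the one that gets to run.

```python
# Toy subsumption-style arbitration for a single leg. Each tick, the
# highest-ranked behavior whose precondition holds is the one that runs.

class Leg:
    def __init__(self):
        self.stable = True    # robot is balanced
        self.lifted = False   # leg is off the ground
        self.forward = False  # leg has swung forward

    def balance(self):        # highest priority: runs once the leg is forward
        self.forward = False
        self.stable = True
        return "balance"

    def swing_forward(self):  # runs once the leg is lifted
        self.lifted = False
        self.forward = True
        return "forward"

    def lift(self):           # lowest priority: runs while the robot is stable
        self.stable = False
        self.lifted = True
        return "lift"

def step(leg):
    # Behaviors listed highest priority first; the first one whose
    # guard passes subsumes everything below it.
    behaviors = [
        (lambda: leg.forward, leg.balance),
        (lambda: leg.lifted, leg.swing_forward),
        (lambda: leg.stable, leg.lift),
    ]
    for guard, action in behaviors:
        if guard():
            return action()
    return "idle"
```

Run `step` repeatedly and the leg cycles lift → forward → balance on its own, which is the "default stable" quality described below: with no external input, the loop just keeps walking.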
As a result I see the "smart" house system similarly: I would build systems that are self-contained but can accept input from outside. Take a light, for example: it would have 'on', 'off', 'dim', and 'i don't care'. If you command it to any point, it goes there. Period. If it is in 'i don't care', the house controller is free to tell it to do something different.
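A sketch of that light, with made-up method names (this is the idea, not any real smart-home API): a direct command always wins, and the house controller's suggestions are honored only while the light is in the 'i don't care' state.

```python
ON, OFF, DIM, DONT_CARE = "on", "off", "dim", "i don't care"

class Light:
    def __init__(self):
        self.mode = DONT_CARE  # default: defer to the house controller
        self.level = 0

    def command(self, mode):
        """A direct local command. Always obeyed. Period."""
        self.mode = mode
        if mode == ON:
            self.level = 100
        elif mode == OFF:
            self.level = 0
        elif mode == DIM:
            self.level = 30

    def suggest(self, level):
        """House-controller input: honored only in 'i don't care' mode."""
        if self.mode == DONT_CARE:
            self.level = level
```

Once you `command(ON)`, later `suggest()` calls are ignored until the light is put back into 'i don't care'.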
Back then, I said that we needed more structure than Brooks, but less structure than Latombe (Jean-Claude Latombe, a Stanford professor known for efficient exhaustive-search planning). Before Brooks, there was a long tradition of detailed planning in AI. Brooks totally rejected planning. This is an extreme position, but the planning and re-planning people were having trouble coping with the real world.
Brooks' Roomba is an expression of that model. The Roomba has a few simple behaviors - go in a straight line, spiral, follow wall - and a state machine to switch between them. It has no idea where it is. Most later robot vacuums have actual navigation, because Roombas are notorious for getting themselves into the same traps over and over. Both overplanned and purely reactive systems have trouble coping with the real world, but they fail in different ways.
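In rough sketch form, that behavior switching is tiny (the behavior names are from the comment above; the bump-triggered transitions are my guess at the logic, not the actual Roomba firmware):

```python
# Toy Roomba-style coverage: a handful of behaviors and a memoryless
# state machine that rotates between them on bump events.

class RandomCoverage:
    def __init__(self):
        self.behavior = "spiral"

    def on_bump(self):
        # Hitting an obstacle ends the current behavior; pick the next
        # one with no memory of where the robot has been -- which is
        # exactly why it falls into the same traps over and over.
        self.behavior = {"spiral": "follow_wall",
                         "follow_wall": "straight",
                         "straight": "spiral"}[self.behavior]
```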
> This is an extreme position, but the planning and re-planning people were having trouble coping with the real world.
Yes. The whole field wasted decades running afoul of Ron's First Law: all extreme positions are wrong. This is why the three-layer architecture described in the link above (reactive behaviors sequenced by a state machine informed by a deliberative planner) eventually became the de facto standard, because it combines both approaches in a way that actually fits the structure of the problem.
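As a rough sketch of that three-layer split (the toy names here are my own, not the paper's API): the deliberative planner runs rarely and produces a plan, the sequencer is a state machine that advances through the plan, and the reactive behaviors run every tick.

```python
def planner(world):
    # Deliberative layer: slow, runs rarely, works from a (possibly
    # stale) model of the world.
    return ["go_to_door", "go_to_kitchen"]

def sequencer(plan, world):
    # Sequencing layer: advance through the plan as waypoints are
    # reached, and pick the currently active behavior.
    while plan and world["at"] == plan[0]:
        plan.pop(0)
    return plan[0] if plan else "idle"

behaviors = {
    # Reactive layer: tight sense-act loops, one per behavior.
    # (Stubbed here: each just "arrives" at its waypoint.)
    "go_to_door":    lambda w: w.update(at="go_to_door"),
    "go_to_kitchen": lambda w: w.update(at="go_to_kitchen"),
    "idle":          lambda w: None,
}

def tick(world, plan):
    active = sequencer(plan, world)
    behaviors[active](world)
    return active
```

The point of the split is that each layer runs at its own rate: the reactive layer keeps the robot safe even while the planner is busy re-planning.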
I think the issue with subsumption is that 3layers aren't enough. It would be interesting to combine recent deep neuralnet architectures with the Dr Brooks overall subsumption design (so multiple layers of individual deepnets layered hierarchically)
No, the problem with subsumption is that it is a poor composition mechanism. Even the name reveals this to be true: a higher layer has to subsume a lower layer, because (according to Brooks) when a higher layer is active the lower layer's outputs are suppressed. As long as the higher layer is active, the lower layer contributes nothing to the final result and hence may as well not exist. In fact, it's really not a "layered" architecture at all, it's just a big cascading series of if-then-else clauses. Subsumption doesn't really do anything other than obscure this fact.
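To make that point concrete, here is roughly what a subsumption stack reduces to when a higher layer fully suppresses the layers below it (hypothetical behaviors, obviously not Brooks' code):

```python
# With full suppression, the "layers" collapse into a single
# priority-ordered if/elif chain: whichever clause fires, every
# lower layer contributes nothing to the output.

def subsumption(sensors):
    if sensors["bumped"]:        # highest layer active; all lower
        return "back_up"         # layers' outputs are suppressed
    elif sensors["wall_near"]:
        return "follow_wall"
    else:
        return "wander"          # default lowest layer
```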
I think that is overly harsh. The beauty of it for me continues to be that it is default stable: no input, and things are stable. That makes it very resilient in low-compute environments. That is the same reason I would recommend it for a smart-house architecture: when the upper (planning) layer fails (or is broken) you can always get the lower layers to do what you want (lights turn on, or the thermostat adjusts), because the local command sits at the top of the 'if-then' tree, in your lexicon.
But it isn't. In order for two layers to interact properly you have to specifically engineer them that way. If a higher layer starts or stops subsuming at the wrong time you can get catastrophic failures.
Also, smart houses are a very different problem than autonomous mobile robots.
That was the rhetoric. But the reality was that it was just a big kludge. That is why subsumption-based robots can't do anything more today than they could 30 years ago. Subsumption pretty much tops out at wandering randomly without bumping into things. But that turns out to just not be a particularly hard problem.
1998 (though it is remarkable how difficult it is to find that date), so very nearly 20 years. But it's a survey; the work described in that paper goes back as far as 1990 or so. (I'm the author, in case you didn't know.)
I once attended a talk by a man who developed software for autonomous submarines that balanced immediate short-term needs (analogous to breathing) with higher-order missions like searching for mines. It was called hafscript (hierarchical autonomous framework). The author still seems to be active online.
This is also how biology works. The neural circuitry for walking is a fairly short loop between your leg muscles, spinal cord, and the nerves handling proprioception, with some input from the brain for balance control, and the more intentional behaviors (start, turn, stop) are layered on top by the conscious systems.