I saw a great talk today from Nisar Ahmed as part of the weekly MAE colloquium. He gave an overview of his research, which focuses on fusing information from humans and robots. It’s extremely cool: robust frameworks that let humans apply their insight and pattern recognition to tell the robot “hey, you should probably look over there” without directly controlling it. Ideally, this combines the best parts of both human and robotic perception.
The talk piqued my interest because it is directly related to two big topics I find fascinating at the moment:
- Figuring out the differences between what humans are good at vs. what computers are good at, and specializing to take advantage of them.
- (kind of a subset of the first) How computers are really good at their jobs for the 99.9% of the time that operating conditions are ‘normal’, and terrible in the 0.1% of the time when all the assumptions break down.
#1 is the underlying motivation for this whole branch of research: robots are great at sensing the world, but in completely different ways than humans are. By combining the two kinds of perception, you get a much more accurate picture of the world than either could manage alone.
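To make that concrete, here’s a toy illustration (my own, not from the talk, and all the numbers are made up) of why fusing two independent noisy estimates beats either one alone: combining Gaussian measurements by inverse-variance weighting always gives a lower variance than the better sensor on its own.

```python
# Toy example: fusing two independent Gaussian estimates of the same
# quantity (say, a target's position). All numbers are invented.

def fuse(mean_a, var_a, mean_b, var_b):
    """Optimal fusion of two independent Gaussian estimates."""
    var_fused = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mean_fused = var_fused * (mean_a / var_a + mean_b / var_b)
    return mean_fused, var_fused

# Robot sensor: precise; human guess: coarse, but independent information.
robot_mean, robot_var = 10.2, 0.5   # meters, variance
human_mean, human_var = 9.0, 2.0

mean, var = fuse(robot_mean, robot_var, human_mean, human_var)
print(f"fused estimate: {mean:.2f} m, variance {var:.2f}")
# Fused variance is 0.4 -- lower than either input's (0.5 and 2.0).
```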
However, I couldn’t help but focus on #2 throughout the talk. Dr. Ahmed touched on it when he told the story of how Neil Armstrong had to take manual control of the Apollo 11 lunar lander in the final moments of the descent because the guidance computer was steering them toward a boulder field. The point is that many computer/robot failures (sensory or otherwise) occur not when the system does its job poorly, but when something COMPLETELY DIFFERENT happens.
Currently, in human-robot collaboration, the robot essentially treats the human as an additional sensor, so the human’s ability to inform the robot is still limited by the foresight of the programmers, and we can’t leverage the human knack for dealing abstractly with situations from way out in left field. When I asked him about this, Dr. Ahmed did note a couple of methods that could potentially let robots benefit from human intuition in extreme circumstances, but they are all computationally expensive, which ironically makes them useless in exactly the situations where you’d want them... for now.
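For flavor, here’s a rough sketch of what “human as an additional sensor” can look like in a Bayesian framework. To be clear, this is my own toy, not Dr. Ahmed’s actual models: the human’s “look over there” becomes just one more likelihood multiplied into the robot’s belief, rather than a command.

```python
import numpy as np

# Toy 1-D grid search: the robot keeps a belief over where a target is.
# The likelihood shapes and numbers are invented for illustration.
cells = np.arange(10)
belief = np.full(10, 0.1)  # uniform prior over 10 cells

def update(belief, likelihood):
    """Standard Bayesian measurement update: multiply and renormalize."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Robot sensor reading: weakly suggests the target is near cell 3.
robot_likelihood = np.exp(-0.5 * ((cells - 3) / 2.0) ** 2)
belief = update(belief, robot_likelihood)

# Human input "you should probably look around cell 7" is treated as one
# more soft, fallible sensor: a broad likelihood bump, not an override.
human_likelihood = 0.2 + np.exp(-0.5 * ((cells - 7) / 1.5) ** 2)
belief = update(belief, human_likelihood)

print(belief.round(3))  # mass shifts toward cell 7 without ignoring the robot
```

The 0.2 floor on the human likelihood is one crude way to encode fallibility: the human’s hint pulls the belief toward cell 7 but can never zero out the cells the robot’s own sensing favors.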
I’ll do more musing while I wait for Moore’s Law to do its magic.