Rights: The University of Waikato. Published 2 September 2010.

Professor Dale Carnegie, Deputy Head of the School of Engineering and Computer Science at Victoria University, leads a research project that is developing advanced robots that can learn and adapt. His team studies how the human brain learns and uses this information to design the intelligence of robots.

Point of interest

Why do we want to design robots that can ‘learn’?


We've really tried to make it operate in a similar way to how the human brain works, so we have people looking at how the human brain works, how the neurons fire, and they have found that patterns are formed inside the brain.

We are trying to do exactly the same thing with computers, so when a robot learns something new, it's making a new pattern that resides in an electronic equivalent of our brain. And the next time that it comes across a situation like that, it has the pattern that says, "I've been here before, I know what to do."
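The stored-pattern idea can be pictured as a simple situation-to-action memory. This is only an illustrative sketch, not the team's actual system; the situations and actions are invented for the example:

```python
# Minimal sketch of pattern-based recall: a learned "pattern" maps a
# situation the robot has seen before to the action that worked there.
# All situation names and actions below are invented for illustration.

patterns = {}  # the robot's electronic "brain": situation -> action

def learn(situation, action):
    """Store a new pattern once the robot works out what to do."""
    patterns[situation] = action

def act(situation):
    """If a pattern exists, the robot 'has been here before'."""
    if situation in patterns:
        return patterns[situation]  # recall the learned response
    return "explore"                # novel situation: work it out

learn("narrow doorway", "slow down and centre")
print(act("narrow doorway"))  # recalled from a stored pattern
print(act("open field"))      # no pattern yet, so the robot explores
```

In a real robot the "situation" would be a pattern of sensor readings rather than a text label, but the recall step works on the same principle.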

We are limited by how powerful the computers are. Our computers currently have about the processing speed and about the memory of a lizard. It doesn't mean our robots are stupid – our robots can do the task we set for them very, very well – but if you take them outside that task, they are lost.

The predictions are that, the way computing power is growing, around about 2060 our computers will have the same power that the human brain does. So around about then, we will be looking to see whether our robots can really function as well as humans can.

So our robots have to learn, and they have to be able to adapt by themselves without a human programming them. If the robot's in an environment where it's suffering a lot of collisions – it's really impacting a lot, and it doesn't want to do that – it will learn to avoid those areas and find another path.
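One way to picture this collision-driven adaptation is a map where every area accumulates a penalty each time the robot collides there, so the robot comes to prefer low-penalty paths. This is a toy sketch under assumed rules – the penalty counter and threshold are inventions for the example, not the project's actual method:

```python
from collections import defaultdict

# Toy sketch: each collision raises that area's penalty, and the robot
# avoids any area whose penalty has grown past a threshold.
collision_penalty = defaultdict(int)  # area -> accumulated penalty
AVOID_THRESHOLD = 3                   # assumed cutoff for "avoid this area"

def record_collision(area):
    collision_penalty[area] += 1

def choose_path(candidate_areas):
    """Prefer areas the robot has not yet learned to avoid."""
    safe = [a for a in candidate_areas
            if collision_penalty[a] < AVOID_THRESHOLD]
    # Pick the least-penalised area among the remaining options.
    return min(safe or candidate_areas, key=lambda a: collision_penalty[a])

for _ in range(4):                    # repeated impacts in the corridor
    record_collision("corridor")
print(choose_path(["corridor", "hallway"]))  # prints "hallway"
```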

Our advanced robots all learn from their mistakes. We can't programme a robot to do everything. We now try to just show it the basics, and then the robot tries to learn the rest itself. And we will tell the robot if it's doing a good job or not. So if we've given a robot a job to do, unless we say, "Yes, that's good" or "No, that's bad", the robot won't learn, so it won't know which patterns to try to improve. If the robot doesn't make mistakes, then it won't know what to avoid in the future, so the robot has to make mistakes.
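This "good job / bad job" feedback loop is the basic idea behind reinforcement learning. A minimal sketch follows; the update rule, pattern names and numbers are illustrative assumptions, not the team's actual algorithm:

```python
# Minimal reinforcement sketch: the robot only learns which pattern to
# strengthen when a human says "good" (+1) or "bad" (-1).
# Pattern names and the learning rate are invented for illustration.

scores = {"pattern_a": 0.0, "pattern_b": 0.0}
LEARNING_RATE = 0.5  # assumed value; how strongly feedback shifts a score

def feedback(pattern, good):
    """Strengthen or weaken a pattern based on human feedback."""
    reward = 1.0 if good else -1.0
    scores[pattern] += LEARNING_RATE * reward

feedback("pattern_a", good=True)   # "Yes, that's good"
feedback("pattern_b", good=False)  # "No, that's bad" - a useful mistake
best = max(scores, key=scores.get)
print(best)  # prints "pattern_a"
```

Note that the "bad" signal is as important as the "good" one: without the negative update, the robot would have no way to know which patterns to stop using.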