University of Waikato scientist Dr Adrian Dorrington is investigating different applications of 3D cameras. The camera emits light, and the time of flight from camera to object and back again is measured. One application of this type of camera is in assessing the height of customers entering and leaving premises for security reasons.
DR ADRIAN DORRINGTON
My research into 3D cameras is focused more on measurement applications than on viewing applications like 3D TV. The way our cameras work is that they measure distance, and they do that for every pixel in the image simultaneously. So we get something like a normal digital photograph or digital video out, but in each pixel, alongside the brightness you would normally get, we also get a distance value, so we can measure the shape, the size and the location of things in the scene.
The technology behind the cameras we’re working on is called time of flight. We send out some light, and we wait for that light to come back to the camera, measure how long it’s taken and then we can work out the distance.
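The time-of-flight principle described here can be sketched in a few lines. This is a minimal illustration, not the camera's actual firmware: it assumes the round-trip travel time of the light has already been measured, and simply converts it to distance by halving the total path length.

```python
# Speed of light in metres per second.
SPEED_OF_LIGHT = 299_792_458.0

def distance_from_flight_time(round_trip_seconds: float) -> float:
    """Distance to an object from the round-trip time of the emitted light.

    The light travels from the camera to the object and back again,
    so the one-way distance is half the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A round trip of 20 nanoseconds corresponds to an object about 3 m away.
print(distance_from_flight_time(20e-9))  # ~2.998 m
```

The nanosecond scale of the example shows why these measurements are technically demanding: resolving centimetres requires timing light to within tens of picoseconds.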
Because the technology is fairly new, it has a number of limitations, and the quality of the images we get back is not perhaps as good as we’d like it to be. So what we do in our research group is develop ways to improve the quality of the data we get back from these cameras. We’re working on techniques and algorithms that are applied inside the camera, and we work with international manufacturers to do that.
Some of the things we do provide additional image processing techniques that can be applied after the images are taken to clean up the image or improve its quality, but others modify the way in which the image is captured, capturing it in a way that tries to remove some of the problems present in current cameras.
So the cameras we work on are usually used for machine vision applications; that is, we want to get a computer to perceive the world the way we do. A normal video camera or still camera doesn’t see depth, so if you imagine two people standing, an adult further away and a child closer, they may appear to be the same height in a photograph, and a computer couldn’t tell whether one person was taller than the other. But if we add the depth information we get with our time-of-flight cameras, now the computer can say, well, this person is a lot further away, so they must be a lot taller.
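The adult-and-child example can be made concrete with a simple pinhole-camera model. This is an illustrative sketch, not the research group's method: the focal length value is a made-up assumption, and a person's pixel height would in practice come from an image segmentation step.

```python
def estimate_height_m(pixel_height: float, depth_m: float,
                      focal_length_px: float) -> float:
    """Estimate a person's real height from their height in the image.

    Under a pinhole-camera model, an object of real height H at
    distance d appears with pixel height h = f * H / d, so
    H = h * d / f. The per-pixel depth d is exactly what a
    time-of-flight camera provides.
    """
    return pixel_height * depth_m / focal_length_px

# Two people with the SAME pixel height but different measured depths
# (assumed focal length of 1000 pixels):
adult = estimate_height_m(pixel_height=300, depth_m=6.0, focal_length_px=1000)
child = estimate_height_m(pixel_height=300, depth_m=4.0, focal_length_px=1000)
print(adult, child)  # 1.8 1.2
```

Without the depth value, the two people are indistinguishable in the image; with it, the computer can tell the farther person must be taller, which is the point made above.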
And one application for that is perhaps a security application. If you walk into a service station, you might’ve noticed that there are coloured bars on the glass door. Those coloured bars are there to try and measure the height of people walking into the service station on the security cameras, but they only do that as the person passes the door. If the person is in a different place, you can’t tell how tall they are, but if we were to use a time-of-flight camera for this application, we could see how far the person is from the camera and measure their height anywhere within the scene.
The 3D cameras we work with are, in principle, very similar to any other camera. We have a sensor, and we have a lens that we need to collect the light and create an image on that sensor, and we need to read data out of that sensor into some sort of processing algorithm. One of the main differences is that our cameras are active; that is, we are always illuminating the scene, whereas with a traditional camera, you’re often using natural light and just detecting what that light is. If you’re using a flash, then you’ve got an active system, but we’re always sending out light to make that timing measurement.
Shell Service Station – Andersons Bay Road, Dunedin