Rights: University of Waikato. All Rights Reserved.
Published 14 March 2012 Referencing Hub media

Using time-of-flight 3D cameras for measurement applications has its operational problems. University of Waikato scientist Dr Adrian Dorrington explains what these are and how they can be remedied. For example, interference affecting the light sent out by the camera can be remedied by encoding that light; the reflected light received by the camera can then be identified from this encoding.

    Transcript

    DR ADRIAN DORRINGTON
    The limitations and the problems with the current cameras depend a little bit on what application you’re using them for, and some of the things that we’re working on, for example, are power-saving techniques. If you want to use a camera in a mobile application, for example, you’d like the battery to last as long as possible. Some of the other problems are accuracy – if you want to use a camera in a situation that requires very accurate or precise measurements, then we have some techniques that can try and improve the accuracy.

The accuracy is impacted in a couple of different ways. One way is that the camera doesn’t necessarily measure distance correctly. If we try to compare what an actual distance is to what distance the camera measured, we’d like to see that as a straight line, that is, if the actual distance is 3 metres then the camera measures 3 metres. But the camera doesn’t really do that. If we plot this actual versus measured distance, we actually see a wavy line indicating that the camera may overestimate or underestimate the distance that we’re measuring, so some of the techniques that we have are to try to reduce that over- or underestimation, to try to bring the measurement closer to the actual distance.
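One common way to reduce this kind of systematic over- or underestimation is a calibration lookup table: measure known distances once, then map each raw reading back to the true value by interpolation. The sketch below is illustrative only — the calibration numbers are hypothetical, not from the Waikato cameras:

```python
import numpy as np

# Hypothetical calibration data: true distances vs. what the camera reported.
# In practice these would come from measuring known targets on a test rig.
true_dist = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])       # metres
measured  = np.array([0.52, 0.97, 1.54, 1.98, 2.53, 2.96, 3.55, 3.99])

def correct(raw_reading):
    """Map a raw camera reading to an estimated true distance
    by interpolating the calibration table."""
    return np.interp(raw_reading, measured, true_dist)

print(correct(2.96))  # a raw reading of 2.96 m maps back to 3.0 m
```

A denser calibration table (or a fitted polynomial) would capture the "wavy line" error curve more closely; the idea is the same either way.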

    Another thing that can impact the accuracy is multi-path interference. When we send out light, we want it to go directly to the target that we’re measuring and come directly back to the camera, but because we’re sending light out to an entire scene at the same time, if we have a bright object somewhere, we might get some light that’s bouncing off that object onto the object we’re trying to measure and then back to the camera. So now we have two competing paths – a direct path and this interfering multi-path. So we have some techniques that can try and separate out those two paths to give us that true path and to try and discard that interference.
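The effect of multi-path interference can be illustrated by treating each light path as a phasor (a complex number whose angle encodes the round-trip phase shift). Adding a weaker bounced path to the direct path skews the combined phase, so the recovered distance comes out wrong. All numbers below (modulation frequency, path lengths, amplitudes) are hypothetical, chosen only to show the effect:

```python
import cmath, math

c = 3.0e8      # speed of light, m/s
f_mod = 30e6   # assumed modulation frequency, Hz

def phase_for(distance_m):
    # Phase shift accumulated over the round trip at the modulation frequency.
    return 4 * math.pi * f_mod * distance_m / c

# Direct path to a target 2 m away, plus a weaker interfering bounce
# whose total path length is 3 m (hypothetical numbers).
direct = 1.0 * cmath.exp(1j * phase_for(2.0))
bounce = 0.3 * cmath.exp(1j * phase_for(3.0))

combined = direct + bounce  # what the pixel actually sees
measured_d = cmath.phase(combined) * c / (4 * math.pi * f_mod)

print(measured_d)  # noticeably more than the true 2.0 m
```

Separating the two paths, as described above, amounts to recovering the two individual phasors from measurements like this (for example, by measuring at more than one modulation frequency) and keeping only the direct one.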

With time-of-flight technology, what we do is we send out some light from the camera, and that light illuminates the entire scene and is scattered off objects, and some of that light returns to the camera. We encode the light that we’re sending out, so when it returns to the camera, we can look for that encoding pattern, and we can measure how long it has taken for the light to get from the camera out to the object and back to the camera again. And once we’ve measured that time, we know the speed of light, so then we can calculate that round trip distance.
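The arithmetic behind this is simply distance = speed × time, halved because the measured time covers the trip out and back. A minimal sketch:

```python
C = 299_792_458  # speed of light in m/s

def distance_from_round_trip(t_seconds):
    """One-way distance to the object, given the measured round-trip time."""
    return C * t_seconds / 2

# A round trip of 20 nanoseconds corresponds to a target about 3 m away.
print(distance_from_round_trip(20e-9))  # ≈ 2.998 m
```

The times involved are tiny — a few nanoseconds per metre — which is why the light is encoded: the camera recovers the travel time from the shift in the encoding pattern rather than timing each photon directly.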
