"Using the current state of the art, such as the new Kinect, you cannot capture translucent objects in 3-D," says Achuta Kadambi. "That is because the light that bounces off the transparent object and the background smears into one pixel on the camera. Using our technique you can generate 3-D models of translucent or near-transparent objects." Changing environmental conditions, semitransparent surfaces, edges, and motion all create multiple reflections that mix with the original signal and return to the camera, making it difficult to determine which reflection is the correct measurement.
The new ToF camera uses an encoding technique commonly used in the telecommunications industry to calculate the distance a signal has travelled. "We use a new method that allows us to encode information in time," Ramesh Raskar says. "So when the data comes back, we can do calculations that are very common in the telecommunications world, to estimate different distances from the single signal."
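As a rough illustration of that telecom-style idea, here is a minimal coded-ranging sketch in Python. The code length, chip period, and helper names are assumptions for the example, not the team's actual implementation: the camera emits a known pseudo-random code, and cross-correlating the return against that code turns the peak lag into a distance.

```python
import numpy as np

# Illustrative sketch only: spread-spectrum-style coded ranging, not
# the actual MIT implementation. Chip period, code length, and helper
# names are assumptions.
C = 3e8     # speed of light, m/s
DT = 1e-9   # one code "chip" per nanosecond

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=64)  # pseudo-random emitted code

def simulate_echo(code, delay_chips, n=256):
    """Received signal: the emitted code shifted by the round-trip delay."""
    rx = np.zeros(n)
    rx[delay_chips:delay_chips + len(code)] = code
    return rx

def estimate_distance(rx, code):
    """Peak of the cross-correlation gives the round-trip delay in chips."""
    corr = np.correlate(rx, code, mode="valid")
    delay = int(np.argmax(corr))
    return delay * DT * C / 2.0          # halve: round trip -> one-way

rx = simulate_echo(code, delay_chips=20)
print(estimate_distance(rx, code))       # 20 ns round trip -> ~3.0 m
```

The pseudo-random ±1 code matters: its autocorrelation has a single sharp peak, so the lag of the correlation maximum is an unambiguous delay estimate.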
"By solving the multipath problem, essentially just by changing the code, we are able to unmix the light paths and therefore visualize light moving across the scene," Kadambi says.
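A toy sketch of what unmixing the light paths buys (again with illustrative parameters; a real multipath ToF solver is far more involved): when a semi-transparent pane and the wall behind it both return light, the sensor records the sum of two delayed, attenuated copies of the code, and the correlation shows one peak per path instead of one smeared measurement.

```python
import numpy as np

# Toy multipath unmixing: the return is a sum of two delayed, scaled
# copies of the emitted code (e.g. a glass pane plus the wall behind
# it). A code with a sharp autocorrelation separates them into two
# correlation peaks. All parameters here are illustrative assumptions.
C, DT = 3e8, 1e-9

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=256)

def mixed_return(delays, amps, n=512):
    """Sum of delayed, attenuated copies of the code, one per light path."""
    rx = np.zeros(n)
    for d, a in zip(delays, amps):
        rx[d:d + len(code)] += a * code
    return rx

def unmix(rx, code, k=2):
    """Return the k strongest correlation-peak lags as one-way distances."""
    corr = np.correlate(rx, code, mode="valid")
    lags = np.argsort(corr)[-k:]
    return sorted(int(l) * DT * C / 2.0 for l in lags)

rx = mixed_return(delays=[10, 40], amps=[0.8, 1.0])
print(unmix(rx, code))   # two paths: ~1.5 m (pane) and ~6.0 m (wall)
```

A conventional ToF pixel would fold these two returns into one phase reading; the coded approach keeps them separable in the correlation domain.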
A YouTube video demo visualizes the light travel, including multipath resolution on semi-transparent objects:
Key components: PMD 19k3 sensor, FPGA dev kit, custom PCB for the light sources, and a DSLR lens from a regular Canon SLR.
MIT Presents Coded ToF Camera