I was put on to the idea of using the Kinect sensor for Microsoft's Xbox for small-scale geomorphology through the work of Ken Mankoff (http://kenmankoff.com/). The Kinect works by projecting a structured pattern of infrared (IR) beams, a bit like a fingerprint pattern. When it is fixed in your living room it can 'see' you and tell how far away you are and how you are moving, because your body distorts the structure of the IR pattern. A number of open and closed source software projects have effectively hacked the sensor so that it can do this processing the other way round: instead of the Kinect sitting still while you dance around in front of it, you move the Kinect around a fixed object. The software picks out easily identifiable points on the object and, because it knows how far the sensor is from them, it can generate a 3D model of the object. In this way the Kinect can produce data similar in nature to that from a terrestrial laser scanner, although over a much smaller range.
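The core of that 'other way round' processing is converting each depth image into 3D points using the camera geometry. Here is a minimal sketch of that step in Python; the focal length and principal point below are approximate, assumed values for a Kinect v1 depth camera (you would calibrate your own sensor for survey work):

```python
import numpy as np

# Approximate Kinect v1 depth-camera intrinsics -- assumed values,
# not calibrated figures; calibrate your own sensor for real surveys.
FX = FY = 580.0          # focal length in pixels
CX, CY = 319.5, 239.5    # principal point for a 640x480 depth image

def depth_to_points(depth_m):
    """Convert a 480x640 depth image (metres) to an Nx3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    # Pinhole back-projection: pixel (u, v) at depth z maps to (x, y, z)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]        # drop pixels with no depth return

# A flat wall 2 m from the sensor: every point should have z == 2
demo = np.full((480, 640), 2.0)
print(depth_to_points(demo).shape)   # (307200, 3)
```

Software like ReconstructMe then stitches many such single-frame clouds together as you move the sensor, using the overlap between frames to track the camera.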
I have been using the ReconstructMe software to capture the Kinect data, which is paid-for, but there are also Skanect, Scenect, the open source PCL libraries and many others. It's a rapidly growing field, with new tools becoming available all the time, such as the Kickstarter-funded sensor being developed for the iPad (www.structure.io), so I think the possibilities for using this type of technology will look very different in as little as six months.
The Kinect is well suited to capturing the complex topography of a penitente field, where you really need to be able to move the sensor all around the features and into the troughs between them. Here is a snippet of Ben Partan testing our Kinect on a snow patch while I monitor the mesh being captured by ReconstructMe:
There are a few challenges we have encountered in our work:
1) Because the Kinect is designed to work in your living room it has only a short range – up to the size of your living room, so only a few metres. This is made worse when working over snow and ice, which absorb almost all of the infrared the Kinect emits, while their high reflectance to solar radiation swamps the signal. On our first trial we found that the sensor works on snow and ice as long as the sun is below the horizon and there are no high clouds diffusing the light, so over the glacier we ended up working after dark.
2) We also found that the software we use to collect data from the Kinect needs a powerful laptop with a fast GPU, so that we can watch the mesh build in real time and spot any gaps left in the surface. The processor on a typical desktop is good enough to do this processing in real time, but if you need a laptop, as we do on the glacier, you need a gaming laptop. The catch is that the GPU of a gaming laptop draws a lot of power, so we couple it to two external 12 V batteries wired in parallel, giving 20 Ah to feed the transformer that powers the laptop – and these batteries are quite heavy! The Kinect itself uses very little power, so this may be a problem restricted to our power-hungry MSI GE60 laptop.
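For anyone planning a similar setup, a back-of-the-envelope runtime estimate is worth doing before hauling batteries up a glacier. The laptop draw and conversion efficiency below are assumed round numbers, not measured values for our rig:

```python
# Rough field-runtime estimate for a laptop battery bank.
battery_voltage_v = 12.0      # two 12 V batteries wired in parallel
capacity_ah = 20.0            # combined capacity (2 x 10 Ah)
laptop_draw_w = 90.0          # ASSUMED average draw for a gaming laptop
conversion_eff = 0.85         # ASSUMED transformer/inverter efficiency

energy_wh = battery_voltage_v * capacity_ah * conversion_eff
runtime_h = energy_wh / laptop_draw_w
print(f"~{runtime_h:.1f} h of scanning per charge")   # ~2.3 h
```

Parallel wiring adds capacity (Ah) at the same voltage; wiring in series would instead double the voltage, which the transformer would not expect.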
The Kinect can be used to make small-scale surface scans, so anything you want a centimetre-scale surface model of can be scanned. In our case we capture the shape of a small 2 m x 5 m area of the surface of Tapado glacier, and then rescan the same area every two weeks to see how and where it has changed. By subtracting the two surfaces we can compute how much ice volume has disappeared in two weeks and compare that with the amount predicted by models of glacier melting. This is important for knowing how accurate the models we use to predict glacier runoff and retreat are in this region.
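The subtraction step is simple once both scans are gridded onto the same regular elevation grid (a DEM). Here is a minimal sketch, assuming the two scans have already been co-registered and gridded; the 1 cm cell size is illustrative:

```python
import numpy as np

CELL_SIZE_M = 0.01   # assumed 1 cm grid resolution

def melt_volume(dem_week0, dem_week2, cell_size_m=CELL_SIZE_M):
    """Ice volume lost (m^3) between two co-registered elevation grids."""
    dz = dem_week0 - dem_week2          # positive where the surface lowered
    # Sum the elevation change and multiply by the area of one grid cell
    return np.nansum(dz) * cell_size_m ** 2

# Toy example: a 2 m x 5 m patch that lowers uniformly by 10 cm
dem0 = np.zeros((200, 500))             # 200 x 500 cells at 1 cm spacing
dem2 = dem0 - 0.10
print(melt_volume(dem0, dem2))          # ~1.0 m^3
```

In practice the hard part is the co-registration: the two scans must share a coordinate system (e.g. via fixed markers around the patch) before the per-cell subtraction means anything.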
There are also a number of open source tools for processing point cloud or mesh data, for example PCL, Meshlab, CloudCompare, Blender. Here is a first example of the final product of the scanned snow penitentes produced in Meshlab:
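When the repeat scans are left as raw point clouds rather than gridded, tools like CloudCompare compare them with a cloud-to-cloud nearest-neighbour distance. A minimal version of that idea, sketched here with scipy (the grid of test points is purely illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c_distances(reference_pts, compared_pts):
    """For each point in compared_pts, distance to its nearest
    neighbour in reference_pts (both Nx3 arrays)."""
    tree = cKDTree(reference_pts)
    dists, _ = tree.query(compared_pts)
    return dists

# Toy example: a 2 m x 5 m planar 'surface' sampled every 10 cm,
# rescanned after it has lowered uniformly by 5 cm
x, y = np.meshgrid(np.linspace(0, 2, 21), np.linspace(0, 5, 51))
ref = np.column_stack([x.ravel(), y.ravel(), np.zeros(x.size)])
lowered = ref.copy()
lowered[:, 2] -= 0.05
print(c2c_distances(ref, lowered).mean())   # ~0.05 m
```

Real melt surfaces change unevenly, so in practice you would look at the full distance field (coloured per point in CloudCompare or Meshlab) rather than a single mean.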
A second-hand Kinect can be as cheap as USD 50, and with the open source software available there is plenty of scope to go and make similar science measurements yourself. An excellent starting point is Ken Mankoff's recent publication:
K. D. Mankoff and T. A. Russo. “The Kinect: A low-cost, high-resolution, short-range, 3D camera”. Earth Surface Processes and Landforms 38.9 (2013), pp. 926–936. doi: 10.1002/esp.3332