Authors: Grzegorz Jóźków, Charles Toth, Zoltan Koppanyi, Dorota Grejner-Brzezinska
Small-size, low-cost imaging sensors, which are widely used in a multitude of consumer devices, provide medium-quality, typically redundant data with the potential to support mapping and navigation. For example, the Microsoft Kinect™ contains both passive and active imaging sensors, significantly extending its range of applications. This study focuses on the indoor mobile use of the Kinect™ sensor for mapping and navigation. Mobile 3D mapping of the indoor environment with the Kinect™ can be realized by stitching the point cloud series acquired during movement and then reconstructing the navigation trajectory. In this study, a combined point cloud registration method is proposed, based on the 3D transformation of consecutive point clouds with transformation parameters estimated from matched 3D points. Since the modest geometric quality of Kinect™ point clouds makes finding corresponding 3D points difficult, the matching was primarily performed on the RGB images (2D) using the SIFT method. Knowing the relative orientation between the Kinect™ passive and active sensors, each SIFT keypoint can be associated with a point in the depth image and, consequently, with a 3D position. Results of several tests in a typical indoor corridor environment showed that the proposed approach to stitching point clouds allows for robust reconstruction of the trajectory and, in general, straightforward combination of large numbers of Kinect™ image frames. As with other navigation systems based on image sensors alone, the Kinect™ image data has some limitations, which are discussed in this paper.
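To make the registration chain described above concrete, the following Python sketch (using OpenCV and NumPy, neither of which is prescribed by the paper) matches SIFT keypoints between two RGB frames, lifts the matched pixels to 3D through the co-registered depth images, and estimates the rigid transform between consecutive point clouds with a closed-form SVD (Kabsch) solution. The intrinsics FX, FY, CX, CY, the function names, and the depth conventions are illustrative assumptions; the authors' actual calibration and estimation details are not given in the abstract.

```python
import cv2
import numpy as np

# Hypothetical Kinect colour-camera intrinsics; real values must come from
# the sensor calibration, these numbers are placeholders only.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def backproject(u, v, z):
    """Lift pixel (u, v) with depth z (metres) to a 3D camera-frame point."""
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def match_sift(rgb_a, rgb_b):
    """Detect and match SIFT keypoints between two RGB frames (2D matching)."""
    gray_a = cv2.cvtColor(rgb_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(rgb_b, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(gray_a, None)
    kp_b, des_b = sift.detectAndCompute(gray_b, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test discards ambiguous matches.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    return kp_a, kp_b, good

def rigid_transform(P, Q):
    """Closed-form least-squares rigid transform (R, t) mapping each row p
    of P onto the corresponding row q of Q, i.e. R @ p + t ~= q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def register_frames(rgb_a, depth_a, rgb_b, depth_b):
    """Estimate the 3D transform between two consecutive Kinect frames,
    assuming depth images aligned to the RGB frames and stored in mm."""
    kp_a, kp_b, matches = match_sift(rgb_a, rgb_b)
    P, Q = [], []
    for m in matches:
        ua, va = map(int, kp_a[m.queryIdx].pt)
        ub, vb = map(int, kp_b[m.trainIdx].pt)
        za, zb = depth_a[va, ua] / 1000.0, depth_b[vb, ub] / 1000.0
        if za > 0 and zb > 0:                  # keep keypoints with valid depth
            P.append(backproject(ua, va, za))
            Q.append(backproject(ub, vb, zb))
    return rigid_transform(np.asarray(P), np.asarray(Q))
```

In practice the closed-form estimate would typically be wrapped in a RANSAC loop so that SIFT mismatches and noisy depth readings do not corrupt the transformation parameters; the abstract does not specify which robust estimation scheme the authors used.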