Handheld 3D Scanning & Reconstruction
Overview
Building off a previous project, I created a handheld 3D scanner capable of camera localization, continuous point cloud fusion, and 3D reconstruction.
This pipeline was developed using ROS on a laptop, then ported to a Jetson Nano and a Jetson Xavier NX.
Currently, handheld 3D scanners sell for $6,000 to $20,000.
This is a $400 prototype demonstrating that DIY handheld 3D scanners are feasible with commercial off-the-shelf (COTS) components.
Note: All .stl files on this page are interactive. Users can zoom and rotate the 3D models. If any .stl file fails to load, it can be downloaded here.
Reconstruction Pipeline
Figure 1. Reconstruction pipeline flowchart. Point clouds are collected to form a map, which is then reconstructed into an .stl file.
To scan entire objects, I needed to implement continuous point cloud fusion.
This requires finding the transformation between each point cloud.
Initially, I attempted this using the Iterative Closest Point (ICP) algorithm.
ICP takes two point clouds and calculates the transformation that optimally aligns them.
Using this transformation, I transformed each incoming point cloud and fused it into a point cloud map.
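A minimal sketch of that align-and-fuse step using PCL's ICP implementation (the filenames and parameter values are illustrative, not the ones from my pipeline):

```cpp
#include <iostream>

#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

int main()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr map(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr incoming(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("map.pcd", *map);           // hypothetical running map
  pcl::io::loadPCDFile("incoming.pcd", *incoming); // hypothetical new frame

  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(incoming);
  icp.setInputTarget(map);
  icp.setMaximumIterations(50);            // cap the optimization loop
  icp.setMaxCorrespondenceDistance(0.05);  // ignore pairs farther than 5 cm

  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align(aligned);  // incoming cloud expressed in the map frame

  if (icp.hasConverged())
  {
    *map += aligned;  // fuse the aligned frame into the map
    std::cout << "Fitness: " << icp.getFitnessScore() << "\n"
              << icp.getFinalTransformation() << std::endl;
    pcl::io::savePCDFileBinary("map.pcd", *map);
  }
  return 0;
}
```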
My implementation was inefficient.
As the map grew, computing the transformation and merging the point clouds became increasingly computationally intensive.
This resulted in poor transformation estimates and a useless point cloud map.
Figure 2. Point cloud map generated from ICP. Map contains my desk, monitors and cabinets.
I decided to use an open-source solution: the RTAB-Map ROS package for visual SLAM.
The SLAM algorithm allowed me to localize the camera within the point cloud map.
This proved to be an efficient solution.
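RTAB-Map publishes the assembled map as a standard point cloud topic, so downstream nodes can consume it like any other cloud. A minimal sketch of pulling that map into PCL (assuming the default /rtabmap/cloud_map topic name; remappings in a real launch file may differ):

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Save the latest assembled map published by rtabmap.
void mapCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  pcl::PointCloud<pcl::PointXYZRGB> map;
  pcl::fromROSMsg(*msg, map);
  pcl::io::savePCDFileBinary("cloud_map.pcd", map);
  ROS_INFO("Saved map with %zu points", map.size());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "map_saver");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/rtabmap/cloud_map", 1, mapCallback);
  ros::spin();
  return 0;
}
```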
Figure 3. Point cloud map generated from visual SLAM. Map contains my entire bedroom. Map has accurate room dimensions.
Once I generated a point cloud map, I figured the project was mostly complete.
Using the Point Cloud Library (PCL), I estimated the surface normals and implemented the Greedy Projection Triangulation algorithm to generate a mesh.
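That stage followed the standard PCL recipe, roughly sketched below; the parameter values are illustrative rather than my tuned ones, and the search radius in particular matters, as becomes clear shortly:

```cpp
#include <pcl/common/io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/io/pcd_io.h>
#include <pcl/io/vtk_io.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/gp3.h>

int main()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("map.pcd", *cloud);  // hypothetical map file

  // Estimate surface normals from local neighborhoods.
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setKSearch(20);
  ne.compute(*normals);

  // Combine XYZ coordinates and normals into one cloud.
  pcl::PointCloud<pcl::PointNormal>::Ptr cloud_normals(new pcl::PointCloud<pcl::PointNormal>);
  pcl::concatenateFields(*cloud, *normals, *cloud_normals);

  pcl::search::KdTree<pcl::PointNormal>::Ptr tree2(new pcl::search::KdTree<pcl::PointNormal>);
  tree2->setInputCloud(cloud_normals);

  // Greedy Projection Triangulation: connect neighboring points into triangles.
  pcl::GreedyProjectionTriangulation<pcl::PointNormal> gp3;
  gp3.setSearchRadius(0.05);        // max triangle edge length (m); a key knob
  gp3.setMu(2.5);                   // neighbor distance multiplier
  gp3.setMaximumNearestNeighbors(100);
  gp3.setInputCloud(cloud_normals);
  gp3.setSearchMethod(tree2);

  pcl::PolygonMesh mesh;
  gp3.reconstruct(mesh);
  pcl::io::saveVTKFile("mesh.vtk", mesh);  // convert to .stl with, e.g., MeshLab
  return 0;
}
```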
My results were disappointing.
Figure 4. 3D reconstruction of my work desk.
The point cloud map looked great, but I attempted countless combinations of meshing parameters with no success.
This led me to believe that the estimation of surface normals was the source of error.
I took a step back and applied PCL's Moving Least Squares (MLS) algorithm to smooth the point cloud map prior to estimating surface normals.
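A sketch of that smoothing pass (illustrative search radius; conveniently, MLS can also output normals computed from the fitted surfaces):

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/mls.h>

int main()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("map.pcd", *cloud);  // hypothetical map file

  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

  // MLS fits a local polynomial surface around each point and projects the
  // point onto it, suppressing sensor noise before normals are estimated.
  pcl::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;
  mls.setInputCloud(cloud);
  mls.setSearchMethod(tree);
  mls.setSearchRadius(0.03);    // fit neighborhood (m); tune per scan
  mls.setPolynomialOrder(2);
  mls.setComputeNormals(true);  // normals come from the fitted surfaces

  pcl::PointCloud<pcl::PointNormal> smoothed;
  mls.process(smoothed);
  pcl::io::savePCDFileBinary("map_smoothed.pcd", smoothed);
  return 0;
}
```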
The resulting meshes showed significant improvement.
Figure 5. 3D reconstruction of my work desk after smoothing the point cloud data.
Porting to Embedded Hardware
Streaming Point Clouds
Once the pipeline was functional on my computer, I began porting it to the Jetson Nano to make a handheld 3D scanner. Considering I was running multiple computationally intensive algorithms on large point clouds, I was hesitant to put the full pipeline on an embedded board. Instead, I decided to stream incoming point clouds over a LAN to my laptop, which could then perform the intense computations. I used Ethernet connections throughout the system to optimize the data transfer. With this setup, my laptop received point clouds at 0.1 Hz. SLAM had no hope of localizing the camera at this frequency.
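In hindsight, a back-of-envelope estimate makes this result unsurprising. Assuming 640 × 480 clouds at 32 bytes per point (the size of PCL's padded PointXYZRGB), each cloud is about 640 × 480 × 32 B ≈ 9.8 MB, so even an ideal gigabit link (~125 MB/s) tops out around 12 clouds per second, and ROS's serialization and copying push the practical rate far lower.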
Full Pipeline on Jetson Nano
When running everything on my laptop, point cloud messages moved through my system at 20 Hz and the camera was localized at 14 Hz. The bottleneck of my point cloud streaming setup was the ROS publisher/subscriber framework: when transferring point cloud data between machines, the large point cloud messages need to be serialized and copied. When running everything on one machine, the pipeline can instead use ROS nodelets with shared memory. I therefore decided to put the entire pipeline on the Nano. Point clouds moved through the pipeline at 16 Hz and the camera was localized at 6 Hz. This was enough for the entire pipeline to function, but there was still much room for improvement. To scan objects, I had to move the camera incredibly slowly; otherwise SLAM would lose track of the camera. Even when the algorithm regained localization, the error involved in this process would often ruin the point cloud map.
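To illustrate the zero-copy path: inside a single nodelet manager, a subscriber that takes a ConstPtr receives a shared pointer to the message rather than a serialized copy. A minimal sketch (hypothetical names; the plugin description XML and CMake packaging are omitted):

```cpp
#include <nodelet/nodelet.h>
#include <pluginlib/class_list_macros.h>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

namespace scanner
{
class PassthroughNodelet : public nodelet::Nodelet
{
  virtual void onInit()
  {
    ros::NodeHandle& nh = getNodeHandle();
    pub_ = nh.advertise<sensor_msgs::PointCloud2>("cloud_out", 1);
    sub_ = nh.subscribe("cloud_in", 1, &PassthroughNodelet::cloudCb, this);
  }

  // Inside one nodelet manager, this callback receives a shared pointer;
  // the multi-megabyte cloud is never serialized or copied.
  void cloudCb(const sensor_msgs::PointCloud2ConstPtr& msg)
  {
    pub_.publish(msg);  // zero-copy handoff to in-process subscribers
  }

  ros::Subscriber sub_;
  ros::Publisher pub_;
};
}  // namespace scanner

PLUGINLIB_EXPORT_CLASS(scanner::PassthroughNodelet, nodelet::Nodelet)
```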
Upgrading to Jetson Xavier NX
I decided to use a Jetson Xavier NX as a quick solution to the Nano's localization problem. The NX has a significantly more powerful CPU than the Nano. Point clouds moved through the pipeline at 18 Hz and the camera was localized at 10 Hz. Given these frequencies, I expected the NX to produce more accurate meshes than the Nano.
Results
Jetson Nano
Discussion
The results do not perfectly align with my expectations. Specifically, I expected the .stl files generated from the NX to be a large improvement over the Nano's. I believe I can still produce superior meshes from the NX; however, I need to tune the parameters. The most time-consuming aspect of this project was tuning all the parameters involved in the pipeline. Each new device requires new scanning parameters and new meshing parameters. When I first developed the pipeline, I tuned everything for my laptop. To optimize performance on the Nano, I clipped the depth range of the RGB-D camera and slightly increased the point cloud voxelization, as sketched below. These changes propagated through the pipeline, so new meshing parameters were necessary.
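Those two per-device knobs look roughly like this in PCL (illustrative depth limits and leaf size, not my exact values):

```cpp
#include <pcl/filters/passthrough.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

int main()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("frame.pcd", *cloud);  // hypothetical camera frame

  // Clip depth: RGB-D accuracy degrades with range, and far points add cost.
  pcl::PointCloud<pcl::PointXYZ>::Ptr clipped(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PassThrough<pcl::PointXYZ> pass;
  pass.setInputCloud(cloud);
  pass.setFilterFieldName("z");
  pass.setFilterLimits(0.3, 2.0);  // keep points 0.3 to 2.0 m from the camera
  pass.filter(*clipped);

  // Coarser voxels mean fewer points, hence faster SLAM and meshing downstream.
  pcl::PointCloud<pcl::PointXYZ>::Ptr downsampled(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(clipped);
  voxel.setLeafSize(0.01f, 0.01f, 0.01f);  // 1 cm leaves; increase on weaker hardware
  voxel.filter(*downsampled);

  pcl::io::savePCDFileBinary("frame_filtered.pcd", *downsampled);
  return 0;
}
```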
Primary System Limitations:
- Difficulty tuning parameters for various devices
- Difficulty tuning parameters for target objects of various sizes and geometries
Future Work
An interesting extension to this project would be to use machine learning to optimize the meshing parameters. In an ideal system, the user would input their hardware specifications and target object specifications. These specifications would directly correlate to scanning parameters. Given the scanning parameters and target object specifications, machine learning could be used to optimize the meshing parameters. This would address the primary limitations of my current system.