Electronic – Trouble deciding on how to build a LiDAR system for 2D and/or 3D mapping



Three of us are attempting to build a 2D LiDAR (Light Detection and Ranging) system to scan an indoor area or tunnel system. The data will be used to reconstruct a 2D image. If 2D is accomplished within our timeline, we will proceed to designing a 3D system. Our end goal is to integrate it into an obstacle avoidance system, but our timeline is too short to accomplish this, so our focus will be on developing a 2D/3D mapping system.



Which strategy for using the laser will be better at generating the data needed to produce a reconstructed image "map"? For example:

Do I fan the laser beam out into a line that reflects off objects/corners, then process the data through an edge-detection algorithm to reconstruct a map? Example.

Do I just use a single beam to scan multiple points, using a time-of-flight strategy to calculate distances and build a point-cloud map? Example.
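For the second option, the underlying arithmetic is simple even though the timing electronics are not. A minimal sketch of the time-of-flight distance calculation (the timing value below is illustrative, not from the project):

```python
# Sketch: pulsed time-of-flight ranging. Distance is half the round trip
# travelled at the speed of light.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to target given the measured round-trip time of a laser pulse."""
    return C * round_trip_seconds / 2.0

# A target 10 m away returns the pulse in only ~66.7 ns, which is why
# pulsed ToF demands very fast timing hardware.
print(tof_distance(66.7e-9))  # ~10 m
```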


3D LiDAR scanners are extremely expensive, and the project requirements state that 70% of the project must be built by us. In other words, I cannot just go out and buy a scanning device.

So whatever strategy I use, I have to be able to make multiple scans, taking in an extremely large number of data points. This means I would need an MCU that could extract and process all those individual scans, and I just don't see that happening with an Arduino, or even an FPGA. In previous sensor projects I've always experienced delay, even using XBees to transmit the data to my computer for processing.


What MCU will be powerful enough to capture this data (Raspberry Pi, BeagleBoard, Arduino, FPGA)? Will I need to share the load and transmit the data to my computer for image reconstruction? If so, what type of transmitter/receiver will do this without a huge delay?
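A quick data-rate estimate helps answer the transmitter question before picking hardware. The figures below (points per scan, bytes per sample) are illustrative assumptions, not measured values:

```python
# Sketch: rough link-budget check for streaming scan data to a PC.
def required_bits_per_second(points_per_scan: int, scans_per_second: float,
                             bytes_per_point: int) -> float:
    """Raw payload rate needed to stream every scanned point in real time."""
    return points_per_scan * scans_per_second * bytes_per_point * 8

# e.g. 1000 points/scan at 10 scans/s, 4 bytes per range sample
rate = required_bits_per_second(1000, 10, 4)
print(rate)  # 320000 bits/s -- already above an XBee's 250 kbit/s RF rate,
             # but trivial for USB full speed (12 Mbit/s)
```

This kind of estimate makes the trade-off concrete: a wireless serial link saturates quickly, while any board with native USB (Raspberry Pi, BeagleBoard) has bandwidth to spare.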


2D and 3D reconstruction require some heavy-duty algorithms, so having a prebuilt library or software with this built in would save significant time.


Matlab seems to be widely used for reconstructing and mapping images via edge-detection algorithms, and there are several toolboxes Matlab sells that deal with image processing. However, I don't believe I could use an Arduino, Raspberry Pi, BeagleBoard, etc. with Matlab unless I transmitted the data to my computer for processing inside Matlab.

OpenCV, located HERE, is an open-source C++ library for computer vision, with a large set of image-processing functions. It is compatible with Windows, Linux, and Unix systems. However, I don't think I could use it with a LiDAR system; OpenCV is used for stereo vision. For more information take a look HERE. At the bottom of the link there are some great tutorials explaining how to use two webcams to acquire 2D and even 3D reconstructed images, like HERE for example.


The overall scope of this project is to reconstruct a 2D (3D if time allows) image "map" of the entire scan from start to finish. Even though the end goal would be to incorporate this into an obstacle avoidance system, I have to consider a method that meets the requirements I mentioned earlier. Other methods I've considered are as follows:

Sonar (Example)

Stereo Vision (Example)

3D Laser Mapping processing data into a "Point Cloud" (Example)

I'm not sure whether this video is demonstrating LiDAR or sonar, but it's using SLAM (Simultaneous Localization and Mapping) algorithms to generate a 3D map. This seems to be the closest to what we are trying to achieve. (demo)


As you can see, the sonar method looks very nice and would match the end goal of our project. Stereo vision, however, would allow me to use OpenCV, which is free and open source, on a platform like the Raspberry Pi or BeagleBoard; I could technically use Matlab with stereo vision as well. Still, LiDAR seems very promising, and it would require more hardware design, which better satisfies the project requirements.

Any guidance or information on where I could find more, or simply advice on which direction to take, would be extremely helpful and appreciated. Our current project funding is $3000. We go before the approval board in 2 weeks, and everyone in the group is having a hard time deciding which direction is most approachable, so some outside input would be very useful at this point.

Best Answer

Start with the core feature, which is the laser: getting a useful result from its output, with some form of feedback, usually via an avalanche photodiode (APD) or PIN diode. Settle on some design parameters, such as required range (1-10 metres, 3-30 metres, etc.); this will fundamentally affect the circuit design. The closer the range and the better the result you need, the more quickly your circuits reach RF signal frequencies for operating the laser. I have personally looked into this for mobile robotics projects of my own, and I will be going down the path of continuous-wave phase-shift (or frequency-modulated) ranging.

There are pros and cons to each method of laser operation: pulsed time-of-flight needs very fast clock speeds and has an annoying minimum-range issue, while the continuous-wave method is far easier and, rather than minimum-range issues, has long-range overlap (phase-wrap) issues, which could be dealt with by building a layered laser system for different range resolutions.
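The range-overlap trade-off above can be sketched numerically. The standard continuous-wave relations are below; the 10 MHz modulation frequency is an illustrative assumption, not a recommended design value:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def unambiguous_range(mod_freq_hz: float) -> float:
    """Maximum range before the measured phase wraps (the 'overlap' issue)."""
    return C / (2.0 * mod_freq_hz)

def distance_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Target distance for a measured TX/RX phase shift of the modulated beam."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

# 10 MHz modulation gives ~15 m of unambiguous range;
# a pi/2 phase shift then corresponds to a quarter of that, ~3.75 m.
print(unambiguous_range(10e6))                # ~14.99 m
print(distance_from_phase(math.pi / 2, 10e6)) # ~3.75 m
```

A "layered" system in this scheme simply runs two or more modulation frequencies: a low one to resolve the coarse range bracket, and a high one for fine resolution within it.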

There are a few research papers and other literature I found during my research; a good starting point comparing the techniques is Nejad & Olyaee, 2006, "Comparison of TOF, FMCW and Phase-Shift Laser Range-Finding Methods by Simulation and Measurement".

The reason I suggest phase-shift modulation, fused with some other logic/error-checking mechanism, is that the circuitry behind it is the simplest: a single clock generator and a PLL (or even an easily available RF modulator) to drive the laser, with the return received through the PIN diode. The received signal is then passed through an RF amplifier, and you can immediately use a clock divider to scale it down to a frequency that any cheap frequency-to-wavelength conversion IC/op-amp can deal with. The modulation wavelength of the transmitted signal sets your unambiguous range, and the measured phase difference between transmitted and received signals gives you the distance to the target point.

The circuitry required is not individually complex, but there IS a lot of it. The microcontroller you use should be a powerful [ARM] processor, with full-speed USB to dump the data to a computer for processing.

The last stage of your project, after testing the single-point scanner, is to make it sweep: either with a servo, slip ring, and rotating mount, or (more commonly) a spinning mirror with very accurate encoders, like the rotating metal hexagon you see in a laser printer. Rip an old laser printer apart; it's fun!

Once you have a spinning-mirror single-point scanner, compare it to an existing LIDAR solution such as the Hokuyo or SICK units, which sweep at around 40 Hz with roughly 1050 points of data per cycle. For your own system to be comparable, it would need to sample and buffer data at a whopping 42 kHz.
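That target rate follows directly from the sweep figures quoted above (which are approximate commercial specs, not guaranteed datasheet values):

```python
# Sketch: point-acquisition rate needed to match a commercial scanning LIDAR.
sweeps_per_second = 40    # approximate sweep rate of a Hokuyo/SICK-class unit
points_per_sweep = 1050   # approximate points per full sweep

sample_rate_hz = sweeps_per_second * points_per_sweep
print(sample_rate_hz)  # 42000 -- i.e. a ~42 kHz point rate to sample and buffer
```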
