There are two ways to determine the distance of an object:
TRIANGULATION: You measure the angle to the point from two or more sensors; this is what Russell told you, and it is used with a compass, for example;
TRILATERATION: You measure the distance from the point to two or more sensors, without knowing the direction; this is how GPS positioning and ultrasonic sensors work.
So basically, you can use lasers (or light in general, which also involves image detection) with cameras to do triangulation, or use range sensors such as ultrasonic sensors (common in robotics) and do trilateration. Which one is better depends on the properties of the objects you're measuring and on other factors such as precision, size, and so on; a small worked sketch of both approaches follows below.
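For illustration, here is a minimal 2D sketch of both ideas in Python (the sensor positions, bearings, and function names are invented for the example, not taken from any particular system):

```python
import math

def triangulate(p1, p2, bearing1, bearing2):
    """Intersect the two bearing rays (angles in radians from the +x axis)
    seen from two known sensor positions; assumes the rays are not parallel
    and neither bearing is exactly +-90 degrees."""
    (x1, y1), (x2, y2) = p1, p2
    t1, t2 = math.tan(bearing1), math.tan(bearing2)
    x = (y2 - y1 + t1 * x1 - t2 * x2) / (t1 - t2)
    return x, y1 + t1 * (x - x1)

def trilaterate(p1, p2, r1, r2):
    """Intersect two range circles around known sensor positions; two
    mirror-image solutions exist, so both are returned."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)         # distance between the sensors
    a = (r1**2 - r2**2 + d**2) / (2 * d)     # distance from p1 along the baseline
    h = math.sqrt(max(r1**2 - a**2, 0.0))    # offset perpendicular to the baseline
    ex, ey = (x2 - x1) / d, (y2 - y1) / d    # unit vector along the baseline
    xm, ym = x1 + a * ex, y1 + a * ey
    return (xm - h * ey, ym + h * ex), (xm + h * ey, ym - h * ex)

# Example: target at (3, 4), sensors at (0, 0) and (10, 0).
target, s1, s2 = (3.0, 4.0), (0.0, 0.0), (10.0, 0.0)
b1 = math.atan2(target[1] - s1[1], target[0] - s1[0])
b2 = math.atan2(target[1] - s2[1], target[0] - s2[0])
print(triangulate(s1, s2, b1, b2))                    # ~(3.0, 4.0)
r1 = math.hypot(target[0] - s1[0], target[1] - s1[1])
r2 = math.hypot(target[0] - s2[0], target[1] - s2[1])
print(trilaterate(s1, s2, r1, r2))                    # one of the two is ~(3.0, 4.0)
```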
If it helps, I've seen that self-driving vehicles like the ones that competed in the DARPA Grand Challenge often use cameras, and since the distances involved are similar, that's probably the best choice.
Using computer vision, a common approach is to project a pattern onto the objects (there are studies on which pattern is best for a given task) and then use disparity maps to find the differences between the images (obviously you need stereo vision).
This last method is really powerful, and the image you posted probably comes from it (even though I cannot understand why the can looks flat; it was probably flattened later). There is a Matlab toolbox, and there are certainly functions in the OpenCV library for C, C++, Python and Java. Probably the first is the best for an embedded implementation.
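If it helps, here is a minimal sketch of a block-matching disparity map using OpenCV's Python bindings (the file names are placeholders, and the images are assumed to already be a rectified stereo pair):

```python
import cv2

# Load a rectified stereo pair as 8-bit grayscale (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: numDisparities must be a multiple of 16, blockSize odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # fixed-point result, 16x the pixel disparity

# Depth follows from disparity: Z = f * B / d, with focal length f in pixels
# and baseline B in whatever unit you want Z in.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```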
Best Answer
[The noise-floor analysis, used to predict the minimum detectable energy and thus the range, is at the end of this answer.]
As part of robot-positioning research, about 30 years ago I was detecting IR LED energy at 30 foot ranges (the distance in my backyard), with detection indicated by an LM567 logic-output enabling a red LED to glow.
My collaborator and I moved the transmitter around: an IR LED running 10 mA of diode current (about 15 mW electrical power, so roughly 1-5 mW optical), emitting into a +-18 degree beam width, with the 10 mA switched by an NE555 oscillating at 30 kHz with some duty cycle. We moved the transmitter until it was aligned with the RX sensor in both boresight and angle of deflection; at 30 feet, outside in sunlight, the red LED would change state, indicating the receiver was "seeing" the transmitter energy.
The receiver had a DC-nulling circuit wrapped around a phototransistor to attenuate the sunlight. There were various DC-blocking filters between the phototransistor and the first stage of gain, so the sunlight attenuation did not need to be complete; the response to DC (sunlight) only needed to keep the sensor output away from the rails (that is, still able to respond "linearly").
The first stage of gain was discrete, crafted to be "low noise" and to have a somewhat logarithmic/compressive gain curve.
Then about 20 dB of opamp gain (using an LM324) was added.
The detector was the LM567 tone decoder, with a potentiometer set so it free-ran at, or very near, the 30 kHz not-really-square wave of the transmitter.
[Schematic created using CircuitLab]
Now you need to compute the noise floor of the phototransistor, given the 1 Mohm base resistors, and compute the LM567's noise bandwidth and the signal-to-noise ratio needed for a reliable "logic output".
Given these numbers, we can begin to define "Almost zero".
=============================
Here is my noise analysis. Assume the 1 Mohm in the base of the phototransistor sets the noise floor. A 1 kohm resistor has a thermal-noise density of about 4 nV/rtHz, so a 100 Hz bandwidth produces about 40 nV RMS of noise. For 1 Mohm, the noise voltage rises by sqrt(1 Meg / 1 k) = sqrt(1,000), about 31.6X, to roughly 1.2 uV RMS. Into a short (the Rin of the transistor's base may be quite low), the noise CURRENT is 1.2 uV / 1 Mohm, or 1.2 pA RMS.
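As a check on those numbers, here is a small sketch of the thermal-noise arithmetic (room temperature and the 100 Hz bandwidth are the same assumptions as above):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assume room temperature, kelvin

def thermal_noise_voltage(r_ohms, bandwidth_hz):
    """RMS Johnson (thermal) noise voltage of a resistor over a given bandwidth."""
    return math.sqrt(4 * k_B * T * r_ohms * bandwidth_hz)

v_1k = thermal_noise_voltage(1e3, 100)   # ~41 nV RMS, the "40 nV" above
v_1m = thermal_noise_voltage(1e6, 100)   # ~1.3 uV RMS, the "1.2 uV" above
i_noise = v_1m / 1e6                     # short-circuit noise current, ~1.3 pA RMS
print(v_1k, v_1m, i_noise)
```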
Assume the tone-decoder's phase-locked-loop bandwidth is 100 Hertz, and take the noise floor at the base of the phototransistor to be 40 nV RMS in that bandwidth. Also assume the PLL works down to 0 dB SNR, so we need about 40 nV of signal (ignoring RMS/peak-to-peak factors).
Photodiodes have a conversion efficiency of about 0.5 A of output current per watt of optical power. If this also holds for a phototransistor (the collector-base junction acting as the photodiode), then with Beta = 100 and re (1/gm) of 52 ohms at 0.5 mA collector current (for Vce ~ 4 volts), the Rin is Beta * re = 5,200 ohms, and we need 40 nV across that Rin. The signal current must be
40 nV / 5,200 ohms, which we'll round to 40 nV / 4 kohm;
thus Isignal = 4e-8 / 4e+3 = 10 pA of signal current, or, at 0.5 A/W, an optical power of 20 pW.
[I'm not bothering to be exact about how the 1 Mohm base-resistor noise voltage interacts with the device's Rin.]
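And a quick sketch of that detection-threshold arithmetic, using the same rounded values (the 0.5 A/W responsivity and the 4 kohm rounding are the assumptions stated above):

```python
v_signal = 40e-9            # required signal voltage at the base, V (0 dB SNR assumption)
beta = 100
r_e = 0.0259 / 0.5e-3       # intrinsic emitter resistance 1/gm at 0.5 mA, ~52 ohms
r_in = beta * r_e           # ~5,200 ohms; rounded to 4 kohm above
r_in_rounded = 4e3

i_signal = v_signal / r_in_rounded    # ~10 pA of signal photocurrent
responsivity = 0.5                    # A per W of optical power (assumed)
p_optical = i_signal / responsivity   # ~20 pW of optical power needed at the lens

print(i_signal, p_optical)            # ~1e-11 A, ~2e-11 W
```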
Now we need to take 1 mW of optical power from the LED, give it a +-18 degree beam width, and determine how much of that optical power is captured by a 4 mm phototransistor lens at 30 feet.
Will we find this energy to be about 20 pW?
The effective radiated power is boosted greatly by the LED's lens; we'll assume a factor of (360/36)^2, or 100X, giving 100 mW optical, treated as if radiated uniformly.
We need to compute the surface area of a sphere of radius 10 meters, then compute the power intercepted by the receiver's sensor lens of size 4 mm * 4 mm.
The area of a 10 meter sphere is 4 * PI * Radius^2; rounding 4 * PI to 12, that is 12 * 10 m * 10 m = 1,200 square meters. How many square mm? Scale up by 1,000 * 1,000, to 1,200,000,000 square mm.
Our receiver sensor lens area is 16 square mm. Thus the fraction of the emitted power we capture is 16 / 1,200,000,000, or approximately 1/100,000,000 (no need to be silly about precision here).
With 1 mW of LED optical power (boosted to the effective 100 mW above, and it could be higher, of course), the received power is 0.100 W / 100,000,000, or 1/1,000,000,000 of a watt: a billionth of a watt, or 1,000 pW.
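Here is that link-budget arithmetic as a sketch, with the same rough factors (carried at full precision it lands slightly above the rounded 1,000 pW):

```python
import math

p_led = 1e-3                      # 1 mW optical from the LED
lens_gain = (360 / 36) ** 2       # rough 100x boost assumed for the +-18 degree lens
p_effective = p_led * lens_gain   # ~100 mW, treated as radiated uniformly

radius_m = 10.0                                     # ~30 feet
sphere_area_mm2 = 4 * math.pi * radius_m**2 * 1e6   # ~1.26e9 mm^2
lens_area_mm2 = 4 * 4                               # 4 mm x 4 mm receiver lens

fraction = lens_area_mm2 / sphere_area_mm2   # ~1.3e-8 of the emitted power
p_received = p_effective * fraction          # ~1.3e-9 W; rounded to ~1,000 pW above
print(p_received)
```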
And earlier we computed that about 20 pW is needed for detection.
Anyhow, we now have two numbers that define "almost zero": the roughly 20 pW detection threshold, and the roughly 1,000 pW actually received.
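For what it's worth, the ratio of those two numbers is the margin at 30 feet; a one-liner under the same assumptions:

```python
import math

p_received = 1000e-12   # ~1,000 pW estimated at the receiver lens
p_threshold = 20e-12    # ~20 pW needed for the LM567 to detect the tone
print(10 * math.log10(p_received / p_threshold))   # ~17 dB of margin
```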