
A time-of-flight camera suffers from significant range distortion due to the scattering artefact caused by secondary reflections occurring between the lens and the image plane. The beam reflected from foreground objects undergoes multiple reflections within the camera device, introducing parasitic signals that bias the late-arriving, backscattered signals from background targets. These additive signals degrade the depth measurement of farther objects, thus limiting the use of range imaging cameras for high-precision close-range photogrammetric applications. So far, modelling of the scattering distortion has been based on a plausible linear system model using an inverse-filtering approach. The complicated nature of the multiple internal reflections of the signals between the lens and the imager makes the linear system model unsuitable for all scattering environments: even a simple planar foreground scattering surface in front of another planar surface invalidates the deconvolution compensation method for reversing the scattering artefact. In the absence of a strong physical basis, the only alternative is to heuristically quantify the range distortions through exhaustive experimentation. A simple empirical model of the range bias due to scattering distortion, using an analytical curve-fitting method, is presented herein.

Keywords: Range camera, Scattering, Empirical Modeling, Spline Interpolation, Curve-fitting, Surface-fitting

2 Introduction

Three-dimensional range imaging camera systems are a recent development for close-range terrestrial photogrammetric applications. They operate on the phase-shift principle to determine the distance between the target and the camera. Each pixel in the sensor frame independently measures distance and amplitude information of the scene, which is realized through CCD/CMOS lock-in pixel technology (Lange and Seitz, 2001). Unlike 3D laser scanners, a range camera does not need to sequentially scan its field of view to collect spatial and radiometric information. The range and the amplitude information are obtained simultaneously by sampling the returned modulated optical signal at every element location of the solid-state sensor. Like other measuring devices, range cameras suffer from geometric and radiometric distortions which need to be accounted for through a calibration process. The methodology of camera calibration is well established for 2D imaging cameras; however, 3D range imaging cameras cannot be calibrated efficiently using the traditional approach because of complicated systematic biases such as the scattering effect on the range measurements.

Research is underway to develop a calibration procedure for range cameras, either by incorporating range measurements in a self-calibration approach (Lichti, 2008) or by separately modelling the range distortions alongside a standard digital camera calibration (Jaakola et al., 2008; Kahlmann and Ingensand, 2007). In contrast, only a few attempts have been made to quantify the scattering effect caused by multiple signal attenuation. Kavli et al. (2009) and Dubois et al. (2007) have published results on compensating the scattering bias using an inverse-filtering approach, where the inverse filter is essentially defined by trial and error from a Gaussian or empirically defined point-spread function (PSF) approximation. Nonetheless, the linear system model they present is questionable given the complicated and uneven nature of the multiple internal reflections inside the camera system. Therefore, in the absence of a strong physical basis, the only alternative is to empirically formulate the range distortions through exhaustive experimentation (Kahlmann, 2006; Dubois et al., 2007; Kavli et al., 2009).

Section 1 presents a brief description of the complications of the multiple internal reflections occurring within the camera device. Section 2 shows the scattering distortions as a function of different parameters. Section 3 outlines an empirical method of compensating the scattering distortion using an analytical curve-fitting method. Section 4 presents results from the SR4000 range camera.

3.1 Principle of Scattering Effect

In ToF cameras, the range information is obtained by measuring the time delay of the modulated received signal. A time delay of the received signal corresponds to a phase shift of the signal, which is the fundamental principle of phase-shift time-of-flight range measurement. The range of the object from the sensor is calculated using Equation 1,

r_{i,j} = (c / (4 \pi f)) \varphi_{i,j}    (1)

where r_{i,j} is the range for a given pixel, c is the speed of light, f is the modulation frequency of the infrared light, \varphi_{i,j} is the phase delay of the received signal, and (i, j) is the pixel location.
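Equation 1 can be sketched in a few lines of code. This is a minimal illustration, not taken from the paper; the function and constant names are my own.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_range(phase_delay_rad, mod_freq_hz):
    """Equation 1: per-pixel range from the phase delay of the
    modulated signal, r = c * phi / (4 * pi * f)."""
    return C * phase_delay_rad / (4.0 * math.pi * mod_freq_hz)

# A phase delay of pi at 20 MHz modulation corresponds to half the
# unambiguous range c / (2f):
r = tof_range(math.pi, 20e6)  # about 3.75 m
```

A full phase cycle of 2*pi maps to the unambiguous range c / (2f), which is why distant targets wrap around in phase-shift cameras.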

The angular phase shift for closer objects is lower than that for farther objects, as shown in Figure 1, where A, B and C are the angular phase offsets and the corresponding ranges are those of the points Q′, Q and O respectively.

The point Q′ is the displaced point Q when a brighter, closer object O is present in the imaging scene, which causes a scattering range effect on the background object point due to multiple internal reflection of the signals within the camera system, as shown in Figure 2. The multiple internal reflections of the early-arriving signals from the foreground objects attenuate the weak signal from the background object, lowering its angular phase offset and causing a shortened range measurement. The exact nature of the multiple internal reflections is difficult to describe with a physical model because the scattering phenomenon is highly dependent on the imaged scene. Figure 2 shows a schematic representation of how multiple internal light reflections from the closer object could attenuate the actual signal from the far object. The difficulty of measuring the signal attenuation caused by an unknown number of internal reflections limits any perceived physical model of the scattering artefact. For instance, point P can be attenuated by multiple signals due to internal reflections.

According to Dubois et al. (2007), the scattering problem is loosely expressed as a convolution of the input signal with the impulse response of the system in the presence of the scattering bias. For the scattering phenomenon occurring in 3D range cameras, the measured signal m(i, j) in every pixel is equal to the convolution of the input signal s(i, j) and the point-spread function (PSF) of the camera including the scattering bias, h(i, j), as modelled in Equation 2:

m(i, j) = s(i, j) * h(i, j)    (2)
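The convolution forward model above can be sketched as a circular convolution via the FFT. This is an illustrative sketch only; `scattered_measurement` and the delta-PSF sanity check are my own names, not the cited authors' code.

```python
import numpy as np

def scattered_measurement(signal, psf):
    """Forward model of Equation 2: the measured image equals the true
    signal convolved (circularly, via the FFT) with the camera PSF that
    includes the scattering bias."""
    return np.real(np.fft.ifft2(np.fft.fft2(signal) * np.fft.fft2(psf, s=signal.shape)))

# Sanity check: a delta PSF (no scattering) leaves the signal unchanged.
sig = np.arange(16.0).reshape(4, 4)
delta = np.zeros((4, 4))
delta[0, 0] = 1.0
out = scattered_measurement(sig, delta)
```

A broad PSF mixes the strong foreground return into background pixels, which is exactly the additive contamination the paper describes.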

The solution to this problem explicitly requires modelling the scattering PSF of the camera so that a deconvolution method can be employed to undo the effect of the scattering. They used a linear shift-invariant system model to quantify the scattering bias with a blind-deconvolution approach, where the point-spread function of the camera, including the scattering bias, is plausibly defined by trial and error using a Gaussian approximation. This method is limited because no ideal point source of light is available to the camera for measuring the point-spread function, which is fundamental to the linear-system model. Kavli et al. (2009) use the same linear system model to compensate for the scattering distortions in ToF cameras, using generally shaped empirical models for the point-spread function. Nonetheless, the linear system model they present is questionable due to the complicated nature of the multiple internal reflections inside the camera system. An edge-spread function (ESF) experiment using two planar objects separated by a certain distance invalidates the use of the linear system model for compensating the scattering distortion. Figure 3 shows the intensity image of the scattering scene, where a foreground object (a plane board) is placed one and a half meters in front of the background object (a wall).
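The inverse-filtering idea discussed above can be sketched as a regularized (Wiener-style) deconvolution with a trial-and-error Gaussian PSF, in the spirit of Dubois et al. (2007) and Kavli et al. (2009). This is an assumed illustration; `gaussian_psf`, `wiener_deconvolve` and the parameter values are not from the cited works.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Gaussian PSF centred at the origin of a periodic grid; sigma is
    the trial-and-error width parameter."""
    ky = np.minimum(np.arange(shape[0]), shape[0] - np.arange(shape[0]))
    kx = np.minimum(np.arange(shape[1]), shape[1] - np.arange(shape[1]))
    g = np.exp(-(ky[:, None] ** 2 + kx[None, :] ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def wiener_deconvolve(measured, psf, eps=1e-3):
    """Regularized inverse filter undoing a circular convolution with
    the PSF; eps suppresses noise amplification where the PSF spectrum
    is small."""
    H = np.fft.fft2(psf)
    M = np.fft.fft2(measured)
    return np.real(np.fft.ifft2(M * np.conj(H) / (np.abs(H) ** 2 + eps)))

# Sanity check with a delta PSF (identity convolution): the regularized
# inverse returns the input scaled by 1 / (1 + eps).
delta = np.zeros((8, 8))
delta[0, 0] = 1.0
restored = wiener_deconvolve(np.ones((8, 8)), delta, eps=1e-3)
```

This sketch makes the paper's objection concrete: the whole construction presumes a single shift-invariant PSF, which the ESF experiment below contradicts.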

The superimposed image point clouds of the scattering experiment scene, with and without the presence of the scattering object (a foreground planar board), are shown in Figure 4. The linear band of points in the upper portion of Figure 4 is the point cloud of the object scene with only the background wall. The stepped band of points is the point cloud of the object scene with a planar board placed in front of the wall, with about fifty percent of each surface appearing in the sensor plane.

[Figure 4: superimposed point clouds of the two scenes; axes X (mm) versus Range (mm)]

The displacement of the background wall towards the camera when another object (a plane board) is introduced into the imaging scene is clearly visible in Figure 4. This is caused by the scattering phenomenon. Dubois et al. (2007) and Kavli et al. (2009) defined their scattering compensation with a physical model based on a linear shift-invariant system. The linear system model typically requires defining or measuring the point-spread function (PSF) accurately in order to successfully undo the filtering operation. However, direct measurement of the scattering PSF is impossible because no ideal point source of light is available to the camera; often a line-spread function (LSF) or edge-spread function (ESF) is measured to deduce the PSF indirectly. Figure 5 shows the superimposed theoretical and measured ESF obtained from the scattering experiment. The dotted line represents the expected theoretical ESF, whereas the solid line represents the measured ESF obtained by fitting a curve to one row of pixels from the stepped band of points shown in Figure 4.

[Figure 5: theoretical versus measured ESF; axes X (mm) versus Range (mm)]

Preliminary results from the ESF experiments indicated that the measured ESF does not conform to the expected theoretical ESF of the scattering distortion profile. This ESF anomaly questions the relevance of the linear system model for the scattering phenomenon exhibited by the 3D range camera. The scattering phenomenon is a complicated event, owing to the multiple reflections occurring inside the camera, and may not be correctly described by a linear system model. In the absence of a concrete physical scattering model, it is imperative to explore empirical methods of modelling the scattering distortion through exhaustive experimentation. This paper presents an empirical model of the range bias due to scattering distortion using an analytical curve-fitting method.

4.1 Two Planar Objects Experiment

SR3000 and SR4000 range cameras were used in exhaustive experimentation for modelling the scattering artefact. The experimentation involved imaging a planar wall surface with and without the presence of another, highly reflective planar object positioned at incremental lateral and forward distances from the range camera. Five forward distances were chosen, from 140 to 300 centimeters at 40-centimeter intervals from the range camera. Nine lateral positions were chosen at each forward distance, where images were taken with the foreground object occluding from 10 to 90 percent of the frame in 10 percent increments. The experiment was conducted only up to four meters of camera range because no foreground planar object large enough to cover the whole sensor frame was available. At each location, multiple images were taken to reduce the random noise of the system. Images were captured at four different integration times, 10 ms, 24 ms, 36 ms and 51.2 ms, for all scenes. Figure 6 shows the experimental setup for the scattering experiment. A white projector screen of 96-inch-square size was placed in front of the background wall surface, acting as the scattering object.
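The capture grid described above (five forward distances, nine occlusion levels, four integration times) can be enumerated in a few lines; the variable names are illustrative only.

```python
forward_cm = list(range(140, 301, 40))   # 140, 180, 220, 260, 300 cm
occlusion_pct = list(range(10, 91, 10))  # 10 % .. 90 % of the frame occluded
integration_ms = (10, 24, 36, 51.2)      # camera integration times

scenes = [(d, p, t)
          for d in forward_cm
          for p in occlusion_pct
          for t in integration_ms]
# 5 forward positions x 9 lateral positions x 4 integration times
# = 180 scattering scenes, each captured multiple times to average
# down random noise
```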

4.2.1 SR3100 SwissRanger Camera

The scattering artefact in the SR3100 range camera appears as a range and amplitude bias on the background objects. The measured range and amplitude for the background objects in the presence of a scattering foreground object are lower than the actual measurements without the scattering object. The following figures clearly show the effect of the scattering artefact. Figure 7 shows the range and amplitude bias of one row of pixels as a function of the surface area of the scattering object. The camera and the background object are fixed 380 cm apart. The scattering object, located at a distance of 140 cm from the camera, is moved laterally in ten percent increments. The scattering edge is defined as the location of the edge of the foreground object where it overlaps with the background object. The trend is clearly visible: the range and amplitude bias increase with increasing surface area of the scattering object. Additionally, the range and amplitude bias have a clear linear relationship, which suggests a proportional change in the phase shift of the attenuated signal as a result of the amplitude attenuation caused by the scattering artefact.

[Figure 7 panels: range bias and amplitude bias versus X (mm), with curves for scattering-object surface areas from 10 % to 90 %]

Figure 7: Range and amplitude bias as a function of the surface area of the scattering object


Figure 8: Scattering-induced bias when the camera and the scattering object are at 300 and 200 cm from the background object


Figure 9: Scattering-induced bias when the camera and the scattering object are at 380 and 200 cm from the background object

Figures 8 and 9 show the range and amplitude bias as a function of the integration time used for the data capture. All four integration times, across the different scattering scene environments, produce almost the same range bias, which suggests that the scattering effect on range measurements is invariant to the integration time. The amplitude bias due to scattering, however, does depend on the integration time. A monotonic relationship between the integration time and the scattering-induced amplitude bias is clearly observed, because more photons impinge on the sensor receptor at higher integration times, recording a higher amplitude value.

[Figure 10 panels: range bias and amplitude bias versus X (mm), with curves for scattering-object distances of 140, 180, 220, 260 and 300 cm from the camera; the scattering edge is marked]

Figure 10: Range and amplitude bias as a function of the distance between the camera and the scattering object

Figure 10 portrays the range and amplitude bias of one central row of pixels as a function of the distance of the scattering object from the camera. The camera and the background object are fixed 380 cm apart. The scattering object is moved in 40-centimeter increments from 140 to 300 centimeters from the camera. It is observed that the range and amplitude bias increase with increasing distance between the background and the scattering objects. This is expected because the power of the signals decays as the inverse square of the distance. When the scattering object is closer to the camera relative to the background object, the light reflected from the scattering object has more power than the light reflected from the farther object, causing greater signal attenuation and a proportionally larger scattering bias.
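The inverse-square argument can be made concrete with a toy calculation. This is an illustration only, assuming equal reflectivity for both surfaces.

```python
def relative_power(d_near_m, d_far_m):
    """Ratio of received signal power from a near target versus a far
    target under the 1/d^2 decay discussed above."""
    return (d_far_m / d_near_m) ** 2

# Foreground board at 1.4 m versus background wall at 3.8 m:
ratio = relative_power(1.4, 3.8)  # roughly 7.4x stronger near return
```

The near return thus dominates the mixed signal at contaminated pixels, which is why the weaker background return is the one that gets biased.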

Figure 11: Range bias when the camera is at 140 cm and 380 cm from the scattering and background objects respectively

Figure 12: Range bias when the camera is at 240 cm and 380 cm from the scattering and background objects respectively

The range bias due to scattering is the subject of interest in this study. Dubois et al. (2007) and Kavli et al. (2009) reported a maximum of 40 centimeters of range bias due to scattering in their experiments. In contrast, this study has shown that the scattering-induced range bias can reach up to 250 centimeters in the presence of a highly reflective, large-surface-area scattering object when the scattering and background objects are separated by an appreciable distance, as shown in the left image of Figure 11. Figures 11 and 12 show the variation in range bias for different scattering scene environments. In both figures it is clearly visible that the scattering-induced range bias is greater in the periphery than in the inner portion of the image plane. This is due to the greater power loss of the reflected signal at the periphery than at the middle of the target scene. Jaakola et al. (2008) reported an additional power loss of the SR3100 range camera at the periphery, beyond the cosine-fourth power loss observed in standard optical systems.

In Figures 7 to 12, the scattering-induced bias is observed to be greater than the inherent system noise depicted in Figure 13, where twenty consecutive images were used to evaluate the range and amplitude variation of a fixed target scene. The range noise is within 400 mm and the amplitude noise is within 300 16-bit quantized values.
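The noise assessment from repeated captures can be sketched as a per-pixel sample standard deviation over the image stack; the function name and synthetic data are illustrative.

```python
import numpy as np

def stack_noise(frames):
    """Per-pixel sample standard deviation over a stack of repeated
    captures of a fixed scene; frames has shape (n_images, rows, cols)."""
    return frames.std(axis=0, ddof=1)

# Example: a synthetic 20-frame stack in which one pixel fluctuates
# and another is perfectly stable.
frames = np.zeros((20, 2, 2))
frames[:, 0, 0] = np.arange(20.0)
noise_map = stack_noise(frames)
# e.g. for real data: noise_map = stack_noise(np.stack(range_images))
```

Comparing such a noise map against the bias maps of Figures 7 to 12 is what justifies calling the scattering effect significant.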

4.2.2 SR4000 SwissRanger Camera

Chiabrando et al. (2009) reported an absence of scattering distortions for the SR4000 range camera, a fourth-generation range camera succeeding the SR3100. Similarly, this study did not observe any significant range or amplitude bias, unlike the scattering-induced distortions observed in the previous-generation camera. Figures 14 and 15 show the scattering-induced range and amplitude bias for the SR4000 range camera, demonstrating that the scattering artefact is greatly reduced or eliminated, as the biases are within the noise of consecutive images of the range camera shown in Figure 15. Twenty consecutive images were used to evaluate the differences in range and amplitude measurements of the same target scene between consecutively taken image pairs. The range noise is within 100 mm and the amplitude noise is within 1000 16-bit quantized values. It is, however, unknown how the scattering artefact in the SR4000 was rectified, or at least reduced to within the system noise, by the manufacturer, whether through software implementation or hardware consolidation.

[Figure 14 panels: range bias and amplitude bias versus X (mm) for the SR4000, with curves for scattering-object surface areas from 10 % to 50 %]

Figure 14: Range and amplitude bias as a function of the surface area of the scattering object for SR4000

[Figure 15 panels: range bias and amplitude bias versus X (mm) for the SR4000, with curves for scattering-object distances of 160, 210 and 300 cm from the camera; the scattering edge is marked]

Figure 15: Range and amplitude bias as a function of the distances of the scattering object from the camera for SR4000

5 Methodology

5.1 Smoothing: 3D Surface Fitting

A smoothing ridge estimator is used to fit a 3D surface to 2D gridded data points. The elevation at unknown data points is estimated using linear interpolation on a triangular mesh. Equation 2 gives the linear interpolation method for the triangulated surface depicted in Figure 17.

The smoothing is achieved by a spring-beam approximation in which the data points are connected to a flexible beam by springs. A flexible beam is generally modelled with a cubic spline, and the function to minimize is the potential energy stored in the system, which arises from both the extension of the springs and the bending of the beam. The potential energy in a spring is proportional to the square of its extension, which is equivalent to an L2 norm, and this norm is weighted in accordance with the smoothing constraint. Thus the ridge estimator used for fitting a homeomorphic triangular mesh is biased towards smoothness through the weighting of the L2 norm. Figure 18 shows a schematic description of the spring-beam approximation on scattered discrete data points.
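A one-dimensional sketch of the spring-beam ridge idea may help (my own illustration, not the paper's implementation): data points are attached by springs (an L2 misfit) to a beam discretized at grid nodes, and a second-difference penalty plays the role of the bending energy, weighted by a smoothing parameter.

```python
import numpy as np

def spring_beam_fit(x, y, grid, lam=1.0):
    """Ridge estimator: minimize ||A z - y||^2 + lam * ||D z||^2, where
    A linearly interpolates the beam values z at the data abscissae
    (the springs) and D is the second-difference operator (the bending
    energy of the beam)."""
    n = len(grid)
    idx = np.clip(np.searchsorted(grid, x) - 1, 0, n - 2)
    t = (x - grid[idx]) / (grid[idx + 1] - grid[idx])
    A = np.zeros((len(x), n))
    A[np.arange(len(x)), idx] = 1.0 - t
    A[np.arange(len(x)), idx + 1] = t
    # Second-difference operator: rows are [1, -2, 1] stencils.
    D = np.diff(np.eye(n), n=2, axis=0)
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

# Example: straight-line data is recovered exactly, since the bending
# penalty does not act on linear trends.
grid = np.linspace(0.0, 1.0, 5)
xs = np.linspace(0.0, 1.0, 11)
z = spring_beam_fit(xs, 2.0 * xs + 1.0, grid, lam=1.0)
```

The 2D gridded case adds a dimension (triangular-mesh interpolation for A and a discrete Laplacian for D) but keeps the same normal-equations structure.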

An approximation of the 3D surface greatly reduces the noise of the system as shown in Figure 19. The proposed scattering compensation model is based on the smoothed values of the scattering induced range bias.


From the surface fitting, range-bias surfaces are obtained for different positions of the scattering object. A piecewise cubic polynomial interpolation is then used to approximate the value at the required distance, using the corresponding pixel values from all the available smoothed surfaces. Mathematically, the cubic spline is defined by the cubic polynomial in Equation 3,

S_i(x) = a_i + b_i (x - x_i) + c_i (x - x_i)^2 + d_i (x - x_i)^3,  x_i <= x <= x_{i+1}    (3)

which is defined on each sub-interval, where i = 1, 2, ..., N-1. For N points there are N-1 cubic polynomials with 4(N-1) parameters.

Constraining Equation 3 with the following four conditions gives a unique solution: (i) the spline passes through all data points; (ii) the first derivative is continuous at all interior points; (iii) the second derivative is continuous at all interior points; and (iv) the boundary conditions at the end points are known. When the boundary conditions at the end points are defined as in Equation 4, requiring the third derivative to also be continuous at the second and second-to-last knots,

S_1'''(x_2) = S_2'''(x_2),  S_{N-2}'''(x_{N-1}) = S_{N-1}'''(x_{N-1})    (4)

the spline is called a not-a-knot spline.

When the first derivatives at the end points are fixed to scalar values, the spline is called a clamped spline. The not-a-knot spline was chosen over the clamped spline because it fitted the data points better, as shown in Figure 20.
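The two end conditions can be compared with SciPy's `CubicSpline` (a quick illustration; the sample data are made up). Samples of a single cubic are reproduced exactly by the not-a-knot spline, whereas a clamped spline with incorrect end slopes is not.

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = x ** 3 - 2.0 * x  # samples of one underlying cubic

nak = CubicSpline(x, y, bc_type='not-a-knot')   # Equation 4 end conditions
clamped = CubicSpline(x, y, bc_type='clamped')  # S'(x_1) = S'(x_N) = 0

# The not-a-knot spline recovers the underlying cubic between knots;
# the clamped spline is forced to flat slopes at the ends.
val = nak(1.5)  # equals 1.5**3 - 2*1.5 = 0.375
```

This property is one practical reason not-a-knot is a common default end condition when no derivative information is available.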

Figure 20: Cubic spline fitting (data points with not-a-knot and clamped/complete spline fits)

A simple scattering compensation model based on the analytical curve-fitting method is proposed. Two different models are used to compensate for the scattering-induced range bias on the background object. The flowchart shown in Figure 21 describes the two models.

Figure 21: Flow diagram of the two different scattering compensation models

Model I is a local compensation model defined only at a particular distance from the camera, where lateral scattering scenes are available for different percentage surface areas of the scattering object. The approximation of the required surface at a particular surface area of the scattering object is achieved by computing the value at each pixel location, using the corresponding pixel values of the smoothed surfaces available at the different positions of the scattering object, with not-a-knot spline interpolation. Figures 22 and 23 show the superimposed actual and approximated surfaces, and the success rate of the scattering compensation, at 53.4 % and 55.7 % surface area of the scattering object, when the camera and the scattering object are at 380 and 160 centimeters, and 380 and 200 centimeters, respectively, from the background object. The histogram is obtained by computing the percentage difference in range bias between the actual and compensated surfaces. Overall, scattering compensation from 40 % to more than 90 % was achieved for the five interpolations carried out in this experiment.
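Model I's per-pixel interpolation can be sketched as follows. This is an illustration under assumed array shapes, not the authors' code: the smoothed bias surfaces at known surface-area percentages are stacked, and a not-a-knot cubic spline is evaluated per pixel along the surface-area axis.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def model1_surface(areas_pct, surfaces, target_pct):
    """surfaces has shape (n_areas, rows, cols): the smoothed range-bias
    surfaces at the known surface-area percentages. Returns the
    interpolated (rows, cols) surface at target_pct via a per-pixel
    not-a-knot spline along the surface-area axis."""
    spline = CubicSpline(areas_pct, surfaces, axis=0, bc_type='not-a-knot')
    return spline(target_pct)

# Example: synthetic bias surfaces that scale linearly with surface
# area are interpolated exactly.
areas = np.arange(10.0, 91.0, 10.0)                 # 10 % .. 90 %
stack = areas[:, None, None] * np.ones((9, 2, 2))   # toy (rows, cols) = (2, 2)
approx = model1_surface(areas, stack, 53.4)
```

The compensated range is then the measured range minus the interpolated bias surface, applied pixel by pixel.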

[Figure 22 panels: superimposed actual and interpolated range-bias surfaces (range bias in mm over X, Y in mm) and histogram of the compensation success rate (percent compensated); negative percentages are over-compensated, positive under-compensated]

Figure 22: Camera and scattering object at 380 and 160 cm respectively from the background object

[Figure 23 panels: superimposed actual and interpolated range-bias surfaces (range bias in mm over X, Y in mm) and histogram of the compensation success rate (percent compensated); negative percentages are over-compensated, positive under-compensated]

Figure 23: Camera and scattering object at 380 and 200 cm respectively from the background object

Model II is a general compensation model defined at all distances and all percentage surface areas of the scattering object from the camera, where both lateral and longitudinal scattering scenes are available. The approximation of the surface at the required distance from the camera, for a particular surface area of the scattering object, is achieved in two steps: first, intermediary surfaces at the required percentage surface area of the scattering object are computed using the lateral scenes; second, from the intermediary surfaces obtained at the desired percentage surface area, a new surface is interpolated at the required longitudinal distance of the scattering object from the camera. Figure 24 shows the superimposed actual and approximated surfaces, and the success rate of the scattering compensation, at 53.4 % surface area of the scattering object, when the camera and the scattering object are at 380 and 120 centimeters from the background object, using all the available longitudinal and lateral datasets. This compensation model corrected from 20 % to 80 % of the scattering distortion. The lower success rate is because only a sparse dataset was available for the interpolation, which biased the spline model towards overestimation.
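The two-step procedure of Model II can be sketched in the same per-pixel spline style (again an illustration under assumed array shapes, not the authors' code):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def model2_surface(dists_cm, areas_pct, surfaces, target_dist, target_pct):
    """surfaces has shape (n_dists, n_areas, rows, cols).
    Step 1: at every distance, interpolate along the surface-area axis
    to target_pct (the intermediary surfaces from the lateral scenes).
    Step 2: interpolate the intermediary surfaces along the distance
    axis to target_dist."""
    intermediary = CubicSpline(areas_pct, surfaces, axis=1)(target_pct)
    return CubicSpline(dists_cm, intermediary, axis=0)(target_dist)

# Example with synthetic bias surfaces linear in both distance and
# surface area; both spline passes then interpolate exactly.
dists = np.array([140.0, 180.0, 220.0, 260.0, 300.0])
areas = np.arange(10.0, 91.0, 10.0)
stack = (dists[:, None, None, None] + areas[None, :, None, None]
         + np.zeros((5, 9, 2, 2)))
approx = model2_surface(dists, areas, stack, 175.0, 53.4)
```

With only five distance samples, the second spline pass is poorly constrained, which is consistent with the overestimation reported for the sparse dataset.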

[Figure 24 panels: superimposed real and interpolated range-bias surfaces and histogram of the compensation success rate; negative percentages are over-compensated, positive under-compensated]

Figure 24: Scattering compensation approximated for the scattering object at 260 cm from the camera using the full dataset