

A dual-spectral camera system for paddy rice seedling row detection


Yutaka Kaizu a,*, Kenji Imou b

a Graduate School of Agriculture, Hokkaido University, Kita-9, Nishi-9, Kita-ku, Sapporo-shi, Hokkaido 060-8589, Japan
b Graduate School of Agriculture, The University of Tokyo, 1-1-1, Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan

* Corresponding author at: Kita-9, Nishi-9, Kita-ku, Sapporo-shi, Hokkaido 060-8589, Japan. Tel.: +81 11 706 2568; fax: +81 11 706 2568. E-mail address: kaizu yutaka@yahoo.co.jp (Y. Kaizu).
doi:10.1016/j.compag.2008.01.012
© 2008 Elsevier B.V. All rights reserved.

ARTICLE INFO

Article history:
Received 1 March 2007
Accepted 8 January 2008

Keywords:
Image segmentation
Near-infrared
Rice seedling
Paddy field
Dual-spectral camera
Row detection

ABSTRACT

A new method is presented for detecting rows of rice seedlings to facilitate the navigation of rice transplanters. Generally, an independent NIR or RGB camera is used as a vision sensor for agricultural vehicles; however, strong reflections on the water surface make row detection more difficult in flooded paddy fields compared to dry fields. To solve this problem, we developed a dual-spectral camera system that consists of a pair of low-cost monochrome cameras with optical filters. Images of the same location at different wavelengths can be taken simultaneously in real time. An experiment conducted under cloudy conditions showed that this system could reduce water-surface noise and clearly detect seedling rows.
© 2008 Elsevier B.V. All rights reserved.

1. Introduction

The image segmentation of plants from background in agricultural fields is an essential technique for estimating the quantity of nitrogen in plants (Tian et al., 2005), predicting yield (Chang et al., 2005), detecting weeds, measuring plant growth and navigating agricultural vehicles (Kaizu et al., 2004). However, it is not easy to determine whether a pixel of interest belongs to plant or background under time-variable outdoor light conditions. Furthermore, detecting seedlings in a paddy field is more difficult than in a general field because the sun, clouds, blue sky and other objects outside the paddy field (buildings, trees and mountains) are strongly reflected on the water surface. A machine-vision system that works well under changing light conditions has long been needed.

Research on the image segmentation of plants can be divided into two categories. The first uses an RGB (red, green and blue) color camera. In a field, the plants are green

and the background soil is black or brown. This difference is utilized for segmentation. Nowadays, an RGB image can be easily obtained using a home camcorder, digital still camera, security camera or industrial camera; therefore, this is a popular approach among researchers. Philipp and Rath (2002) transformed the RGB color space into various color spaces such as i1i2i3, i1i2i3 new, HSI, HSV and LAB and then compared their separation capability. They concluded that the linearly transformed i1i2i3 new color space was the best choice for binarization. Tian and Slaughter (1998) developed an environmentally adaptive segmentation algorithm to locate tomato seedlings. Steward et al. (2004) distinguished plants from background by reduced-dimension clustering. Kusano et al. (1995) applied HSI transformation and principal component analysis. Their method worked very well under changing light conditions.

The second research category uses a monochrome camera and arbitrary band-pass filters to enhance the contrast



between plants and background. An NIR filter is commonly used together with other band-pass filters, based on the fact that healthy green plants reflect considerable NIR light from 800 to 1400 nm whereas soil reflects little NIR light. Moreover, since healthy plants reflect little red light compared with soil and unhealthy plants, the ratio of NIR and red reflection can in many cases be taken as a vegetation index. NDVI (normalized difference vegetation index) is often used for investigating plant density in satellite and airborne remote sensing. Han et al. (2004) used a band-pass filter with a center wavelength of 800 nm to simply and effectively segment an image into crop rows and ground. Marchant et al. (2001) attached a filter wheel to take three-band (green, red and NIR) images and realized segmentation by combining the images. However, because it was necessary to turn the filter wheel several times for one multi-spectral image, the system was not capable of taking images in real time. A 3-CCD-type multi-spectral (NIR, red and green) camera used to be sold commercially; although it was capable of taking three different spectral images at once, it was too expensive to be used as a vehicle navigation sensor. Lau and Yang (2005) developed a multi-spectral imaging system using an array of commercial monochrome cameras to reproduce the exact colors of objects. They arranged five cameras in parallel with different spectral filters and synthesized the images on the assumption that an object is a plane perpendicular to the optical axis of the cameras. However, if the object plane was inclined towards the optical axis or the object was not a plane, unwanted parallax occurred.

Pioneering research in paddy field rice seedling detection was conducted by Watanabe et al. (1997) and Chen et al. (1997, 1999), who utilized color and morphological characteristics of rice seedling rows. Takahara et al. (2004) succeeded in controlling an autonomous weeding vehicle using machine vision; an RGB camera was used for detecting seedlings. We developed an auto-steering system for a rice transplanter (Kaizu et al., 2004) using an RGB camera system. We found that the b* component of the L*a*b* color space best represented the color of the seedlings. We also found from the results of simulation and experiment that, in order to minimize

heading angle errors, the camera should look as far into the distance as possible. The prototype machine-vision navigation system succeeded in making the transplanter run straight along a seedling row. However, the system did not always succeed in finding the seedlings. One reason was that, if the camera looked far into the distance, superfluous objects were reflected on the water surface and interfered with correct segmentation. Another reason was that the b* component, which ideally is independent of brightness, changed as the camera's exposure time and lens aperture changed. In a paddy field, the brightness of the water surface is much higher than that of the seedlings, so if we set the exposure for the surface, the seedlings lose their color and appear almost black. To overcome these problems, we decided to use NIR and red images simultaneously. The objectives of this research are as follows: (1) to develop a dual-spectral imaging system using low-cost monochrome cameras; and (2) to develop a simple and robust algorithm for detecting rows of rice seedlings in a paddy field.
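As a concrete illustration of the b* feature mentioned above, the following minimal sketch (our own, not the implementation of Kaizu et al., 2004) extracts the b* channel from an 8-bit BGR image with OpenCV. Because the conversion starts from the raw 8-bit values, the channel shifts when the exposure time or aperture changes, which is the brightness dependence described above.

    import cv2
    import numpy as np

    def bstar_channel(bgr_image: np.ndarray) -> np.ndarray:
        # Convert an 8-bit BGR image to L*a*b* and return the b* channel,
        # used here as a simple "greenness" feature for seedlings.
        lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
        return lab[:, :, 2]  # channel order is L, a, b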

2. Materials and methods

2.1. Dual-spectral camera system

Fig. 1 shows a schematic diagram of the dual-spectral camera system. This system consists of a personal computer (Pentium 4, 3.3 GHz, 1 GB RAM), two WAT-535EX monochrome CCD cameras (Watec Co. Ltd.) and two PCI-1409 frame grabber boards (National Instruments Corp.). The PC's OS is Windows XP (Microsoft Corp.). We developed an image processing application using National Instruments LabVIEW 8.0 and Microsoft Visual C++ 6.0. We synchronized the two cameras using a composite video signal to grab both images simultaneously. The resolution and bit depth of an image are 640 × 480 and 8 bits, respectively. Fig. 2 shows an image of the camera unit. The two cameras are arranged at right angles.

Fig. 1 Schematic diagram of dual-spectral camera system.


Fig. 2 Image of camera unit.

Camera 1 has an SC62 short-cut filter (Fujifilm Co. Ltd.) and an IR cut-off filter so that it receives only red light. Camera 2 has an IR76 sharp-cut filter (Fujifilm Co. Ltd.) to capture NIR images. The focal length of the lens is 8 mm. The horizontal and vertical field angles of the lens are 33.1° and 25.0°, respectively, when used with a 1/3-in. CCD. An acrylic glass half-mirror divides the incoming light between the NIR camera and the red camera; it transmits 70% of the incoming light and reflects the rest. Since the reflected image is mirror-inverted, the image processing application flips it for matching. To align the optical axes of the cameras, the NIR camera is mounted on a TD-603 rotary tilting stage (Chuo Precision Industry Co. Ltd.). This stage can adjust the roll, pitch and yaw angles from −3° to 3°. A light shield covers the mirror to prevent the entry of light from the side.

2.2. Segmentation

In addition to the seedlings, various other objects appear in the image of the paddy field. Table 1 shows general objects and their spectral reflectance characteristics. We can infer from this table that pixels fulfilling the following conditions belong to seedlings: (a) the intensity in the NIR image is much higher than that in the red image; and (b) the NIR intensity is high. If we use only the first condition, tree leaves might be detected as seedlings. With only the second condition, the reflections of the clouds, sky and sun on the water surface would also be detected. To extract only the seedlings from the image, we followed the procedure shown in Fig. 3.

Fig. 3 Flow chart of segmentation.

2.2.1. Standardization

We adjusted the lens aperture and exposure time of each camera individually by hand to prevent saturation of the data. After taking the images, we calculated the mean value of each image. Next, we linearly transformed the histogram of each image to produce a mean value of 50. As stated earlier, the soil, clouds, blue sky, sun and buildings that occupy almost the entire image reflect nearly the same amount of NIR and red light; as a result, this operation highlights the seedlings. An RGB camera generally adjusts the white balance by leveling the mean values of the three color channels in a similar way. Fig. 4 shows a standardized NIR image and Fig. 5 shows a standardized red image.

Table 1 Reflectivity of general objects observed in the image of the paddy field

                                    NIR, >760 nm    Red, 660 nm
Directly observed
  Seedlings                         High            Medium
  Straw, residue                    High            High
  Soil on the water surface         Low             Low
  Soil seen through the water       Low             Low
Observed in the reflection
  Clouds                            High            High
  Blue sky                          High            High
  Sun                               High            High
  Tree leaves                       High            Low
  Tree trunk                        Low             Low
  Buildings                         Medium          Medium

Fig. 4 Standardized NIR image.


Fig. 5 Standardized red image.

A comparison of the two shows that the brightness of the seedlings is high in the NIR image and low in the red image. Besides the seedlings, weeds and trees have the same characteristic. These images were taken on a slightly cloudy day.
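One way to realize the linear histogram transformation described above is a simple gain that moves the mean of each 8-bit image to 50; the sketch below is our own illustration under that assumption (the paper does not spell out the exact transform), with clipping back to the 0 to 255 range.

    import numpy as np

    def standardize(image: np.ndarray, target_mean: float = 50.0) -> np.ndarray:
        # Scale an 8-bit grey image so that its mean intensity becomes
        # target_mean, then clip back to the valid 8-bit range.
        mean = float(image.mean())
        if mean == 0.0:
            return image.copy()
        scaled = image.astype(np.float32) * (target_mean / mean)
        return np.clip(scaled, 0, 255).astype(np.uint8)

    # nir_std = standardize(nir_raw)  # standardized NIR image (cf. Fig. 4)
    # red_std = standardize(red_raw)  # standardized red image (cf. Fig. 5)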

2.2.2. Subtraction

To execute the image processing in real time (30 fps), a simple method is desirable. We subtracted the standardized red image from the standardized NIR image:

    I_sub(i, j) = I_NIR(i, j) − I_red(i, j)    (1)

where I_NIR is the pixel intensity of the standardized NIR image, I_red is the pixel intensity of the standardized red image, and (i, j) are the x- and y-coordinates. The result of the subtraction is shown in Fig. 6. We can clearly locate the rows of seedlings. In the left and right parts of the image, some thin lines can be seen. These lines were produced because the two images did not match perfectly: we carefully adjusted the optical axes of both cameras using the rotary tilting stage, but a residual error could still be observed. However, the gap was within a few pixels at most.
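A minimal sketch of Eq. (1), assuming the standardized images are 8-bit NumPy arrays. Computing the difference in signed arithmetic and clipping negative values to zero is our assumption about how an 8-bit result is kept; the function name is ours.

    import numpy as np

    def subtract_images(nir_std: np.ndarray, red_std: np.ndarray) -> np.ndarray:
        # I_sub = I_NIR - I_red (Eq. (1)); signed arithmetic avoids uint8
        # wrap-around, and negative differences are clipped to zero.
        diff = nir_std.astype(np.int16) - red_std.astype(np.int16)
        return np.clip(diff, 0, 255).astype(np.uint8)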

Fig. 6 Subtracted image.

2.2.3. Thresholding

We have prior practical knowledge of the physical relationship between the camera unit and the seedling rows when we transplant seedlings. For example, in Fig. 6, the transplanted rows can be seen only in the right half of the image. We therefore applied a mask image to extract only the part containing the rows before thresholding. Fig. 7 shows the mask image for a right-side camera image. In this experiment, the shape of the mask image was fixed. For the image taken by the camera attached to the other side of the transplanter, a symmetrical mask image would be used.

Fig. 7 Mask image.

Because we tilted the camera unit forward, the brightness of the water surface changed with the distance from the camera; reflectivity changes as the incident angle changes. If we apply a single threshold value to binarize the whole image, only a particularly bright area may be detected. To avoid this problem, we split the image into ten stripes and determined a threshold value for each band. We used the p-tile method to determine the threshold value (Nakamura, 2005). First, we make a histogram of pixel intensity. If the percentage of the area of pixels with intensity higher than a certain value is p or more, we take that value as the threshold (Takagi and Shimoda, 2004). Although this method is very simple, it is advantageous for detecting small but bright seedlings in an image. To ensure successful binarization, we have to know the percentage beforehand. We adopted 5% as the p-value, derived from a 3D scene simulation of a paddy field (Kaizu et al., 2004). Fig. 8 shows the image thresholded from the subtracted image. The seedlings are well detected independent of the distance from the camera system.
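The per-stripe p-tile thresholding can be sketched as follows. This is our own illustration: we assume the ten stripes are horizontal bands (brightness varies with distance, i.e. with image row), that the mask is a 0/255 image like Fig. 7, and that the p-tile statistic is computed only over the masked pixels; function names are ours.

    import numpy as np

    def ptile_threshold(pixels: np.ndarray, p: float = 5.0) -> int:
        # p-tile method: choose the threshold so that roughly the brightest
        # p % of the given pixels are classified as foreground.
        return int(np.percentile(pixels, 100.0 - p))

    def threshold_in_stripes(image: np.ndarray, mask: np.ndarray,
                             n_stripes: int = 10, p: float = 5.0) -> np.ndarray:
        # Binarize an 8-bit grey image stripe by stripe inside the mask region.
        h, _ = image.shape
        binary = np.zeros_like(image)
        edges = np.linspace(0, h, n_stripes + 1, dtype=int)
        for top, bottom in zip(edges[:-1], edges[1:]):
            band = image[top:bottom]
            band_mask = mask[top:bottom] > 0
            if not band_mask.any():
                continue
            t = ptile_threshold(band[band_mask], p)
            binary[top:bottom][band_mask & (band > t)] = 255
        return binary

With p = 5%, roughly the brightest 5% of the masked pixels in each band are kept as candidate seedling pixels.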

2.2.4. AND operation

As previously stated, a tree crown has spectral characteristics similar to those of the seedlings. To eliminate it from the image, we used logical multiplication (an AND operation) of the two thresholded images, one made from the subtracted image and one from the NIR image. Fig. 9 shows the thresholded image made from the NIR image, and Fig. 10 shows the result of the AND operation. In this example, the AND operation was able to remove the poles on the right ridge.
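A minimal sketch of the AND step, chaining the functions sketched earlier (all names are ours); cv2.bitwise_and is one straightforward way to take the logical product of two 0/255 masks.

    import cv2
    import numpy as np

    def combine_masks(binary_subtracted: np.ndarray, binary_nir: np.ndarray) -> np.ndarray:
        # Keep a pixel only if it is bright in the NIR image AND much brighter
        # in NIR than in red (i.e. present in both thresholded images).
        return cv2.bitwise_and(binary_subtracted, binary_nir)

    # Example of chaining the whole pipeline sketched in this section:
    # sub = subtract_images(nir_std, red_std)
    # segmented = combine_masks(threshold_in_stripes(sub, mask),
    #                           threshold_in_stripes(nir_std, mask))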


Fig. 8 Thresholded image with subtracted image.

Fig. 9 Thresholded image with NIR image.

Fig. 10 Final segmented image.

3. Experiment

We tested the dual-spectral camera system on 9 March 2006 in the experimental field of the University of Tokyo. It was a cloudy day. The paddy field is 50 m × 30 m, and there are buildings and trees around the field. We installed the camera system on the side of a prototype rice transplanter. The height of the camera system was 0.6 m and the depression angle was 13°. We changed the side of the camera system according to the side of the seedling rows. The prototype transplanter is shown in Fig. 11. A PC and a monitor were set on the transplanter.

Fig. 11 Image of prototype transplanter.

4. Results and discussion

To confirm the feasibility of the dual-spectral camera system, we compared the results obtained from only the NIR camera with those from the dual-spectral camera.

4.1. Elimination of sky reflection


Fig. 12 shows one series of results. In Fig. 12(c), the reflection of the sky was incorrectly detected on the right side of the rows of seedlings because the percentage of seedling area contained within that stripe was lower than 5%. By taking the AND of the two binary images (Fig. 12(b) and (c)), these pixels were deleted. Additionally, the noise in Fig. 12(b) was also reduced. In order to make the transplanter run automatically and straight along the row, we have to know the angle and displacement between the transplanter and the nearest row.


Fig. 12 Comparison of three segmentation methods for eliminating reflection of the sky: (a) NIR, (b) binary subtracted, (c) binary NIR and (d) AND-operated.

It is expected that this preprocessing will facilitate detection of the nearest row.
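The paper does not give the row-parameter extraction step here; purely as an illustration, one simple way to obtain an angle and lateral offset from the final segmented image is a least-squares line fit to the seedling pixels inside the nearest-row mask region. Function and variable names below are hypothetical, not the authors' method.

    import numpy as np

    def nearest_row_line(segmented: np.ndarray):
        # Fit column = a*row + b to the segmented seedling pixels and return
        # the deviation angle from the image vertical and the column where
        # the fitted line crosses the bottom of the image.
        rows, cols = np.nonzero(segmented)
        if rows.size < 2:
            return None
        a, b = np.polyfit(rows, cols, 1)
        angle = np.arctan(a)
        col_bottom = a * (segmented.shape[0] - 1) + b
        return angle, col_bottom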

4.2. Elimination of tree crowns

Fig. 13 shows images in which the tree crowns were reflected on the water surface. In Fig. 13(b), a portion of the tree crowns can be seen in the upper part of the image. In Fig. 13(c), a smaller area of tree crowns was detected, because the tree crowns had lower intensity in the red image than the seedlings did. In general, the reflection of the leaves of an herbaceous plant is stronger than that of a woody plant in both the visible and the near-infrared range, and we exploited this characteristic. The nearest, leftmost rows are obvious in Fig. 13(d).

Fig. 13 Comparison of three segmentation methods for eliminating tree crowns: (a) NIR, (b) binary subtracted, (c) binary NIR and (d) AND-operated.


Fig. 14 Comparison of three segmentation methods at dusk: (a) NIR, (b) binary subtracted, (c) binary NIR and (d) AND-operated.

4.3. Influence of sunlight intensity

A human operator usually works from morning to dusk. For a machine-vision automated driving system to take over from the human operator, it must be able to detect the rows under low sunlight intensity. We took images at sunset, when the sunlight intensity was 240 lx. Fig. 14 shows the results. Although it was dark, the rows were segmented correctly. Even after sunset, we presume that detection of seedlings would be possible with artificial lighting such as an infrared LED or a halogen lamp.

5. Conclusions

The conclusions derived from this research are as follows:

(a) Using a pair of low-cost monochrome cameras with different spectral filters and a PC, a real-time dual-spectral camera system could be developed.
(b) An NIR image and a red image could be matched well by adjusting the optical axis of the NIR camera. The displacement between the two images was within a few pixels at most.
(c) By applying an AND operation between the two thresholded images made from an NIR image and a subtracted image, the noise originating from the reflection of the sky and tree crowns could be eliminated. When the water was shallow, mud emerged above the water surface; however, it reflected no less NIR and red light than the seedlings did. Even when the mud was very wet and exhibited specular reflection, since it showed the same characteristic as the water surface, it could be properly segmented out by the subtraction operation.
(d) We could successfully detect the seedling rows under cloudy conditions from noon to dusk.

Acknowledgements

We express our appreciation to Noboru Washizu, Shizue Kondo, Hirohi Kimura and Ryuichi Soga, the technical staff of the experimental field of the University of Tokyo, for helping with the equipment preparation. We also thank Kubota Corp. for lending us a rice transplanter.

References
Chang, K., Shen, Y., Lo, J., 2005. Predicting rice yield using canopy reflectance measured at booting stage. Agron. J. 97 (3), 872–878.
Chen, B., Watanabe, K., Tojo, S., Ai, F., Huang, B.K., 1997. Studies on the computer-eye of rice transplant robot. Part 2. Detection of rice plants line using Hough transformation. J. Jpn. Soc. Agric. Mach. 59 (3), 23–28.
Chen, B., Tojo, S., Watanabe, K., Ai, F., Huang, B.K., 1999. Detection of rice seedlings in the image of paddy field: image processing method excluding the effect of light reflection. J. Jpn. Soc. Agric. Mach. 61 (5), 57–63.
Han, S., Zhang, Q., Ni, B., Reid, J.F., 2004. A guidance directrix approach to vision-based vehicle guidance systems. Comput. Electron. Agric. 43 (3), 179–195.
Kaizu, Y., Yokoyama, S., Imou, K., Nakamura, Y., Song, Z., 2004. Vision-based navigation of a rice transplanter. In: Proceedings of the CIGR International Conference (CD-ROM).


Kusano, N., Inou, N., Kitani, O., Okamoto, T., Torii, T., 1995. Image analysis of crop row used for agricultural mobile robot. Part 1. Binarization by color analysis. J. Soc. Agric. Mach. 57 (4), 37–44.
Lau, D.L., Yang, R., 2005. Real-time multispectral color video synthesis using an array of commodity cameras. Real-Time Imag. 11 (2), 109–116.
Marchant, J.A., Anderson, H.J., Onyango, C.M., 2001. Evaluation of an imaging sensor for detecting vegetation using different waveband combinations. Comput. Electron. Agric. 32 (2), 101–117.
Nakamura, Y., 2005. Automatic steering control of a rice transplanter using machine-vision. MS Thesis. The University of Tokyo, Graduate School of Agricultural and Life Sciences.
Philipp, I., Rath, T., 2002. Improving plant discrimination in image processing by use of different colour space transformations. Comput. Electron. Agric. 35 (1), 1–15.

Steward, B.L., Tian, L.F., Nettleton, D., 2004. Reduced-dimension clustering for vegetation segmentation. Trans. ASAE 47 (2), 609–616.
Tian, L.F., Slaughter, D.C., 1998. Environmentally adaptive segmentation algorithm for outdoor image segmentation. Comput. Electron. Agric. 21 (3), 153–168.
Tian, Y., Zhu, Y., Cao, W., 2005. Monitoring leaf photosynthesis with canopy spectral reflectance in rice. Photosynthetica 43 (4), 481–489.
Takagi, M., Shimoda, H., 2004. Image binarization. In: Handbook of Image Analysis, Revised edition. University of Tokyo Press, Tokyo, Japan, pp. 1519–1523.
Takahara, S., Sogo, K., Yamaura, K., 2004. Autonomous vehicle for paddy field weeding by image feedback control. J. Jpn. Soc. Agric. Mach. 66 (2), 45–54.
Watanabe, K., Chen, B., Tojo, S., Ai, F., Huang, B.K., 1997. Studies on the computer-eye of rice transplant robot. Part 1. Detection of the aimed rice plants with image processing. J. Jpn. Soc. Agric. Mach. 59 (2), 49–55.
