Doctoral Dissertation Abstract



No. 125851
Author (kanji): 李,明
Author (Roman alphabet):
Author (kana): リ,メイ
Title (Japanese): 全方位カメラによる農業用車両のナビゲーションのためのロカライゼーションシステムに関する研究
Title (English): Study on Localization System for Agricultural Vehicle Navigation Using Omnidirectional Vision
Report number: 125851
Report number: 甲25851
Date of degree conferral: 2010.03.24
Degree category: Doctorate by coursework
Degree: Doctor of Agriculture
Diploma number: 博農第3551号
Graduate school: Graduate School of Agricultural and Life Sciences
Department: Department of Biological and Environmental Engineering
Dissertation examination committee  Chair: Associate Professor 芋生,憲司, The University of Tokyo
 Professor 横山,伸也, The University of Tokyo
 Professor 酒井,秀夫, The University of Tokyo
 Professor 大政,謙次, The University of Tokyo
 Professor 大下,誠一, The University of Tokyo
Abstract

With the farm labor force dwindling and the demands of precision agriculture needing to be met, the automation of agricultural vehicles is becoming increasingly important. GPS is the most popular method for agricultural vehicle navigation, but it has several limitations. First, its accuracy depends on the positions of the satellites: in rural environments, and especially in valleys, hills or trees can obscure the microwave signals from the satellites, causing a considerable drop in accuracy. To overcome this problem, the GPS sensor must be fused with other sensors, such as dead-reckoning sensors and machine vision sensors. Second, kinematic GPS receivers for agricultural applications are very expensive. Machine vision is also a popular method, while other methods such as GDS are not yet mature enough for practical application. A machine vision sensor is an inexpensive, passive sensor supported by well-established computer vision algorithms and a body of successful research. A GPS guidance system provides absolute positioning based on a GPS base station on the ground and is not affected by changes in the environment. Technically, the best solution is therefore a guidance system that fuses GPS and machine vision.

Recently, omnidirectional vision sensors have become very attractive for autonomous navigation systems. An omnidirectional vision sensor is cheap and consists simply of a digital camera aimed at a catadioptric mirror. The images, obtained without rotating the robot, provide a 360° view of the environment and are therefore insensitive to wheel slippage and small vibrations. Although it is not straightforward to obtain distance estimates from an omnidirectional image because of the shape of the mirror, the apparent angles of objects as seen from the robot are relatively accurate and easy to derive from the image.
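As a concrete illustration of the last point, the short sketch below shows how the apparent (azimuth) angle of an object can be read from an omnidirectional image once the principal point is known. It is a minimal sketch under assumed pixel coordinates and an assumed principal point, not code from the thesis.

```python
import math

def apparent_angle(u, v, cx, cy):
    """Azimuth (rad) of an image point (u, v) about the principal point (cx, cy).

    In a catadioptric image the azimuth of a scene point is preserved along
    the radial direction, so atan2 of the pixel offset is enough.
    """
    return math.atan2(v - cy, u - cx)

# Hypothetical example: principal point assumed at (512, 384) of a 1024x768 image.
theta = apparent_angle(700.0, 420.0, 512.0, 384.0)
print(math.degrees(theta))
```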

Our goal was to develop a localization system that can compensate for GPS in places where hills or trees obscure the satellite signals and cause a considerable drop in accuracy, or substitute for GPS altogether, for use in forage production, in greenhouses, and in precision agriculture. We developed a new localization system based on low-cost omnidirectional vision and artificial landmarks, which estimates an absolute position relative to a landmark-based coordinate system on the ground. In this work, we used an integrated omnidirectional vision sensor consisting of a conventional USB camera and a hyperbolic mirror.

The field localization system for agricultural vehicles in indoor and outdoor environments consists of four artificial landmarks, an omnidirectional vision sensor, a PC, and the operating vehicle (as shown in Fig. 1). Four red artificial landmarks are set up as a rectangle at the corners of the operating area, and the system estimates an absolute position relative to the landmark-based coordinate system on the ground. The principle of localization is that the omnidirectional vision sensor captures an image of the landmarks and estimates their directional angles in the image; the camera location is then estimated from these directional angles. The system is not only a potential substitute for GPS guidance in localizing agricultural vehicles, but can also carry out common computer vision functions to support localization and obstacle avoidance. The analysis of the system's features suggests that agricultural vehicles equipped with it could navigate using their "eyes", much as mammals move around in the world.

Recognizing the landmarks and extracting their features is pivotal to localization. In farm fields, the same crop usually shows a homogeneous color pattern, which makes it very difficult to use natural crop features as landmarks in image processing. An omnidirectional vision sensor with a 360° view can capture landmark images in different directions. To ensure that images captured from any direction yield the same result, the landmarks were designed as right circular red cones. Furthermore, to distinguish the landmarks from environmental interference, we proposed a color model with red and blue patches.

The first algorithm performs landmark tracking and extraction: red pixels beyond a threshold are extracted as a small region, and the center of gravity of this region is computed as a landmark candidate. To further distinguish landmarks from other objects in a complex environment, the blue patch is used as compensation: blue pixels beyond a threshold are extracted as a small region, its center of gravity is computed, and the candidate is accepted or rejected according to the distance between the two centers of gravity. In this way the positions of the four landmarks are obtained.
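A minimal sketch of this red/blue extraction step is given below, assuming the image is an RGB NumPy array; it handles a single candidate region for brevity (the thesis extracts four landmarks), and the channel-dominance test, thresholds, and pairing distance are assumptions rather than the values used in the thesis.

```python
import numpy as np

def color_centroid(img, channel, threshold):
    """Center of gravity (x, y) of pixels whose given channel exceeds a threshold
    and dominates the other two channels (crude color test, an assumption)."""
    c = img[:, :, channel].astype(int)
    others = img.sum(axis=2).astype(int) - c
    mask = (c > threshold) & (2 * c > others)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

def find_landmark(img, red_thr=150, blue_thr=120, max_gap=40.0):
    """Accept a red candidate only if a blue patch centroid lies close to it."""
    red = color_centroid(img, 0, red_thr)    # channel 0 = R
    blue = color_centroid(img, 2, blue_thr)  # channel 2 = B
    if red is None or blue is None:
        return None
    gap = np.hypot(red[0] - blue[0], red[1] - blue[1])
    return red if gap < max_gap else None
```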

One image-processing step is noise smoothing, in which a classic low-pass filter (LPF) is employed to remove high-spatial-frequency noise from the digital images. To improve computational speed, the fractional convolution kernel elements are multiplied by their least common multiple so that the weighted sum can be computed with integer arithmetic, and the sum is then divided by the same factor to recover the true result.
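The integer-arithmetic trick can be sketched as follows: express the fractional kernel weights as integers scaled by a common factor, accumulate an integer weighted sum, and divide by that factor once at the end. The 3×3 kernel below is a common smoothing kernel chosen for illustration, not necessarily the one used in the thesis.

```python
import numpy as np

# Fractional weights 1/16, 2/16, 4/16 scaled by 16 so all arithmetic stays integer.
KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=np.int64)
SCALE = int(KERNEL.sum())  # 16: one division per pixel recovers the true result

def lowpass(img):
    """Integer-arithmetic 3x3 low-pass filter on a 2-D uint8 image."""
    h, w = img.shape
    out = np.zeros_like(img)
    padded = np.pad(img.astype(np.int64), 1, mode="edge")
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 3, x:x + 3]
            out[y, x] = (window * KERNEL).sum() // SCALE
    return out
```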

The second algorithm estimates the position of the vehicle on which the camera is mounted. From the positions of the four landmarks obtained by the tracking and extraction algorithm, the four directional angles of the landmarks about the camera principal point are estimated from a single omnidirectional image. The vehicle location is then estimated as the center of gravity of the four intersections formed by four arcs constructed, by geometric transformation, from the four directional angles. If only three landmarks are found, the directional angles are still used to estimate the vehicle location.
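The sketch below solves the same bearing-only localization problem, but by a generic nonlinear least-squares fit over the vehicle position and heading rather than by the arc-intersection construction described above; the landmark coordinates and the synthetic measurements are assumptions made only for the demonstration.

```python
import numpy as np
from scipy.optimize import least_squares

# Landmark positions in the ground coordinate system (assumed values, metres).
landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])

# Synthetic "measured" directional angles generated from an assumed true pose.
true_pose = np.array([4.0, 3.0, 0.3])  # x, y, heading
measured = np.arctan2(landmarks[:, 1] - true_pose[1],
                      landmarks[:, 0] - true_pose[0]) - true_pose[2]

def residuals(p):
    """Angular residuals for a candidate vehicle pose p = (x, y, heading)."""
    x, y, psi = p
    predicted = np.arctan2(landmarks[:, 1] - y, landmarks[:, 0] - x) - psi
    d = measured - predicted
    return np.arctan2(np.sin(d), np.cos(d))  # wrap differences into (-pi, pi]

sol = least_squares(residuals, x0=[5.0, 5.0, 0.0])
print("estimated position:", sol.x[:2], "heading:", sol.x[2])
```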

In tests, processing a single 1024×768 image on a PC (Intel Core 2, 2.33 GHz) took only about 0.1 to 0.2 s. The tracking and extraction algorithm, the position estimation algorithm, and the image processing (LPF) proved robust.

In the localization algorithm, the principal point of the image is the pivotal quantity, and the other calibration parameters are useful for improving localization accuracy. The calibration method uses a 2D calibration pattern that can be moved freely. Without prior knowledge of the motion, the boundary ellipse of the catadioptric image and the field of view (FOV) are used to obtain the principal point and focal length. The explicit homography between the calibration pattern and its virtual image is then used to initialize the extrinsic parameters. Finally, the intrinsic and extrinsic parameters are refined by nonlinear optimization. Experimental results show that the calibration method is feasible and effective. In localization experiments, calibration provided the principal point and improved localization accuracy by about 1.6 cm in a 0.9×1.8-m area, so the benefit of calibration is clear.
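The role of the image boundary in fixing the principal point can be illustrated with the simplified sketch below, which fits a circle, rather than the general ellipse used in the thesis, to sampled boundary points of the mirror image by linear least squares; the boundary points are synthetic and the numbers are assumptions.

```python
import numpy as np

def fit_circle(points):
    """Least-squares circle fit (Kasa method): returns the centre (cx, cy) and radius."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return (cx, cy), np.sqrt(c + cx ** 2 + cy ** 2)

# Synthetic boundary of the mirror image: radius 350 px around an assumed
# principal point (515, 390), with a little pixel noise.
t = np.linspace(0.0, 2.0 * np.pi, 200)
pts = np.column_stack([515 + 350 * np.cos(t), 390 + 350 * np.sin(t)])
pts += np.random.normal(scale=0.5, size=pts.shape)

centre, radius = fit_circle(pts)
print("estimated principal point:", centre, "boundary radius:", radius)
```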

For fast and accurate self-localization in agriculture, artificial landmarks can be used very efficiently in the natural environment. Based on the proposed artificial color landmark model, we considered how to balance the landmark-to-camera distance, the landmark height, and the camera height so as to enlarge the usable application area. We analyzed theoretically the need to balance camera height and landmark height, and the experiments show that adjusting them can indeed enlarge the application area for vehicle localization; the landmark size can then be chosen from the relations between the landmark's image size and, respectively, the landmark-to-camera distance, the landmark height, and the camera height.
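The geometric trade-off behind this balancing can be sketched with a simple model: the vertical angular extent that a landmark occupies in the image depends on the horizontal distance, the landmark height, and the camera height. The function below uses a point-optics approximation and assumed dimensions; it illustrates the trade-off only and is not the projection model of the hyperbolic mirror used in the thesis.

```python
import math

def angular_extent(distance, landmark_h, camera_h):
    """Vertical angle (rad) subtended by a landmark of height landmark_h standing
    on the ground, seen from a camera at height camera_h and a horizontal distance."""
    top = math.atan2(landmark_h - camera_h, distance)
    bottom = math.atan2(-camera_h, distance)
    return top - bottom

# Assumed dimensions: the extent shrinks with distance and grows with landmark height.
for d in (5.0, 15.0, 25.0):
    print(d, math.degrees(angular_extent(d, landmark_h=0.6, camera_h=1.2)))
```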

To validate the localization system, indoor and outdoor experiments were carried out to verify its feasibility and effectiveness in both environments. Because an agricultural vehicle often operates on uneven ground and vibrates, camera tilt experiments were also conducted to measure the errors caused by the tilt angle. The indoor experiments were conducted under daylight lamps in a 5.8×3.53-m rectangular area of the laboratory, and the outdoor experiments under natural sunlight in a 50×50-m square area. The indoor results showed that the maximum and mean position errors were less than 8 cm in a well-lit, small environment (as shown in Tab. 1). The outdoor results (as shown in Tab. 2) showed maximum and mean position errors of about 46.96 and 31.99 cm, respectively. The camera tilt experiments (as shown in Tab. 3) showed that the tilt angle affects the errors, but not to a significant degree, so it is not necessary to compensate for the errors caused by camera tilt. In conclusion, the system is feasible and is a potential complement or substitute for GPS in agricultural vehicle navigation in both indoor and outdoor environments for our purposes.

We also introduced a new localization method for field roads in agricultural areas, using the omnidirectional camera with two landmarks. Image processing extracts the landmark candidates in the image and estimates the image distance between each landmark and the camera. The localization algorithm then estimates the absolute location of the vehicle from a computational model relating image distance to spatial distance. Experimental results (as shown in Tab. 4) show a mean distance error of about 20 cm in a 20-m road test. The proposed method is feasible and effective for field road navigation of agricultural vehicles.
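One simple way to realize such a distance computational model is to calibrate the relation between the radial image distance of a landmark and its true spatial distance at a few known positions and then interpolate. The sketch below fits a low-order polynomial to synthetic calibration pairs; the pairs and the polynomial degree are assumptions, not the model derived in the thesis.

```python
import numpy as np

# Assumed calibration pairs: radial image distance (px) versus spatial distance (m).
image_dist = np.array([300.0, 240.0, 190.0, 150.0, 120.0])
spatial_dist = np.array([2.0, 4.0, 6.0, 8.0, 10.0])

# Fit a quadratic mapping from image distance to spatial distance.
model = np.poly1d(np.polyfit(image_dist, spatial_dist, deg=2))

# Usage: estimate the spatial distance of a landmark observed at 205 px.
print(model(205.0))
```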

On the whole, we divide agricultural vehicle localization into two solutions for realizing navigation: field localization and field road localization. This study mainly developed a localization system for agricultural vehicles in indoor and outdoor fields; we also developed a localization system for agricultural vehicles on field roads. Both use an omnidirectional vision sensor and artificial landmarks, and both are simple to construct and easy to operate.

Fig. 1 System architecture

Tab. 1 Errors in x, y and distance (D) in the indoor experiment

Tab. 2 Errors in x, y and distance (D) in the outdoor experiment

Tab. 3 Errors in distance (D) relative to the zero-degree position for varying tilt angles

Tab. 4 Errors in x, y and distance (D) in the 20-m road experiment

Summary of the Dissertation Review

The objective of this research is to develop a machine-vision-based in-field positioning system for agricultural vehicles. For unmanned vehicles aimed at low-cost agriculture, in-field position detection is a key technology. Even for manned vehicles, in operations such as grassland work, where traces of the work are hard to see, it is difficult to keep track of the position of the moving vehicle, so a positioning device is useful. Moreover, in precision agriculture, which has been spreading in recent years, the current position of the vehicle must be measured automatically, whether the vehicle is manned or unmanned, in order to create growth maps and control implements. GPS-based positioning is used for these purposes in some cases, but GPS is expensive and detection becomes impossible when obstacles such as mountains or trees surround the site. Furthermore, there is no guarantee that GPS, which is operated by the United States, can continue to be used free of charge in the future. For these reasons, a lower-cost system that is not affected by the environment is desired. In this research, a new vehicle positioning system was constructed that uses a relatively inexpensive omnidirectional vision sensor to photograph landmarks placed in the field and computes the current position from the resulting images. The omnidirectional vision sensor combines a hyperboloidal mirror with a digital camera and captures a full 360° view of the surroundings in a single image.

The dissertation consists of nine chapters. Chapter 1 reviews the significance and background of the research, the state of practical application, and previous studies on autonomous navigation systems for agricultural vehicles and the in-field positioning technology they require. An autonomous navigation system consists broadly of navigation sensors, a positioning program, path planning, and a vehicle controller, and domestic and international research on each of these technologies is covered almost exhaustively. Chapter 2 then discusses the problems of current technology and the issues to be solved.

Chapter 3 describes the principle and outline of the positioning method developed in this research. The omnidirectional vision sensor used has a 360° field of view, and the visual information of the entire surroundings is recorded as an image. From this, the direction and distance of objects can be detected, with the directional data being more accurate than the distance data. In this research, multiple landmarks placed at known positions are detected, and the camera position is computed from their directions.

Chapter 4 describes the image-processing algorithms and the concrete computational process of position measurement. In the field, the distance between the camera and the landmarks is large and the landmark images are tiny. In addition, because of changes in the lighting environment and the presence of noise, landmark detection is not always easy. In this research, a highly robust algorithm was constructed by devising an image-processing filter for noise removal, a color detection method, and a way of dealing with cases where some landmark images are missing.

Chapter 5 describes the calibration of the vision sensor. The vision sensor is composed of a mirror and a camera, and the distortion of each, together with misalignment between them, introduces small distortions into the directional data of the image. In a large field this distortion produces errors that cannot be ignored, so correction based on calibration data is important. In this research, a simple and highly accurate calibration method was developed and its effect was confirmed experimentally.

Chapter 6 deals with the camera height and the landmark size. These are important parameters affecting the size of the landmarks in the image and must be set realistically and appropriately. Appropriate values are proposed on the basis of calculations and experiments.

Chapter 7 reports the experimental results of position detection. In experiments conducted on a 50 m square plot, the position detection error was about 34 cm RMS. Considering that the vehicle may tilt, the influence of camera tilt was also examined; a tilt of 5 degrees produced an error of about 19 cm RMS. Depending on the type of operation, this is within an acceptable range, showing that the method has the potential to be applied to actual farm work.

Chapter 8 presents experiments on positioning accuracy when not only the directions of the landmarks but also measured distances are used. The distance data are less accurate than the directional data, but using both makes positioning possible even with a small number of landmarks.

Chapter 9 summarizes the research and discusses future work.

As described above, this research demonstrates that in-field positioning can be achieved with an omnidirectional vision sensor and landmarks, and it is highly original. The developed method can be applied, on fields of realistic size, to the guidance of autonomous vehicles and to vehicle navigation in precision agriculture, making it a highly practical research achievement. Original methods are also used in the image-processing procedures and in camera calibration to improve positioning accuracy, and these are considered to make a substantial academic contribution to the fields of machine vision and image processing. The examination committee therefore unanimously judged this dissertation worthy of the degree of Doctor of Agriculture.
