Dissertation Abstract



No. 129085
Author (kanji): 宋,泳恩
Author (kana): ソン,ヨンウン
Title (Japanese): 遠隔協働をめざした,多様認知知覚を有するジェスチャーに基づくヒューマンロボットインタフェースに関する研究
Title (English): Study on Gesture based Human-Robot Interface with Multimodal Cognitive Perception for Remote Collaboration
Report number: 甲29085
Date of degree conferral: 2013.03.25
Degree type: Doctorate by course completion
Degree: Doctor of Engineering
Diploma number: 博工第7976号
Graduate school: Graduate School of Engineering
Department: Department of Electrical Engineering and Information Systems
Thesis committee: Chair: Professor 久保田,孝, The University of Tokyo
 Associate Professor 古関,隆章, The University of Tokyo
 Associate Professor 小川,剛史, The University of Tokyo
 Associate Professor 大石,岳史, The University of Tokyo
 Professor 杉本,雅則, Hokkaido University
 Associate Professor 望山,洋, University of Tsukuba
Abstract

This study proposes an intuitive teleoperation scheme that uses human gesture in conjunction with a multimodal human-robot interface. Teleoperation systems now play a decisive role in environments that are dangerous, unstructured, or poorly understood, such as bomb disposal, rescue, and space exploration, by exploiting robots' precision and mechanical strength. Conventionally, however, operators have used joystick-like control sticks or keypads. With such methods the workload grows rapidly as the task becomes more complicated, and the operators must be trained extensively before a session. To solve these problems, gesture capture is applied as an input system so that the robot is controlled by moving simultaneously with the operator's own motions. Early studies captured only the operator's hands to control the robot's end-effectors, using marker-based optical systems or exoskeleton motion-capture devices. Such systems, however, often disturb the operator's natural movements and must be installed beforehand. This underlines the importance of a portable device that can capture whole-body gestures while preserving the operator's natural movement; with such a device, a more natural and intuitive teleoperation system for remote collaboration tasks becomes possible.

To support better perception of remote environments, this study applies a multimodal human-robot interface consisting of immersive 3D visual feedback and vibrotactile feedback. With immersive 3D visual feedback, operators are surrounded by 3D virtual reality (VR), which makes it seem as if they were in a different place. Operators can therefore perceive the remote site almost as the real world when they interact with objects or other people in the virtual world through their own avatars, which makes complicated collaboration tasks between them possible. Immersive 3D VR has one crucial requirement: to reflect the operator in the virtual world as an avatar, the operator's pose must be captured continuously. For this purpose, the intelligent space (iSpace) concept is ideal. The iSpace concept has been developed by the Hashimoto laboratory at the University of Tokyo since 1996. iSpace gives a surrounding space intelligence through distributed intelligent network devices (DINDs), which observe all events in the space, including human movements. The conventional iSpace, however, requires operators to stay within sensor range: because the DINDs are fixed at specific places, it is nearly impossible to move the devices around. To overcome these spatial restraints and make the iSpace system more flexible, a new type of DIND is required. As a solution, this thesis presents the mobile iSpace, a personal portable device that can also provide multimodal feedback.

Further, to cope with the complexity of dynamic daily environments, haptic point-cloud rendering and virtual collaboration are applied to the mobile iSpace. First, the environment surrounding a teleoperated robot is captured with a depth camera and reconstructed as a 3D point cloud. A virtual world is then generated from the point cloud, and a model of the teleoperated robot is placed in it. Operators use their own whole-body gestures to teleoperate the humanoid robot.
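As an illustration of the reconstruction step described above, the following minimal sketch (not code from the thesis; the intrinsic parameters fx, fy, cx, cy are hypothetical Kinect-like values) back-projects a depth image into a 3D point cloud with the standard pinhole camera model:

```python
# Minimal sketch: depth image -> 3D point cloud via the pinhole camera model.
# The intrinsics used in the example are assumed placeholder values.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert an HxW depth image (meters) to an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid coordinates
    z = depth
    x = (u - cx) * z / fx                           # back-project along X
    y = (v - cy) * z / fy                           # back-project along Y
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                 # drop invalid zero-depth pixels

# Example: a synthetic 480x640 frame of a flat surface 2 m away.
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                                  # (307200, 3)
```

In a full pipeline the cloud would additionally be transformed into a common world frame and streamed into the virtual world; that machinery is omitted here.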
The gesture is captured in real time with a depth camera placed on the operator side. The operator receives visual and vibrotactile feedback simultaneously through a head-mounted display and a vibrotactile glove. To enhance the perception of remote environments, a sound-based vibrotactile feedback rendering scheme is newly presented so that operators can recognize textures naturally. The system renders the vibrotactile feedback from the sound generated by actual contact between a human hand and a given object. The feedback is then modulated by the operator's gesture movements, e.g. the direction or velocity of the hand, which makes it possible to present a cognitive haptic illusion to the operator. Moreover, the rendered multimodal feedback parameters are evaluated and used to derive a human stimulus model through a psychophysical study, and the feedback parameters are regenerated on the basis of that model. With the cognitive haptic illusion and the psychophysical study, the system overcomes the limited realism of the portable device. All system components, the human operator, the teleoperated robot, and the feedback devices, are connected through an Internet-based virtual collaboration system for flexible accessibility. This study demonstrates the effectiveness of the proposed scheme through experiments showing that operators can access a remotely placed robot at any time and from any place.
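As a rough sketch of the sound-based rendering idea, the fragment below scales a prerecorded contact-sound waveform by the current hand speed, so that faster strokes produce stronger vibration. The linear gain law and the reference speed ref_speed are assumptions for illustration, not the dissertation's actual rendering scheme:

```python
# Illustrative sketch (assumed parameters): modulate a recorded contact-sound
# waveform by hand speed to drive vibrotactile actuators.
import numpy as np

def render_vibration(contact_sound, hand_speed, ref_speed=0.1):
    """Scale a recorded contact-sound signal by the current hand speed.

    contact_sound: 1-D waveform recorded from a real hand-object contact,
                   normalized to [-1, 1].
    hand_speed:    magnitude of the hand velocity in m/s.
    ref_speed:     hypothetical reference speed at which the waveform
                   plays back unchanged.
    """
    gain = np.clip(hand_speed / ref_speed, 0.0, 2.0)  # capped linear gain
    return np.clip(contact_sound * gain, -1.0, 1.0)   # motor drive signal

# Usage: 50 ms synthetic 250 Hz texture burst at 8 kHz sampling.
t = np.arange(0, 0.05, 1 / 8000.0)
burst = 0.5 * np.sin(2 * np.pi * 250 * t) * np.exp(-t / 0.02)
drive = render_vibration(burst, hand_speed=0.3)       # faster stroke, stronger buzz
print(drive.max())
```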

Summary of Review

This thesis, entitled "Study on Gesture based Human-Robot Interface with Multimodal Cognitive Perception for Remote Collaboration" (遠隔協働をめざした,多様認知知覚を有するジェスチャーに基づくヒューマンロボットインタフェースに関する研究), studies an immersive system that combines visual and tactile information, aiming at the construction of a portable human-robot interface (HRI) that supports collaborative work among multiple people at remote sites. In particular, it proposes a teleoperation system in which a model of the work environment and gesture-based robot motion are realized in a virtual space, a sense of immersion is presented through multimodal cognitive perception, and the operators share that space; the effectiveness of the system is investigated experimentally. The thesis consists of six chapters.

Chapter 1, the introduction, considers intelligent HRIs for multiple people performing collaborative work at remote sites, points out the importance of a portable, easy-to-operate system with a sense of presence, explains the requirements and challenges of future teleoperation, and summarizes the purpose and basic approach of this research.

Chapter 2 surveys previous research on teleoperation and explains the current problems of virtual reality and remote-operation technologies.

Chapter 3 devises a multimodal cognitive perception method that combines visual and tactile information to give the HRI system a sense of immersion. Focusing on the relationship between sound and vibration, it also proposes a method for recognizing the material of an object.

Chapter 4 proposes the Mobile iSpace concept as an HRI that uses simple devices, and devises a depth-camera-based human-activity recognition method, the construction of an environment model using spatial memory, an environment presentation system, and a glove driven by vibration motors. By coupling Mobile iSpace with a virtual reality model that supports collaborative work, a portable HRI system with multimodal cognitive perception is constructed.

Chapter 5 conducts remote-operation experiments with the integrated HRI system and examines the effectiveness of the proposed methods.

Chapter 6 concludes the thesis with an overall summary and the contributions of this research.

In summary, aiming at a human-friendly robot interface, this thesis focuses on the sense of presence and immersion based on visual and tactile information, newly proposes a virtual reality model for collaborative work together with the portable Mobile iSpace system, and demonstrates their effectiveness through the construction of an integrated system with multimodal cognitive perception. Its contributions to robotics and electrical engineering are substantial.

Accordingly, this thesis is judged to pass as a dissertation for the degree of Doctor of Engineering.
