Doctoral Dissertation Abstract



No 126222
Author (kanji): ニコラス
Author (Roman):
Author (kana): ニコラス
Title (Japanese): 術中画像誘導手術用のリアルタイムIntegral Videography立体像表示システムの開発
Title (English): Development of Real-time Integral Videography Autostereoscopic Visualization System for Intra-operative Image Guided Surgery
Report number: 甲26222
Date of degree conferral: 2010.03.24
Degree type: Doctorate by completion of doctoral program (課程博士)
Degree: Doctor of Information Science and Technology
Degree certificate number: 博情第289号
Graduate school: Graduate School of Information Science and Technology
Department: Department of Mechano-Informatics
Dissertation committee:
  Chair: Professor 土肥,健純, The University of Tokyo
  Professor 廣瀬,通孝, The University of Tokyo
  Professor 佐久間,一郎, The University of Tokyo
  Associate Professor 正宗,賢, The University of Tokyo
  Associate Professor 小林,英津子, The University of Tokyo
Abstract

Introduction

Medical imaging technologies have reached the level at which acquiring 3D images of the human body is relatively simple, so attention now turns to visualization methods that take advantage of those 3D images. 3D images are commonly rendered on a 2D screen, which does not provide sufficient depth perception, especially for image-guided surgery. Stereoscopic imaging is a better option for providing visualization with superior depth perception during surgery. We have developed a stereoscopic visualization method based on the principle of Integral Videography (IV). IV is an autostereoscopic visualization method that combines a high-resolution LCD display with a micro convex lens array to project light rays into 3D space. IV is spatially accurate, requires no glasses or other wearable devices, and provides automatic motion parallax. IV has been applied to surgeries with pre-operative image navigation, such as brain and knee surgeries, but it has not yet been applied to surgeries with intra-operative image navigation, because the current system lacks speed, image quality, and a suitable user interface.

In this study, we address the current limitations of IV technology and develop a stereoscopic image visualization system for surgery navigation systems with intra-operative image acquisition. Specifically, we develop a fast IV rendering method for real-time visualization of intra-operative images, a high-image-quality IV rendering algorithm, and a software design that enables easy implementation of user interfaces on which various surgery navigation applications can be built. We target surgeries with large organ movement and deformation, such as heart and fetal surgery, since navigation for both should benefit the most from the use of intra-operative images.

Methods

Real-time IV Rendering

To realize real-time stereoscopic visualization of 3D medical images, we developed a fast IV rendering method using GPU acceleration without pre-calculation. We developed a direct IV rendering method for visualizing 3D voxel data, based on a ray-tracing algorithm. The ray-tracing process consists of the same calculation applied to many data elements and can therefore be processed in parallel by multiple processors. IV rendering was implemented with GPU computing on the CUDA platform, and we developed a method that uses GPU texture memory for the parallel processing of IV rendering.
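As an illustration of this per-pixel parallelism, the sketch below shows a minimal CUDA kernel that casts one ray per IV display pixel through the center of its elemental lens and samples the volume from 3D texture memory. All names (ivRenderKernel, IVParams, lensPitch, and so on) and the simple maximum-intensity projection rule are assumptions for illustration only, not the thesis implementation.

#include <cuda_runtime.h>

struct IVParams {
    int    imgW, imgH;      // IV image size in pixels (e.g. 1024 x 768)
    float  pixelPitch;      // LCD pixel pitch [mm]
    float  lensPitch;       // pitch of the micro convex lens array [mm]
    float  lensGap;         // distance between LCD plane and lens plane [mm]
    float  stepSize;        // ray marching step [mm]
    int    numSteps;        // number of ray marching steps
    float3 volOrigin;       // volume origin in display coordinates [mm]
    float3 volSize;         // physical volume extent [mm]
};

// One thread per display pixel: cast a ray from the pixel through the center
// of its elemental lens and sample the volume stored in a 3D texture
// (normalized texture coordinates, hardware trilinear interpolation assumed).
__global__ void ivRenderKernel(uchar4* out, cudaTextureObject_t volTex, IVParams p)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= p.imgW || py >= p.imgH) return;

    // Physical pixel position on the LCD plane.
    float2 pix  = make_float2(px * p.pixelPitch, py * p.pixelPitch);
    // Center of the elemental lens covering this pixel.
    float2 lens = make_float2((floorf(pix.x / p.lensPitch) + 0.5f) * p.lensPitch,
                              (floorf(pix.y / p.lensPitch) + 0.5f) * p.lensPitch);

    // Ray direction: from the pixel through the lens center into 3D space.
    float3 dir = make_float3(lens.x - pix.x, lens.y - pix.y, p.lensGap);
    float  len = sqrtf(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    dir = make_float3(dir.x / len, dir.y / len, dir.z / len);

    // March the ray; the simplest projection rule (maximum intensity) is used
    // here, whereas the thesis uses iso-surface and composite rules.
    float value = 0.0f;
    for (int i = 0; i < p.numSteps; ++i) {
        float  t   = i * p.stepSize;
        float3 pos = make_float3(pix.x + t * dir.x, pix.y + t * dir.y, t * dir.z);
        float3 tc  = make_float3((pos.x - p.volOrigin.x) / p.volSize.x,
                                 (pos.y - p.volOrigin.y) / p.volSize.y,
                                 (pos.z - p.volOrigin.z) / p.volSize.z);
        if (tc.x < 0.f || tc.x > 1.f || tc.y < 0.f || tc.y > 1.f ||
            tc.z < 0.f || tc.z > 1.f) continue;
        value = fmaxf(value, tex3D<float>(volTex, tc.x, tc.y, tc.z));
    }

    unsigned char g = (unsigned char)(fminf(value, 1.0f) * 255.0f);
    out[py * p.imgW + px] = make_uchar4(g, g, g, 255);
}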

IV Rendering Algorithm

To improve the image quality of IV rendering, we developed IV rendering algorithms using a composite rendering method with color and alpha transfer functions, and Phong shading for depth perception enhancement. Composite rendering uses the color and alpha transfer functions to assign an opacity to every voxel in the 3D volume data and accumulates the contribution of each voxel according to its opacity during ray tracing. Phong shading was applied not only to surfaces but also to uniform regions inside the volume; the gradient vector of the image intensity, rather than the surface normal vector, was used to define the direction of reflection in Phong shading.
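The following device-function sketch illustrates, under the same illustrative assumptions as above, how a transfer-function color, gradient-based Phong shading, and front-to-back alpha compositing can be combined at each ray step. The kA:kD:kS weights correspond to the A:D:S ratios varied in Fig. 2 and Fig. 4; the function names and the fixed shininess value are placeholders, not the thesis source code.

// Phong shading in which the normalized image-intensity gradient plays the
// role of the surface normal (illustrative sketch).
__device__ float3 shadePhong(float3 color, float3 grad, float3 viewDir, float3 lightDir,
                             float kA, float kD, float kS, float shininess)
{
    float gl = sqrtf(grad.x * grad.x + grad.y * grad.y + grad.z * grad.z);
    if (gl < 1e-6f)  // flat region: ambient term only
        return make_float3(color.x * kA, color.y * kA, color.z * kA);
    float3 n = make_float3(grad.x / gl, grad.y / gl, grad.z / gl);

    float ndotl = n.x * lightDir.x + n.y * lightDir.y + n.z * lightDir.z;
    float diff  = fmaxf(ndotl, 0.0f);

    // Reflection of the light direction about the gradient "normal".
    float3 r = make_float3(2.0f * ndotl * n.x - lightDir.x,
                           2.0f * ndotl * n.y - lightDir.y,
                           2.0f * ndotl * n.z - lightDir.z);
    float rdotv = r.x * viewDir.x + r.y * viewDir.y + r.z * viewDir.z;
    float spec  = powf(fmaxf(rdotv, 0.0f), shininess);

    float s = kA + kD * diff + kS * spec;   // A:D:S weighting as in Fig. 2 / Fig. 4
    return make_float3(color.x * s, color.y * s, color.z * s);
}

// One compositing step along the ray: transfer-function color 'tf' (RGB in
// x,y,z; opacity in w) is shaded and blended front-to-back into 'accum'.
__device__ void compositeStep(float4* accum, float4 tf, float3 grad,
                              float3 viewDir, float3 lightDir,
                              float kA, float kD, float kS)
{
    float3 shaded = shadePhong(make_float3(tf.x, tf.y, tf.z), grad, viewDir, lightDir,
                               kA, kD, kS, 32.0f);
    float a = tf.w * (1.0f - accum->w);     // remaining transparency
    accum->x += shaded.x * a;
    accum->y += shaded.y * a;
    accum->z += shaded.z * a;
    accum->w += a;
}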

Software Design

We employed a modular design for the IV image visualization system in 3D Slicer to allow easy implementation of various applications. The combination of the IV rendering module and the OpenIGTLink module enabled a variety of applications: OpenIGTLink acted as a socket server that received 3D image data and 4×4 matrix data from other software and passed them to the IV rendering module for visualization.
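A minimal host-side sketch of what such a connection can look like, following the standard OpenIGTLink receive pattern, is shown below. It is not the actual 3D Slicer OpenIGTLink module; the handler comments and the loop structure are placeholders standing in for the hand-off to the IV rendering module.

// Host-side sketch of an OpenIGTLink socket server receiving IMAGE (3D image
// data) and TRANSFORM (4x4 matrix) messages.
#include "igtlServerSocket.h"
#include "igtlMessageHeader.h"
#include "igtlImageMessage.h"
#include "igtlTransformMessage.h"
#include <cstring>

void receiveLoop(int port)
{
    igtl::ServerSocket::Pointer server = igtl::ServerSocket::New();
    if (server->CreateServer(port) < 0) return;                 // could not open port

    igtl::Socket::Pointer socket = server->WaitForConnection(10000);  // 10 s timeout
    if (socket.IsNull()) return;

    igtl::MessageHeader::Pointer header = igtl::MessageHeader::New();
    for (;;) {
        header->InitPack();
        int r = socket->Receive(header->GetPackPointer(), header->GetPackSize());
        if (r <= 0) break;                                      // connection closed
        header->Unpack();

        if (std::strcmp(header->GetDeviceType(), "IMAGE") == 0) {
            igtl::ImageMessage::Pointer img = igtl::ImageMessage::New();
            img->SetMessageHeader(header);
            img->AllocatePack();
            socket->Receive(img->GetPackBodyPointer(), img->GetPackBodySize());
            if (img->Unpack(1)) {                               // 1 = check CRC
                // Pass voxel data to the IV rendering module (placeholder).
            }
        } else if (std::strcmp(header->GetDeviceType(), "TRANSFORM") == 0) {
            igtl::TransformMessage::Pointer tfm = igtl::TransformMessage::New();
            tfm->SetMessageHeader(header);
            tfm->AllocatePack();
            socket->Receive(tfm->GetPackBodyPointer(), tfm->GetPackBodySize());
            if (tfm->Unpack(1)) {
                igtl::Matrix4x4 matrix;
                tfm->GetMatrix(matrix);
                // Pass the 4x4 matrix to the IV rendering module (placeholder).
            }
        } else {
            socket->Skip(header->GetBodySizeToRead(), 0);       // ignore other types
        }
    }
    socket->CloseSocket();
}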

3D Ultrasound Real-time Visualization

Targeting surgery navigation systems for heart and fetal surgeries, we developed a real-time IV image visualization system using 3D ultrasound (Fig. 1). 3D ultrasound images acquired by the ultrasound device were visualized on the fly as IV stereoscopic images. We implemented the system on two ultrasound imaging systems with different connectivity.

In the first system, which visualizes 3D ultrasound data from a 4-parallel probe (built on an ALOKA α10), the ultrasound device and the PC workstation were connected through shared memory. Data acquisition was performed by separate software that sent the 3D data to 3D Slicer through OpenIGTLink. The 3D ultrasound data were transformed from cylindrical to rectangular coordinates before being sent to 3D Slicer; this coordinate transformation was also implemented with GPU calculation on the CUDA platform.
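The coordinate transformation can be expressed as a per-voxel resampling kernel. The sketch below assumes a simplified 2D fan geometry swept along the probe axis and unnormalized texture coordinates with linear filtering; the actual α10 probe geometry and parameter names differ, so this is an illustration only.

// Illustrative scan-conversion kernel: resample ultrasound data from a
// cylindrical (range/angle) grid into a rectangular voxel grid on the GPU.
__global__ void cylToRectKernel(float* out, cudaTextureObject_t cylTex,
                                int nx, int ny, int nz,
                                float dx, float dy, float dz,
                                float r0, float dr, int nr,
                                float theta0, float dTheta, int nTheta)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iy = blockIdx.y * blockDim.y + threadIdx.y;
    int iz = blockIdx.z * blockDim.z + threadIdx.z;
    if (ix >= nx || iy >= ny || iz >= nz) return;

    // Cartesian position of the output voxel (probe apex at the origin).
    float x = (ix - 0.5f * nx) * dx;   // lateral
    float y = iy * dy;                 // axial (depth)
    float z = iz * dz;                 // elevation (assumed already rectangular)

    // Corresponding cylindrical coordinates in the acquisition grid.
    float r     = sqrtf(x * x + y * y);
    float theta = atan2f(x, y);

    // Fractional sample indices into the (range, angle, slice) texture.
    float fr = (r - r0) / dr;
    float ft = (theta - theta0) / dTheta;

    float v = 0.0f;
    if (fr >= 0.f && fr <= nr - 1 && ft >= 0.f && ft <= nTheta - 1)
        v = tex3D<float>(cylTex, fr + 0.5f, ft + 0.5f, iz + 0.5f);  // hardware interpolation

    out[(iz * ny + iy) * nx + ix] = v;
}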

In the second system, which visualizes 3D ultrasound data from an 8-parallel probe, the ultrasound device and the PC workstation were connected by a USB data transfer cable, and a shared folder on the PC workstation was used to store the 3D ultrasound data. The software on the PC workstation that checks for the newest data was built as a module of 3D Slicer, so no communication through OpenIGTLink was required.
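A minimal sketch of how such a module could poll the shared folder for the newest ultrasound volume is shown below, assuming C++17 std::filesystem; the folder path, file handling, and the load step are placeholders rather than the thesis implementation.

// Host-side sketch: report the most recently written file in a shared folder
// (assumed to hold one 3D ultrasound volume per file).
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

// Returns the path of the newest regular file in 'dir', or an empty string.
std::string findNewestVolume(const std::string& dir)
{
    std::string newest;
    fs::file_time_type newestTime = fs::file_time_type::min();
    for (const auto& entry : fs::directory_iterator(dir)) {
        if (!entry.is_regular_file()) continue;
        auto t = entry.last_write_time();
        if (t > newestTime) {
            newestTime = t;
            newest = entry.path().string();
        }
    }
    return newest;
}

// In the module's periodic update callback (placeholder logic):
//   std::string latest = findNewestVolume(sharedFolderPath);
//   if (latest != previouslyLoaded) { /* load volume, trigger IV rendering */ }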

Results

Real-time IV Rendering Evaluation

To evaluate the speed of the IV rendering methods we developed, we compared the rendering speed of GPUs (NVIDIA GeForce 8800 GTX and NVIDIA Quadro FX5800) with that of a CPU (Intel Core i7, 2.66 GHz). Evaluations were performed with the composite rendering algorithm for data sizes of 64³, 128³, 256³, and 512³ voxels. We used a 6.4-inch XGA display in this evaluation, with an IV image size of 1024 × 768 pixels. For these data sizes, GPU calculation was 9, 17, 37, and 61 times faster with the first-generation GPU, and 16, 33, 82, and 189 times faster with the second-generation GPU, respectively.

IV Rendering Algorithms Comparison

We compared the IV image quality of the new IV rendering algorithm (composite rendering with Phong shading) under different parameters. A 3D pattern consisting of vertical, horizontal, and oblique tubes was visualized as IV images at various distances from the display and with various Phong shading parameters (Fig. 2). Ten users participated in a questionnaire-based evaluation; as a result, Phong shading improved depth perception for objects inside the viewing range of the IV display (Fig. 3).

We then visualized various heart datasets: CT (single slice, ECG gating, 512² pixels × 192 slices), MRI (0.2 T, heart pulse gating, no respiratory gating, 256² pixels × 19 slices), and ultrasound (mechanical 3D convex probe, 304 × 248 pixels × 44 slices), with various shading parameters (Fig. 4). For the low-noise CT dataset, shading was smooth, and the stronger the specular component, the better the depth perception. For the noisier MRI and ultrasound datasets, placing too much weight on the specular component could decrease overall image brightness and contrast.

Real-time 3D Ultrasound Visualization System Evaluation

After phantom experiments, in order to demonstrate the usefulness of the IV surgery navigation system, we performed an in-vivo porcine experiment (male, 47.5 kg) simulating mitral valve surgery on a beating heart, navigated by IV images from 3D ultrasound (Fig. 5(a)).

The surgery was conducted by an expert cardiologist. We tested the IV system for visualizing mitral valve movement in real time. 3D ultrasound data acquisition (4-parallel probe) was performed for several combinations of resolution and acquisition rate, up to 8 volumes/s. In all cases, the 3D ultrasound data were transformed into 256³-voxel data, which were rendered at around 26 fps. There was no frame skipping, and the time lag was less than one frame. We then guided a surgical tool towards the mitral valve under 3D ultrasound guidance displayed as IV stereoscopic images (Fig. 5(b)). According to the cardiologist conducting the surgery, the time lag was within the tolerable range.

We then used the system to visualize clinical datasets of fetuses at 15-35 weeks acquired with an 8-parallel 3D ultrasound probe. The color and alpha transfer functions were adjusted manually, as were the rectangular cuboid cutting planes.

Comparison between IV and 3D visualization

To evaluate the benefit of IV visualization for surgery navigation, we performed an experiment simulating a targeting maneuver navigated by IV images and by conventional 3D visualization. Six users were instructed to navigate one end of a rod to the center of a target (a donut-shaped rubber ring with outer and inner diameters of 30 mm and 10 mm, respectively) under guidance by real-time 3D ultrasound and without direct vision. The time required was compared between the two visualization methods: the time until the rod first touched the target, the completion time after the first touch, and the total time. As a result, with IV visualization the time to first touch was 82% slower, but the time for detailed targeting was 50% faster and the total time was 8% faster.

Discussion

Real-time IV Rendering Implementation using GPU calculation

The implementation of the IV rendering algorithms on the GPU sped up the rendering process significantly compared with the existing CPU approach. IV rendering speed is linearly proportional to the data size and the number of pixels rendered. In addition, the comparison between two generations of CUDA-enabled GPUs showed that GPU computing power is increasing faster than the CPU trend described by Moore's law.

High Image-quality Rendering Algorithms

Composite rendering (with color and alpha transfer functions and Phong shading) improved image quality in that it added better color representation, transparency, and depth perception. The proposed algorithm is computationally more complex than the original iso-surface rendering algorithm and is therefore about 20% slower. However, both algorithms have a calculation complexity of O(mnx) when rendering an x³-voxel dataset onto an m×n-pixel image.

User Interface

The user interface of the IV image visualization system, built as a module of 3D Slicer, allowed easy implementation of various applications. In this study, we demonstrated this by developing the real-time 3D ultrasound IV visualization system on two different ultrasound imaging systems. By design, our system can be used for a wider range of applications, but in a real-world setting it is better to further simplify (cut off unused functions) and automate (create a workflow triggered by several sequential clicks) the IV rendering module for each application on a case-by-case basis.

Real-time 3D Ultrasound IV Visualization System

In this study, we built two real-time 3D ultrasound visualization applications, for systems using 4-parallel and 8-parallel 3D ultrasound probes. The first system (4-parallel probe), which used shared memory with synchronized connectivity, provided a much higher data transfer rate than the second system (8-parallel probe), which used a shared folder with unsynchronized connectivity. In the second system, the theoretical maximum data transfer rate is the speed of the USB bus (around 480 Mb/s = 60 MB/s). In theory it should be possible to transfer 3D ultrasound data of about 5-6 MB at around 10 volumes/s, but the actual overall transfer rate was only around 1 volume/s because of access conflicts when read and write accesses were attempted at the same time.

Conclusion

We have developed a real-time autostereoscopic visualization system for surgery navigation that utilizes intra-operative images. The stereoscopic visualization method is able to visualize 3D medical images in real time without any pre-processing. Evaluation of the IV rendering speed showed that GPU calculation outperformed CPU calculation by up to 189 times, and comparison across GPU generations showed that GPU computing power is improving faster than Moore's law.

We have also developed a composite IV rendering algorithm using color and alpha transfer functions and a Phong shading implementation. Evaluation of the shading algorithm showed that our proposed method improves realism and depth perception for objects inside the viewing range of the IV display.

We developed the IV visualization system with an optimized software design that allows easy implementation of various applications, and in this study we applied the system to real-time stereoscopic visualization of 3D ultrasound from two different devices. A feasibility experiment of IV for surgery navigation showed that the use of IV shortened the overall targeting time by 8% and the detailed targeting time by 50%.

Fig. 1. Configuration of the 3D ultrasound IV visualization system.

Fig. 2. IV images of the pattern at various distances and with various shading parameters. From left to right: -80 mm, -40 mm, 0 mm, 40 mm, 80 mm. From top to bottom: A:D:S = 1:0:0, 0.16:0.84:0, 0.08:0.75:0.17, 0.08:0.5:0.42.

Fig. 3. Comparison of depth perception between various shading parameters. A scoring system from 1 (worst) to 5 (best) was employed.

Fig. 4. IV stereoscopic visualization of various datasets using different shading parameters. From top to bottom: CT data and MRI data of a human heart, and ultrasound data of a porcine heart (mitral valve only). From left to right: various Phong shading parameters, A:D:S = 1:0:0 (original algorithm), 0.16:0.84:0, 0.08:0.75:0.17, 0.08:0.5:0.42.

Fig. 5. In-vivo porcine experiment: (a) real-time IV stereoscopic image from intra-operative 3D ultrasound used as image guidance; (b) simulating mitral valve surgery on a beating heart, a surgical tool was driven towards the mitral valve under IV stereoscopic image guidance.

Summary of the Dissertation Review

This dissertation addresses the acceleration of 3D information display, a key technology for applying intra-operative 3D ultrasound to image-guided therapy, using Integral Videography (IV). Specifically, the author developed a fast IV rendering method for visualizing intra-operative images, a high-image-quality IV rendering algorithm, and a user interface environment on which various surgical navigation applications can be built. As a result, the author realized a system, the only one of its kind in the world, that can display data from a 3D ultrasound imaging device as IV images in real time, which is a highly significant achievement and makes the approach applicable to surgeries with large intra-operative organ movement and deformation, such as cardiac and fetal surgery.

The dissertation consists of six chapters. Chapter 1, as the introduction, reviews the history and principles of 3D real-image display technology, covering Integral Photography and IV, its full-color moving-image extension, and argues that this form of image display is highly useful in the field of therapy. Chapter 2 sets out the three objectives of the research: (1) development of a real-time IV rendering algorithm, (2) development of a high-image-quality rendering algorithm, and (3) development of the user interface required for clinical application. Chapter 3 describes the methods, including the system configuration and software development; in particular, by using a GPU, the author succeeded in developing an algorithm that converts 3D image data from ultrasound, X-ray CT, and MRI scanners into IV data in real time, and demonstrated real-time conversion and display of data from a 3D ultrasound imaging device with the developed system. Chapter 4 evaluates the developed system using pre-acquired human heart data and a fetus model. Chapter 5 discusses the experimental results, and Chapter 6 presents the conclusions.

First, aiming at real-time stereoscopic visualization of 3D medical images, the author developed a fast IV volume rendering method that renders 3D voxel data directly using GPU computation. To improve the sense of depth, which is essential in a true 3D display, a composite rendering method using color/opacity transfer functions and Phong shading was also developed. The IV rendering function was implemented as an extension module of 3D Slicer, and combining the IV rendering module with the OpenIGTLink module made various applications possible. On this basis, targeting clinical application to cardiac and fetal surgery, the author succeeded in developing a system that visualizes data acquired in real time by a commercial 3D ultrasound diagnostic imaging device online as real-time IV images.

Comparison of the processing speed of the GPU-based IV rendering method developed in this research with conventional CPU-based IV rendering confirmed that the developed system becomes relatively faster as the data size grows. Regarding depth perception, comparison of the conventional method with the IV rendering algorithm developed in this research (composite rendering with Phong shading) confirmed that Phong shading improves depth perception within the maximum projection distance of the IV display.

To examine the influence of noise in the source data on IV rendering, 3D displays of the heart were produced from a high-quality CT image, then an MR image, and finally an ultrasound image. The results confirmed that for relatively low-noise CT images shading is smooth and depth perception improves as the specular component is increased, whereas for relatively noisy MR and ultrasound images an excessive specular component reduces the brightness and contrast of the IV image. For the evaluation of real-time performance, a phantom experiment and an in-vivo porcine experiment were assessed by a cardiac surgeon, who judged them fully satisfactory and confirmed that the system poses no problem for performing surgery. Finally, as a navigation performance evaluation, the time required to insert forceps into the hole of a donut-shaped target (inner diameter 10 mm, outer diameter 30 mm) was measured and compared between the proposed IV display and a pseudo-3D display. Reaching the target was faster (by 82%) with the pseudo-3D display, which presents a uniform image over a wide area, but insertion was faster (by 50%) with the IV display, and the total time was also shorter (by 8%) with the IV display.

In summary, a fast IV rendering method was developed for real-time IV stereoscopic display of intra-operative images, enabling real-time visualization of 3D medical images without pre-processing. Speed evaluation of IV rendering showed that the developed method is up to 189 times faster than the conventional method. The development of a high-image-quality IV rendering algorithm made color and transparency display possible, and within the maximum projection distance of the IV display the proposed algorithm improves depth perception over the conventional algorithm. The modular design of the developed system makes it easy to apply to a variety of applications, which this research demonstrates with two different applications. A real-time IV display system using 3D ultrasound was constructed, achieving an 8% reduction in overall targeting time and a 50% reduction in detailed targeting time compared with conventional 3D display.

The committee therefore finds this dissertation acceptable as a dissertation for the degree of Doctor of Information Science and Technology.
