Dissertation Abstract



No. 128677
Author (kanji): ルル ティモシー ギョム
Author (roman):
Author (kana): ルル ティモシー ギョム
Title (Japanese): シーケンシャル記憶の神経動力学
Title (English): Neurodynamics of Sequential Memory
Report number: 128677
Report number: 甲28677
Date of degree conferral: 2012.09.27
Degree category: Doctorate by coursework
Degree: Doctor of Engineering
Diploma number: 博工第7851号
Graduate school: Graduate School of Engineering
Department: Department of Electrical Engineering and Information Systems
Thesis committee: Chair: Kazuyuki Aihara, Professor, The University of Tokyo
 Yoichi Hori, Professor, The University of Tokyo
 Akira Hirose, Professor, The University of Tokyo
 Kanta Matsuura, Associate Professor, The University of Tokyo
 Takashi Kohno, Associate Professor, The University of Tokyo
 Hideyuki Suzuki, Associate Professor, The University of Tokyo
Abstract

Sequential memory has been the subject of intense research in recent years from the viewpoint of computational neuroscience. Experimental recordings of neural activity have been conducted at many different scales (EEG, local field potentials, multi-unit recordings, etc.), notably with novel techniques that allow for the simultaneous analysis of large neuron populations. The identification of sequential patterns of activity remains, however, an intricate problem, because it is difficult to distinguish between the random and deterministic contributions to neural fluctuations. Indeed, the code used by the brain to store, generate, and manipulate sequential memories is not fully understood, and, therefore, the means by which precise information could be extracted from neural signals are yet to be discovered.

In order to solve this problem, the most critical aspect to investigate is the origin and mechanisms of reliable generation, transmission, and storage of temporal patterns of activity. The simulations and models of neural networks that we have developed have enabled us to identify some previously unknown conditions and dynamical regimes under which the generation of temporal patterns can be achieved with robustness and flexibility as a result of well-known biological mechanisms operating in the brain. Our analysis shows that the difficulty in observing reproducible temporal patterns of activity may result from theoretical obstacles in addition to experimental ones. The models we have developed allow for the identification of novel aspects of neural activity that are likely to encode sequential information, which may help improve the experimental protocols used for the analysis of neural activity in vivo, on which future discoveries rely.

Thesis

The fundamental hypothesis that motivated our research is that the encoding of sequential memory can be achieved by the nonlinear dynamics of population-averaged quantities. Previous studies have shown that low-dimensional chaos has computational properties that are particularly adapted to the processing of spatio-temporal information in neural systems. However, it has been difficult to clearly identify chaotic patterns under in vivo conditions, notably because neural networks are high-dimensional systems subject to fluctuations resulting from a large number of degrees of freedom. We demonstrate that at the level of neuron populations described by ensemble-averaged quantities, for which the number of degrees of freedom is smaller, nonlinear dynamics can contribute to the encoding of temporal and sequential information.

Chaotic attractors and learning

First, we studied how chaotic attractors encoding temporal information can be created by realistic learning mechanisms. We developed a mean field approximation of an analog neural network model to analyze the bifurcations, induced by the slow effect of learning on synaptic weights, that lead to the creation of novel chaotic attractors of activity. Attractors encoding persistent activity can notably appear via generalized period-doubling bifurcations, tangent bifurcations of the second iterate, or boundary crises.
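
The bifurcation scenario above can be illustrated on a toy model. The sketch below is not the thesis's mean field model: it uses the logistic map as a minimal stand-in for a one-dimensional return map whose shape is slowly deformed by learning, and detects the attractor period along the period-doubling cascade.

```python
# Bifurcation scan of a one-dimensional return map.  The logistic map
# stands in for a mean-field map whose shape is slowly deformed by
# learning; the parameter r plays the role of the drifting weights.

def attractor_period(r, x0=0.3, transient=2000, max_period=16, tol=1e-6):
    """Iterate past the transient, then return the smallest period of
    the attractor, or None if no period <= max_period is found."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    ref = x
    for p in range(1, max_period + 1):
        x = r * x * (1.0 - x)
        if abs(x - ref) < tol:
            return p
    return None  # aperiodic (possibly chaotic) at this resolution

# Period-doubling cascade: fixed point, then period 2, then period 4.
print(attractor_period(2.8))  # 1
print(attractor_period(3.2))  # 2
print(attractor_period(3.5))  # 4
```

Scanning the period while the parameter drifts slowly is the discrete analogue of tracking attractor creation under slow learning.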

We considered the combined effects of LTP/LTD and synaptic scaling in the stabilization of these chaotic attractors. Depending on the rate of change of the external inputs, different types of attractors can be formed: line attractors for rapidly changing external inputs and discrete attractors for constant external inputs. Moreover, we found that fractal basin boundaries may form in neural systems when non-trivial attractors coexist.

Deterministic irregularity

We propose that the occurrence of these fractal basin boundaries has important consequences, notably concerning changes in the irregularity and trial-to-trial variability of neural recordings. We evaluated the difference in complexity between coexisting attractors by calculating their Lyapunov exponents and found that changes in deterministic dynamics may explain the changes in irregularity observed in vivo during tasks involving working memory.
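
A standard way to quantify the difference in complexity between attractors of a one-dimensional map is to average log|f'(x)| along the orbit. The sketch below (again using the logistic map as a stand-in, not the thesis model) yields a negative exponent on a regular (periodic) attractor and a positive one in a chaotic regime.

```python
import math

def lyapunov_exponent(r, x0=0.3, transient=1000, n=100000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1 - 2x)|."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

print(lyapunov_exponent(3.2) < 0)  # True: regular (periodic) attractor
print(lyapunov_exponent(4.0) > 0)  # True: chaotic attractor
```

Comparing the sign and magnitude of this exponent across coexisting attractors is the kind of complexity measure referred to above.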

When the encoding attractors are chaotic and the basin boundaries are fractal, infinitesimal differences in the initial conditions induce sensitivity to initial conditions and final-state sensitivity, respectively, in the response of the brain. We have shown that final-state sensitivity can contribute to the non-reproducibility that characterizes physiological recordings.
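
The procedure for probing final-state sensitivity can be sketched as follows. The toy map below is bistable with a deliberately smooth (non-fractal) basin boundary, chosen only to keep the example self-contained: one measures the fraction of initial conditions whose final state flips under a perturbation of size eps. For a smooth boundary this fraction scales like eps (uncertainty exponent 1); near a fractal boundary it scales like eps**alpha with alpha < 1, so variability persists even for very small perturbations.

```python
def final_state(x0, steps=200):
    """Overdamped double-well toy map with attractors at -1 and +1.
    The basin boundary here is the single point x = 0 (smooth, not
    fractal); it only serves to illustrate the measurement procedure."""
    x = x0
    for _ in range(steps):
        x = x + 0.1 * (x - x ** 3)
    return 1 if x > 0 else -1

def uncertain_fraction(eps, n=2000):
    """Fraction of initial conditions whose final state flips under a
    perturbation of size eps.  For a smooth boundary this fraction
    scales like eps (uncertainty exponent 1); a fractal boundary gives
    eps**alpha with alpha < 1, i.e. final-state sensitivity."""
    flips = 0
    for i in range(n):
        x0 = -2.0 + 4.0 * i / (n - 1)
        if final_state(x0 - eps) != final_state(x0 + eps):
            flips += 1
    return flips / n

print(final_state(0.3))   # 1
print(final_state(-0.3))  # -1
# Smaller perturbations flip fewer trials when the boundary is smooth.
print(uncertain_fraction(0.01) < uncertain_fraction(0.1))  # True
```

Replacing the toy map with a system possessing fractal basin tongues would make the uncertain fraction decay much more slowly with eps, which is the signature discussed in the text.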

We related this concept of non-reproducibility to the matching law and the hypothesis of Bayesian computation by considering the modulation of learning due to LTP and reward-dependent learning via dopamine. We show that, by modulating the area of the fractal basin tongues, dopaminergic modulation can favor or suppress final-state sensitivity and thus the amount of trial-to-trial variability. We argue that final-state sensitivity is a candidate mechanism for the neural implementation of Bayesian computation.

Population spike coding

Furthermore, we developed a realistic model to demonstrate that population-averaged activity can indeed exhibit chaotic dynamics and encode temporal information in vivo. We developed a large-scale simulation of a neural population and carefully implemented many realistic biological features and mechanisms that constrain the dynamics of biological networks, with a precise objective in mind: not to artificially reduce the random fluctuations in our model compared with in vivo conditions. Our purpose was indeed to show that, in spite of the random fluctuations inherent to biological neural networks, a deterministic contribution can nonetheless encode temporal information.

The model is based on a network of leaky integrate-and-fire (LIF) neurons exposed to a noisy background, synaptic currents at different time scales (slow NMDA, and relatively fast AMPA and GABA currents), dynamic synapses with heterogeneous properties (depression- and facilitation-dominant), delays in synaptic transmission, and slow oscillations from the cortical background modeling the modulating effect of subthreshold oscillations. All these phenomena were simulated with realistic parameters taken from the neurophysiology literature.
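
The elementary building block of such a simulation, a single LIF neuron driven by an external current plus a noisy background and integrated with the Euler-Maruyama scheme, can be sketched as follows; the parameter values are illustrative textbook values, not those used in the thesis.

```python
import random

def simulate_lif(i_ext, t_max=1.0, dt=1e-4, noise_std=0.0, seed=1):
    """Single leaky integrate-and-fire neuron integrated with the
    Euler-Maruyama scheme.  Illustrative parameters (not the thesis's):
    tau = 20 ms, V_rest = -70 mV, threshold = -50 mV, reset = -70 mV,
    membrane resistance R = 10 MOhm, i_ext in nA.  Returns the number
    of spikes emitted during t_max seconds."""
    rng = random.Random(seed)
    tau, v_rest, v_th, v_reset, r_m = 0.020, -70.0, -50.0, -70.0, 10.0
    v, spikes = v_rest, 0
    for _ in range(int(t_max / dt)):
        v += (-(v - v_rest) + r_m * i_ext) * dt / tau   # leak + drive
        v += noise_std * (dt / tau) ** 0.5 * rng.gauss(0.0, 1.0)  # noise
        if v >= v_th:  # threshold crossing: emit a spike and reset
            spikes += 1
            v = v_reset
    return spikes

print(simulate_lif(0.0))      # 0: no drive, no noise -> silent
print(simulate_lif(3.0) > 0)  # True: suprathreshold drive fires
```

With noise_std > 0 the same neuron fires irregularly even for subthreshold i_ext, which is the fluctuation-driven regime relevant to the noisy background described above; the full model adds to this skeleton the synaptic currents, dynamic synapses, delays, and slow modulating oscillations.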

We identified that the slow coordinated patterns of up and down states observed in the cortex can encode temporal information in their inter-population-spike intervals. We suggest that these patterns could result from the interactions between the synaptic dynamics of local neuron populations and slow modulating oscillations. We show that coupled excitatory and inhibitory networks can exhibit relaxation oscillations driven by slow oscillations, which can induce synchronization and chaotic dynamics of the population-averaged activity. The implications of our model are consistent with the observation that slow oscillations are related to the up and down state patterns recorded in vivo, correlated with the state of arousal during behavioral tasks, and involved in memory replay.

The simulations were supported by a detailed theoretical analysis based on recent progress in the theory of mean field approximations applied to leaky integrate-and-fire models. Because the network is realistic, its mean field approximation involves a large number of variables. We have proposed a method to simplify the analysis of the system by a fast-slow reduction: our approach consists of aggregating the variables representing rapid membrane dynamics with the fastest components of the dynamic synapses, while only the slowest contributions to short-term plasticity are studied as a separate slow system.

The reduction of the fast-slow system relies on the computation of the critical manifold, i.e., the steady-state manifold of the fast subsystem. We extended some results in the theory of mean field approximations applied to LIF neurons and performed a bifurcation analysis to find the conditions under which relaxation oscillations occur on the critical manifold. The analysis that we conducted is likely to be useful for the study of other systems and is of theoretical interest in its own right.
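
The fast-slow mechanism can be illustrated on the van der Pol oscillator, a classical stand-in (not the thesis's mean field system) with one fast and one slow variable: the critical manifold is the cubic nullcline y = x**3/3 - x, its folds sit at x = +/-1, and the trajectory drifts along the attracting branches and jumps at the folds, producing relaxation oscillations.

```python
def van_der_pol_orbit(eps=0.01, dt=1e-4, t_max=10.0, x0=0.5, y0=0.0):
    """Fast-slow van der Pol system in Lienard form:
        eps * dx/dt = x - x**3/3 + y   (fast variable)
              dy/dt = -x               (slow variable)
    The critical manifold is y = x**3/3 - x; its folds are where
    d/dx (x**3/3 - x) = x**2 - 1 = 0, i.e. at x = +/-1."""
    x, y = x0, y0
    xs = []
    steps = int(t_max / dt)
    for i in range(steps):
        x += dt * (x - x ** 3 / 3.0 + y) / eps
        y += dt * (-x)
        if i > steps // 2:  # discard the transient, keep the rest
            xs.append(x)
    return xs

xs = van_der_pol_orbit()
# Relaxation oscillation: x drifts along an attracting branch of the
# cubic, then jumps at the folds, sweeping between roughly -2 and 2.
print(max(xs) > 1.5 and min(xs) < -1.5)  # True
```

In the thesis's setting the role of x is played by the aggregated fast membrane and synaptic variables and that of y by the slow component of short-term plasticity, with the up and down states corresponding to the two attracting branches.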

Future directions

Our hypothesis that temporal information can be encoded in the imprecise timing of population spikes generated by the intrinsic properties of local neuron populations contrasts with the idea of synchronous firing chains, whose existence is controversial.

This novel paradigm requires further theoretical investigation to test the temporal precision and capacity limits of this code. Testing this hypothesis could suggest novel in vivo experiments.

The effect of noise on these relaxation oscillations is to hasten or delay the transitions between up and down states when the trajectory is in the proximity of a fold curve. It would be interesting to study this phenomenon, which is characteristic of noisy systems operating near a fold bifurcation, in more detail. The change in spike synchrony at the transitions between up and down states observed in in vivo recordings may be an important argument supporting the hypothesis that these state transitions result from a deterministic contribution in addition to a stochastic one.

Examination Summary

In recent years, sequential memory has become a very important research topic from the viewpoints of both computational neuroscience and neurophysiology. Advances in measurement techniques have made abundant neural activity data available at various temporal and spatial scales, but much remains unclear about the mechanisms of recognition and generation of sequential information. This stems from the fact that the relationship between the dynamical properties of neural networks and information encoding in the brain has not been elucidated. This thesis addresses the mechanisms of generation, transmission, and retention of temporal patterns of neural activity through computer simulations and theoretical analyses of neural network models. In particular, it clarifies the dynamical conditions required for the generation of temporal patterns with the robustness and flexibility well known in neurophysiology. It also shows that the non-reproducibility of neural activity patterns recorded in neurophysiological experiments may originate not only from experimental techniques but also from the dynamical structure of neural networks suggested by theoretical studies. The thesis provides new insights into the dynamical properties of neural networks and the encoding of sequential memory, and is expected to have a major impact on the analysis and design of future neurophysiological experiments.

This thesis, entitled "Neurodynamics of Sequential Memory", consists of five chapters.

Chapter 1, "Background", reviews prior work on neural network dynamics and sequential memory and clarifies the position of this research.

Chapter 2, "Chaotic attractors and learning", studies how chaotic attractors encoding sequential information can be acquired through realistic learning mechanisms. A mean field model corresponding to an analog neural network model is derived, and new mechanisms for the creation of chaotic attractors are investigated using bifurcation analysis, showing that attractors corresponding to persistent activity states are created via period-doubling bifurcations, tangent bifurcations, and related mechanisms. Furthermore, the effects of long-term synaptic potentiation and depression are incorporated into the model, showing that attractors are formed in the neural network by external inputs and synaptic plasticity. In particular, the demonstration that chaotic attractors can be formed by the kind of synaptic plasticity present in real neural networks is a novel result.

Chapter 3, "Deterministic irregularity", proposes that the trial-to-trial variance and irregularity of neural responses recorded in neurophysiological experiments can originate from a dynamical structure with fractal basin boundaries. By evaluating the complexity of coexisting attractors using Lyapunov exponents, this work shows that deterministic dynamics can explain experimentally observed irregularity. It shows that not only the sensitivity to initial conditions of chaotic attractors but also final-state sensitivity, in which the fractal structure of the basins amplifies tiny changes in the initial state, can produce non-reproducibility in neurophysiological experiments. Furthermore, changes in synaptic transmission efficacy are modeled, showing that neuromodulators such as dopamine and acetylcholine alter final-state sensitivity and consequently the trial-to-trial variance. This work suggests that final-state sensitivity may form the basis of a new computational principle in the brain, and its computational significance is substantial.

Chapter 4, "Population spike coding", constructs a large-scale neural network based on a neuron model with more realistic physiological parameters, and studies chaotic dynamics and the encoding of sequential information. Based on the leaky integrate-and-fire neuron model, a network of many excitatory and inhibitory neurons is constructed, taking into account noise representing the fluctuations and randomness of neural activity, synapses at various time scales, dynamic synapses, and other properties. This model reproduces the state transitions between up and down states observed in the cortex. The slow oscillations accompanying these state transitions generate population spikes that are thought to contribute to the encoding of sequential information. Furthermore, a mean field model corresponding to the proposed model is derived, and a stability analysis focusing on the distinction between slow and fast variables elucidates the mechanism generating the up-down state transitions in detail. The results of this study not only have large implications for neurophysiology; the proposed analysis method may also be applicable to a variety of other dynamical systems.

Chapter 5, "Summary", concludes the thesis and discusses possible future developments.

In summary, this thesis proposes highly novel neural network models for the theoretical study of sequential memory, explains neurophysiological experimental results, and elucidates their dynamical properties in detail. These results constitute a significant contribution to computational neuroscience and electrical engineering.

The thesis is therefore deemed acceptable as a dissertation for the degree of Doctor of Engineering.
