Form versus function [electronic resource] : theory and models for neuronal substrates
Petrovici, Mihai Alexandru.

  • Form versus function [electronic resource] : theory and models for neuronal substrates / by Mihai Alexandru Petrovici.
  • Record type: Bibliographic - electronic resource : Monograph/item
    Dewey class number: 612.8233
    Title/Author: Form versus function : theory and models for neuronal substrates / by Mihai Alexandru Petrovici.
    Author: Petrovici, Mihai Alexandru.
    Publisher: Cham : Springer International Publishing, 2016.
    Description: xxvi, 374 p. : ill. (some col.), digital ; 24 cm.
    Contained By: Springer eBooks
    Subject: Neurobiology.
    Subject: Neurosciences.
    Subject: Simulation and Modeling.
    Subject: Computational neuroscience.
    Subject: Neural circuitry.
    Subject: Physics.
    Subject: Numerical and Computational Physics.
    Subject: Mathematical Models of Cognitive Processes and Neural Networks.
    ISBN: 9783319395524
    ISBN: 9783319395517
    Contents: Prologue -- Introduction: From Biological Experiments to Mathematical Models -- Artificial Brains: Simulation and Emulation of Neural Networks -- Dynamics and Statistics of Poisson-Driven LIF Neurons -- Cortical Models on Neuromorphic Hardware -- Probabilistic Inference in Neural Networks -- Epilogue.
    Summary: This thesis addresses one of the most fundamental challenges for modern science: how can the brain as a network of neurons process information, how can it create and store internal models of our world, and how can it infer conclusions from ambiguous data? The author addresses these questions with the rigorous language of mathematics and theoretical physics, an approach that requires a high degree of abstraction to transfer results of wet-lab biology to formal models. The thesis starts with an in-depth description of the state of the art in theoretical neuroscience, which it subsequently uses as a basis to develop several new and original ideas. Throughout the text, the author connects the form and function of neuronal networks, with the aim of reproducing the functional performance of biological brains by transferring their form to synthetic electronic substrates, an approach referred to as neuromorphic computing. The fact that this transfer can never be perfect, and necessarily leads to performance differences, is substantiated and explored in detail. The author also introduces a novel, probabilistic interpretation of the firing activity of neurons and shows by means of formal derivations that stochastic neurons can sample from internally stored probability distributions. This is corroborated by the author's recent findings, which confirm that biological features like the high-conductance state of networks enable this mechanism. The author goes on to show that neural sampling can be implemented on synthetic neuromorphic circuits, paving the way for future applications in machine learning and cognitive computing, for example as energy-efficient implementations of deep learning networks. The thesis offers an essential resource for newcomers to the field and an inspiration for scientists working in theoretical neuroscience and the future of computing. (A minimal illustrative sketch of the neural-sampling idea described here follows this record.)
    Electronic resource: http://dx.doi.org/10.1007/978-3-319-39552-4
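
Illustrative sketch: the summary states that stochastic neurons can sample from internally stored probability distributions. Below is a minimal, hypothetical sketch of that general idea, assuming abstract binary units that perform Gibbs sampling from a Boltzmann distribution p(z) proportional to exp(0.5 * z^T W z + b^T z). The random weights W, biases b, and logistic update rule are illustrative choices only and do not reproduce the LIF-based derivations developed in the book.

import numpy as np

# Stochastic binary "neurons" whose joint activity, collected over time,
# samples from a stored Boltzmann distribution defined by W and b.
rng = np.random.default_rng(0)

n = 5                                          # number of units
W = rng.normal(0, 0.5, size=(n, n))
W = (W + W.T) / 2                              # symmetric couplings
np.fill_diagonal(W, 0.0)                       # no self-coupling
b = rng.normal(0, 0.5, size=n)                 # biases

z = rng.integers(0, 2, size=n).astype(float)   # initial binary state
samples = []

for step in range(20000):
    k = step % n                               # update one unit per step (Gibbs sweep)
    u = W[k] @ z + b[k]                        # local "membrane potential"
    p_on = 1.0 / (1.0 + np.exp(-u))            # logistic activation = firing probability
    z[k] = 1.0 if rng.random() < p_on else 0.0
    if step > 2000:                            # discard burn-in samples
        samples.append(z.copy())

samples = np.array(samples)
print("empirical marginals:", samples.mean(axis=0))

Running the sketch prints the empirical on-probability of each unit; as more samples are collected, these converge toward the marginals of the stored Boltzmann distribution.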