Reconstruction of computational dynamics from neural measurements

A central tenet of theoretical neuroscience is that computations in the nervous system are implemented through the (stochastic) dynamics of neural systems. For instance, working memory and decision-making processes have often been characterized theoretically in terms of (stochastic) competition among multiple attractor states, while sequential processes (as in language or motor programs) have been explained through either limit-cycle dynamics or transitions between saddle states (‘heteroclinic channels’; Rabinovich et al.). Presumably, therefore, much progress in understanding the neural basis of cognition could be made by unraveling the underlying computational dynamics from neural time series observations, such as multiple single-unit recordings or neuroimaging data. In my talk I will discuss several mathematical-statistical approaches toward this objective. These include more ‘traditional’ model-free approaches based on delay embedding theorems and nonlinear basis expansions, as well as more recent approaches for directly inferring (in a maximum-likelihood sense) nonlinear dynamical system models, such as dynamically universal recurrent neural network and nonlinear time series models, from neural time series recordings (e.g. [1]). I will also illustrate the application of these methods to various neurophysiological data sets, and the type of insights we may gain through them.

[1] Durstewitz D (2017). A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements. PLoS Comput Biol 13(6): e1005542.
