Monday, March 19th, 2018

9:00 am – 10:00 am

In this talk, I'd like to discuss the intertwined importance of, and connections among, the three principles of data science in the title. In particular, we demonstrate the power of transfer learning from ImageNet data to neuron measurements collected by the Gallant Lab.
We employ the predictability and stability principles and use deep nets (CNNs) to understand the pattern selectivities of neurons in the difficult-to-characterize primate visual cortex area V4. We achieve state-of-the-art prediction performance and obtain interpretations of diverse V4 neurons through stable "deep tune" visualizations over multiple predictive models.
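As a rough sketch of this style of transfer learning (not the speakers' actual pipeline), one can treat a pretrained ImageNet CNN as a fixed feature extractor and fit a regularized linear readout per neuron; the network choice and all data below are illustrative placeholders.

```python
# Hedged sketch: predict a neuron's responses from pretrained CNN features.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import RidgeCV

# Pretrained ImageNet network used as a fixed feature extractor.
cnn = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
cnn.fc = torch.nn.Identity()  # drop the classifier head, keep pooled features
cnn.eval()

# Placeholder data: n stimuli and one neuron's mean response to each.
n = 200
images = torch.randn(n, 3, 224, 224)           # stand-in for real stimuli
responses = np.random.poisson(5.0, size=n)     # stand-in for spike counts

with torch.no_grad():
    X = cnn(images).numpy()                    # (n, 512) feature matrix

# Regularized linear readout from CNN features to the neuron's response.
ridge = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X[:160], responses[:160])
print("held-out R^2:", ridge.score(X[160:], responses[160:]))
```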

10:30 am – 11:30 am

Our brain’s functions and dysfunctions arise from neural activity at multiple spatiotemporal scales, from small-scale spikes of individual neurons to large-scale network activity measured through local field potentials (LFP) and electrocorticogram. Thus, developing new algorithms that can simultaneously model and decode multiple scales of activity is important for understanding neural mechanisms. Moreover, designing model-based algorithms for closed-loop control of neural activity with stimulation input is critical for establishing functional connectivity in neural circuits and devising precisely tailored therapies for neurological disorders.

We discuss some of our recent work on developing algorithms for modelling, decoding, and control of multiscale neural activity. We first present a multiscale dynamical modelling framework that identifies a unified low-dimensional latent state from hybrid spike-field activity. We show that the framework can combine information from multiple scales of activity recorded from monkeys, and can model the different time-scales and statistical profiles across these scales. We then demonstrate that this framework allows us, for the first time, to decode mood variations over time from distributed multisite human brain activity. Further, we identify the brain sites that are most predictive of human mood states.

Finally, we present a new control-theoretic system-identification approach to characterize brain network dynamics (output) in response to electrical and optogenetic stimulation (input) within our dynamical model. To collect optimal input-output data for model estimation, we design a novel input waveform, a pulse train modulated by binary noise (BN) parameters such as pulse frequency, which we show is optimal for system identification and conforms to clinical safety requirements. We apply the BN electrical and optogenetic stimulation waveforms in human and monkey experiments, respectively, and show that our estimated models can accurately predict both the dynamic and the steady-state neural response to stimulation input. These results show the feasibility of inferring complex brain states from distributed multiscale activity and of identifying predictive dynamic models of the neural response to stimulation. These algorithms could facilitate future closed-loop therapies for neurological disorders and help probe neural circuits.
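For concreteness, here is a minimal sketch of a binary-noise-modulated pulse train of the kind described above; the two frequencies, segment duration, and unit amplitude are illustrative assumptions, not the experimental or clinical values.

```python
# Hedged sketch: a pulse train whose frequency is modulated by binary noise.
import numpy as np

fs = 1000.0           # sampling rate (Hz)
seg_dur = 0.5         # duration of each BN segment (s)
freqs = (10.0, 40.0)  # pulse frequencies selected by the binary parameter
pulse_width = 0.002   # pulse width (s)
n_segments = 20

rng = np.random.default_rng(0)
bn = rng.integers(0, 2, size=n_segments)  # binary-noise parameter sequence

segments = []
for bit in bn:
    seg = np.zeros(int(seg_dur * fs))
    period = int(fs / freqs[bit])         # inter-pulse interval in samples
    width = int(pulse_width * fs)
    for start in range(0, len(seg) - width, period):
        seg[start:start + width] = 1.0    # unit-amplitude pulse
    segments.append(seg)

waveform = np.concatenate(segments)       # stimulation input u(t)
```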

Tuesday, March 20th, 2018

9:00 am – 10:00 am

With increasingly rich physiological and anatomical data sets, new opportunities are emerging for exploring cortical networks. For some time, multi-site recordings have been analyzed with dimensionality-reduction techniques to explore the structure of the neural code or the dynamics of neuronal ensembles. Examples include the relationship between pairwise neuronal correlations and sensory coding; the analysis of high-order dynamical trajectories in ensemble activity; and manifold embeddings of sensory codes.
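A minimal sketch of this kind of analysis, reducing a placeholder population-activity matrix with PCA to expose low-dimensional ensemble trajectories:

```python
# Hedged sketch: PCA on (time points x neurons) population activity.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
activity = rng.poisson(4.0, size=(2000, 120)).astype(float)  # placeholder

pca = PCA(n_components=3)
trajectory = pca.fit_transform(activity)  # low-dimensional ensemble trajectory
print("variance explained:", pca.explained_variance_ratio_.round(3))
```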

As the field of microscale connectomics matures, larger and more complete graphs of neural connectivity are becoming available. The simplest analyses of these networks have addressed the relationship between physiology and connectivity (functional connectomics), or strictly anatomical questions such as community detection. Alternatively, networks might be explored to uncover richer structures. Specifically, we have found that a manifold-embedding analysis of network connections yields surprisingly rich information about sensory representations and their transformation, even in the absence of physiological information. This approach works for both data-driven models of visual cortical networks and for the early stages of deep-learning networks, such as AlexNet.
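A minimal sketch of the idea, with off-the-shelf spectral embedding standing in for the authors' analysis and a toy ring-structured weight matrix standing in for a measured connectome:

```python
# Hedged sketch: manifold embedding of a connectivity graph, no physiology.
import numpy as np
from sklearn.manifold import SpectralEmbedding

# Toy connectome: neurons connect most strongly to near neighbors on a ring.
n = 200
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
dist = np.abs(theta[:, None] - theta[None, :])
dist = np.minimum(dist, 2 * np.pi - dist)     # circular distance
W = np.exp(-dist**2 / 0.1)                    # connection-strength matrix

embedding = SpectralEmbedding(n_components=2, affinity="precomputed")
coords = embedding.fit_transform(W)
# `coords` recovers the ring: latent topology read out from anatomy alone.
```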

More generally, I’ll explore the effectiveness of physiological vs. anatomical measurements for probing the structure of cortical networks, and how a combination of physiology and anatomy might be exploited.

10:30 am – 11:30 am

The mouse is an important model for studying vision because of the molecular, imaging, and genetic tools that are available. However, critics decry the relevance of the mouse model because classical assessments based on spatial frequency analysis imply that its visual capacity is of low quality. This is at odds with observed visually-mediated behaviors, such as prey capture, that are highly precise. We will introduce experimental data obtained with a new stimulus class -- visual flow patterns -- that formally approximate natural visual scenes, such as 'running through grass'. These stimuli evoke visually-mediated responses well beyond those predicted by spatial frequency analysis. Novel dimensionality-reduction algorithms reveal neural activity implicating (i) both feedforward and feedback computations; (ii) the separation of flow responses from grating responses; (iii) different states of the brain; and (iv) challenges to 'receptive field' (rather than network) models of visual cortex. Joint work with the Stryker lab, UCSF, carried out with Luciano Dyballa and Dr. Mahmood Hoseini.
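As a generic illustration only (the actual stimulus class is the lab's own construction), an expanding random-dot flow pattern of the broad kind gestured at above can be generated like this:

```python
# Hedged sketch: frames of an expanding random-dot flow field.
import numpy as np

n_dots, n_frames, speed = 300, 120, 0.02
rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, size=(n_dots, 2))   # dot positions on the screen

frames = []
for _ in range(n_frames):
    pts = pts * (1 + speed)                  # radial expansion from center
    gone = np.abs(pts).max(axis=1) > 1       # recycle dots leaving the screen
    pts[gone] = rng.uniform(-0.1, 0.1, size=(gone.sum(), 2))
    frames.append(pts.copy())
```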

1:30 pm – 2:30 pm

The large-scale reconstruction of synaptic-level wiring diagrams remains an attractive target for achieving greater understanding of nervous systems in health and disease. Progress has been severely limited due to technical issues involved in the imaging and analysis of nanometer-resolution brain imaging data. In this talk, we will discuss recent advances in using deep learning techniques (e.g., recurrent neural networks) and very large scale computation and storage capabilities in order to drive order-of-magnitude progress in automated analysis of 3d electron microscopy data. We will also discuss some of the biology that these projects are enabling, and prospects for making these tools and techniques widely available to neuroinformatics researchers.
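As a toy illustration of voxel-wise deep learning on EM volumes (a plain convolutional stand-in, not the recurrent architectures the talk refers to):

```python
# Hedged sketch: a tiny 3D conv net predicting per-voxel boundary logits.
import torch
import torch.nn as nn

class TinyEMNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 1),             # per-voxel boundary logit
        )

    def forward(self, x):
        return self.body(x)

net = TinyEMNet()
volume = torch.randn(1, 1, 32, 64, 64)      # (batch, channel, z, y, x) patch
logits = net(volume)                        # boundary map, same spatial shape
```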

3:00 pm – 4:00 pm

TBA

Wednesday, March 21st, 2018

9:00 am – 10:00 am

I will describe several recent cognitive and neuroscience experiments that quantify the information flow between organisms (including humans) and their environment. All the experiments support the assertion that the sensing-acting information is close to the optimal trade-off between minimal sensing information and future value. These are specific biological realizations of two general principles: (i) minimization of predictive information under a future-value constraint; (ii) an optimal information bottleneck between past and future information. These different settings, ranging from the fly visual neural code to mice in the Morris Water Maze to the neural responses to auditory surprise in rats and humans, apparently follow the same information-optimization principles.
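One standard way to write the past-future information-bottleneck trade-off named in (ii); the notation is the conventional one, not taken verbatim from the talk:

```latex
% Compress past input X_past into an internal representation Z that stays
% predictive of the future X_future; \beta sets the value of prediction.
\min_{p(z \mid x_{\text{past}})} \; I(X_{\text{past}}; Z) - \beta \, I(Z; X_{\text{future}})
```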

10:30 am – 11:30 am

Most brain functions involve interactions among multiple, distinct areas or nuclei. For instance, visual processing in primates requires the appropriate relaying of signals across dozens of distinct cortical areas. Yet our understanding of how populations of neurons in interconnected brain areas communicate is in its infancy. Here we investigate how trial-to-trial fluctuations of population responses in primary visual cortex (V1) are related to simultaneously recorded population responses in area V2. Using dimensionality reduction methods, we find that V1-V2 interactions occur through a communication subspace: V2 fluctuations are related to a small subset of V1 population activity patterns, distinct from the largest fluctuations shared among neurons within V1. In contrast, interactions between subpopulations within V1 are less selective. We propose that the communication subspace may be a general, population-level mechanism by which activity can be selectively routed across brain areas. [Joint work with Joao Semedo, Amin Zandvakili, Byron Yu, and Adam Kohn.]
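A minimal sketch of the communication-subspace idea via reduced-rank regression on placeholder data (one common way to fit such a subspace; the published analysis is more careful than this):

```python
# Hedged sketch: low-rank map from V1 population activity to V2 activity.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_v1, n_v2, rank = 1000, 80, 40, 3
V1 = rng.standard_normal((n_trials, n_v1))                    # placeholder
V2 = (V1 @ rng.standard_normal((n_v1, n_v2))) * 0.1 \
     + rng.standard_normal((n_trials, n_v2))                  # placeholder

# Reduced-rank regression: fit ordinary least squares, then truncate the
# SVD of the fitted values to rank k.
B_ols, *_ = np.linalg.lstsq(V1, V2, rcond=None)
U, s, Vt = np.linalg.svd(V1 @ B_ols, full_matrices=False)
V2_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-k prediction of V2
# Only a k-dimensional set of V1 activity patterns drives the prediction:
# that set plays the role of the communication subspace.
```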

1:30 pm – 2:30 pm

The abundance of recently obtained datasets on brain structure (connectomics) and function (neuronal population activity) calls for a theoretical framework that can relate them to each other and to neural computation algorithms. In the conventional, so-called reconstruction approach to neural computation, population activity is thought to represent the stimulus. Instead, we propose that similar stimuli are represented by similar population activity vectors. From this similarity-alignment principle, we derive online algorithms that can account for both structural and functional observations. Our algorithms perform online clustering and manifold learning on large datasets.
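A minimal sketch of the similarity-alignment objective, solved offline by eigendecomposition; the talk's algorithms optimize this kind of objective online, with Hebbian and anti-Hebbian synaptic updates:

```python
# Hedged sketch: find outputs Y whose pairwise similarities match the
# inputs', i.e. minimize ||X.T @ X - Y.T @ Y||_F^2 over k-dimensional Y.
import numpy as np

rng = np.random.default_rng(4)
d, k, T = 20, 3, 500
X = rng.standard_normal((d, T))            # inputs: d features x T samples

G = X.T @ X                                # input similarity (Gram) matrix
evals, evecs = np.linalg.eigh(G)
top = np.argsort(evals)[::-1][:k]          # k largest eigenpairs
Y = np.sqrt(evals[top])[:, None] * evecs[:, top].T   # optimal (k x T) output
# Y.T @ Y is the best rank-k approximation of G: similar stimuli end up
# with similar output vectors, as the similarity-alignment principle asks.
```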

Thursday, March 22nd, 2018

10:30 am – 11:30 am

Neuroscience is experiencing a data revolution in which simultaneous recording of many hundreds or thousands of neurons is revealing structure in population activity that is not apparent from single-neuron responses. This structure is typically extracted from trial-averaged data. Single-trial analyses are challenging due to incomplete sampling of the neural population, trial-to-trial variability, and fluctuations in action potential timing. Here we introduce Latent Factor Analysis via Dynamical Systems (LFADS), a deep learning method to infer latent dynamics from single-trial neural spiking data. LFADS uses a nonlinear dynamical system (a recurrent neural network) to infer the dynamics underlying observed population activity and to extract ‘de-noised’ single-trial firing rates from neural spiking data. We apply LFADS to a variety of monkey and human motor cortical datasets, demonstrating its ability to predict observed behavioral variables with unprecedented accuracy, extract precise estimates of neural dynamics on single trials, infer perturbations to those dynamics that correlate with behavioral choices, and combine data from non-overlapping recording sessions (spanning months) to improve inference of underlying dynamics. In summary, LFADS leverages all observations of a neural population's activity to accurately model its dynamics on single trials, opening the door to a detailed understanding of the role of dynamics in performing computation and ultimately driving behavior.
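A minimal PyTorch sketch of the core LFADS idea (an encoder RNN summarizes a trial into an initial condition, a generator RNN unrolls latent dynamics, and a readout yields Poisson firing rates), omitting the variational machinery and controller of the full method; all sizes are placeholders:

```python
# Hedged sketch: encoder RNN -> initial condition -> generator RNN -> rates.
import torch
import torch.nn as nn

class TinyLFADS(nn.Module):
    def __init__(self, n_neurons=50, n_latent=16):
        super().__init__()
        self.encoder = nn.GRU(n_neurons, n_latent, batch_first=True)
        self.generator = nn.GRUCell(1, n_latent)  # input-free latent dynamics
        self.readout = nn.Linear(n_latent, n_neurons)

    def forward(self, spikes):                    # (batch, time, neurons)
        _, g0 = self.encoder(spikes)              # summarize trial into g0
        g = g0.squeeze(0)
        zeros = spikes.new_zeros(spikes.shape[0], 1)
        rates = []
        for _ in range(spikes.shape[1]):
            g = self.generator(zeros, g)          # unroll autonomous dynamics
            rates.append(torch.exp(self.readout(g)))  # positive firing rates
        return torch.stack(rates, dim=1)

model = TinyLFADS()
spikes = torch.poisson(torch.full((8, 100, 50), 0.2))  # placeholder spikes
rates = model(spikes)                                  # de-noised rates
loss = nn.PoissonNLLLoss(log_input=False)(rates, spikes)
```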