01 - About CCNS

The Canadian Computational Neuroscience Spotlight (CCNS) was created following the mass cancellations and postponements of traditional neuroscience conferences during the early stages of the COVID-19 pandemic, including two such meetings amongst the Canadian neuroscience community. The absence of these meetings presented an opportunity to create a brand-new, entirely virtual academic meeting that could take full advantage of the online setting. Given that traditionally-defined trainees and early-career researchers were arguably most impacted by the cancellation of the networking and learning opportunities that conferences present, CCNS was designed as a “trainee-focused” meeting, highlighted by tutorial talks beginning each session, panel discussions with both established and early-career scientists, and a spotlight on trainee presentations.

The first edition of CCNS was planned and implemented entirely in ten weeks and yielded a meeting with more than 450 registrants from every continent. Perhaps most importantly, the limited costs of the virtual setting allowed the meeting to be completely free of charge for all attendees. Every element of the meeting remains available for replay online, another benefit of the virtual setting. This success served as the impetus for making CCNS a recurring academic meeting.

Going forward, CCNS will continue to highlight cutting-edge computational neuroscience research, both in Canada and around the world, while providing unique learning, networking, and presentation opportunities for early-career researchers. The meeting is committed to remaining cost-accessible to the entire academic community, using the virtual setting to maximize accessibility for populations for which physical conferences present a challenge, and maintaining a gender-balanced and diverse lineup of speakers during its continued evolution.

02 - Organizing Committee

About the Organizing Committee

Dr. Scott Rich (@RichCompNeuro, CCNS lead organizer) is currently a Postdoctoral Research Fellow and Neuron to Brain laboratory team lead at the Krembil Brain Institute. His research aims to elucidate the multi-scale interactions driving physiological brain activity by better understanding how they are disrupted in epilepsy, a distinctly multi-scale pathology. This work includes a focus on distinctly human epilepsy by integrating human electrophysiological data into the modeling process at varied spatial scales, ranging from biophysically detailed single neuron models to neural mass models.

Dr. Andreea Diaconescu (@cognemo_andreea, CCNS co-organizer) is currently an Independent Scientist at the Krembil Centre for Neuroinformatics at the Centre for Addiction and Mental Health. Leading the Cognitive Network Modelling team, her research focuses on developing cognitive tasks and computational models that address specific symptoms in psychiatry, with a particular focus on delusions. In combination with neuroimaging and electrophysiological recordings, the aim is to assess the clinical utility of these models in prospective patient studies.

Dr. John Griffiths (@neurodidact, CCNS co-organizer) is a cognitive and computational neuroscientist, with particular research interests in mathematical modelling of large-scale neural dynamics, multimodal neuroimaging data analysis methods, and brain stimulation in the context of neuropsychiatric and neurological disease. He is currently an Independent Scientist at the Krembil Centre for Neuroinformatics at CAMH, where he leads a team focused on whole-brain and multi-scale neurophysiological modelling. He is also an Assistant Professor in the University of Toronto Departments of Psychiatry and Institute of Medical Sciences.

Dr. Milad Lankarany (@MLankarany, CCNS co-organizer) is an Assistant Professor at the University of Toronto Institute of Biomaterials and Biomedical Engineering, and a Scientist at the Krembil Brain Institute. The main focus of his lab's work is to uncover the information processing mechanisms of neural systems. His goal is to understand how information is represented, propagated, and computed. Understanding neural information processing will enable the development of computational algorithms and engineering techniques for optimal control of neural system function. For example, closed-loop neuro-stimulators can be used to adaptively intervene in the neural systems of patients with neurological disorders in order to restore normal activity.

03 - Speaker Spotlight

Session Chairs for CCNSv2

Dr. Maurizio De Pitta: Maurizio De Pitta is joining the Krembil Research Institute, setting up his lab at the Institute by August 2021. He is currently a 'la Caixa' Junior Leader Research Fellow at the Basque Center for Applied Mathematics in Bilbao, Spain, where he carries out multidisciplinary research in applied mathematics with an emphasis on translational and clinical topics in the neurosciences. Maurizio pioneers computational approaches to quantify and replicate neuron-glial interactions in the brain, aiming to elucidate their role in cognition and pathology. He is co-author of “Computational Glioscience” (Springer, 2019), the first book on the topic.

Dr. Etay Hay: Dr. Etay Hay is an Independent Scientist at Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, and an Assistant Professor in the Department of Psychiatry and the Department of Physiology at the University of Toronto. Dr. Hay’s research uses computational models of cortical microcircuits to study the mechanisms of brain processing in health and mental disorders. Dr. Hay and his team integrate human cellular, circuit and gene-expression data to develop detailed computational models of human cortical microcircuits in health, depression and schizophrenia. The Hay lab uses the models to better understand the neuronal circuit mechanisms of brain function and mental health, test in silico new pharmacology for treatment, and identify high-resolution biomarkers in clinically-relevant brain signals to improve the diagnosis and monitoring of mental health.

Dr. Carmen Canavier: Carmen Canavier is the Mullins Professor and Chair, Department of Cell Biology and Anatomy, School of Medicine, Louisiana State University Health Sciences Center at New Orleans (LSUHSC-NO). She is a computational neuroscientist studying synchronization of pulse-coupled oscillators and the nonlinear dynamics of neurons and neural circuits. Her current interests focus on the dopamine neurons that participate in basal ganglia circuits and on gamma and theta oscillations in the hippocampus.

Dr. Bratislav Misic: Bratislav Misic is a mathematician with expertise in neuroimaging and network science. He joined McGill University in 2016, where he leads the Network Neuroscience Lab. His group studies how the links and interactions among brain areas support cognitive operations, complex behavior, and global dynamics.

Speakers for CCNSv2

  • Dr. Richard Naud: Richard Naud is a computational neuroscience researcher and associate professor at the University of Ottawa. His work focuses on the information processing capabilities of cortical neural networks.
  • Dr. Gianluigi Mongillo: Gianluigi Mongillo received an M.Sc. degree in Physics (in 2000) and a Ph.D. degree in Neurophysiology (in 2005) from the University of Rome “La Sapienza” (Rome, Italy). Since 2009, Gianluigi has been a research scientist with the Centre National de la Recherche Scientifique (CNRS).
  • Dr. Sukbin Lim: Sukbin Lim is an Assistant Professor of Neural Science, NYU Shanghai, and Global Network Assistant Professor, NYU. Sukbin’s research uses a broad spectrum of dynamical systems theory, the theory of stochastic processes, and information and control theories to develop and analyze neural network models and synaptic plasticity rules for learning and memory.
  • Dr. Yulia Dembitskaya: Yulia Dembitskaya is a neuroscientist at the College de France’s Team in Dynamics and Pathophysiology of Neuronal Networks and a researcher at the Department of Molecular Neurobiology, Shemyakin-Ovchinnikov Institute of Bioorganic Chemistry of the Russian Academy of Sciences in Moscow, Russia. Yulia’s research focuses on cellular neuroscience, specifically on neuron-glia interactions and how they take part in memory formation mechanisms in the normal and diseased brain, emphasizing Huntington’s, Parkinson’s, and Alzheimer’s diseases.
  • Dr. Michael London: Mickey London is a PI at the Edmond and Lily Safra Center for Brain Sciences (ELSC) at the Hebrew University of Jerusalem, Israel. His lab studies dendritic computation and neural coding using a combination of theoretical and experimental approaches. Currently, the lab focuses on using machine learning approaches to study computations in single neurons. The lab also looks at the effect of neuromodulation on dendritic excitability and the role of cortical control in mouse ultrasonic vocalizations.
  • Dr. Alexandre Guet-McCreight: Alexandre Guet-McCreight is a postdoctoral research fellow at the Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, where he is a member of Dr. Etay Hay’s Brain Circuits Lab. As part of the Hay Lab team, he generates biophysically detailed models of human neurons, which he uses to construct human cortical circuit models. These human circuit models are then used for testing antidepressant pharmacology in silico, as well as for gauging the effects of age-associated neuronal changes on cortical sensory processing.
  • Dr. Steve Prescott: Dr. Steve Prescott is a Senior Scientist at the Hospital for Sick Children and a Professor in Physiology and Biomedical Engineering at the University of Toronto. His lab studies how biophysical properties shape neural coding strategies, and seeks to understand how neural coding becomes pathologically altered in chronic disease states such as neuropathic pain.
  • Dr. Paola Malerba: Paola Malerba is an applied mathematician who has dedicated her work to understanding how neural networks produce transient organized brain rhythms. She is driven by curiosity about how those rhythms arise, how different rhythms interact, and what role they might play in brain function. Her interdisciplinary expertise starts from dynamical systems and includes collaborations with behavioral neuroscientists, psychologists, electrophysiologists, and clinicians. Her publication record includes modeling the electrophysiology of single cells, biophysical network oscillations, and spatio-temporal patterns in human EEG.
  • Dr. Jonathan Rubin: Since receiving his PhD in Applied Math from Brown University in 1996, Jonathan Rubin has been engaged in research in mathematical neuroscience and other topics at the interface of biology and dynamics, with specializations in bursting, rhythm generation, multiple timescale dynamics, and basal ganglia/Parkinson’s disease. Rubin has mentored over 20 PhD students and postdocs and is a member of the 2021 Class of SIAM Fellows.
  • Dr. Alexandra Chatzikalymniou: Alexandra Chatzikalymniou is a postdoc at Krembil RI moving to Stanford University in June. She studies the generation of theta, gamma, and ripple oscillations, and how these rhythms encode information about place cell activity during navigation. She also investigates the generation of evoked response potentials (ERPs) in primates. Her work with network models of unique biophysical realism offers insight into the role of distinct cell types that has so far been elusive.
  • Dr. André Longtin: Andre Longtin is a Professor of Physics at the University of Ottawa, and co-director of its Centre for Neural Dynamics. He works in nonlinear dynamics, stochastic dynamics, and nonlinear time series analysis. He also pursues related research in computational and theoretical neuroscience from synapses to whole-brain activity and behaviour, including sensory coding, feedback, memory, rhythms, active sensing, and model inference through machine learning.
  • Dr. Emma Towlson: Emma Towlson is an Assistant Professor at the University of Calgary, with appointments in the Department of Computer Science, the Department of Physics and Astronomy, and the Hotchkiss Brain Institute. She is a network neuroscientist who believes that complex systems science is at the heart of understanding our interconnected world and the organisms that share it.
  • Dr. Amy Kuceyeski: Amy Kuceyeski is an Associate Professor of Mathematics in the Radiology Department at Weill Cornell Medicine and PI of the Computational Connectomics (CoCo) Lab. For the past decade, Amy has been interested in understanding how the human brain works in order to better diagnose, prognose, and treat neurological disease and injury. The CoCo lab’s main focus is on using quantitative methods, including machine learning, applied to multi-modal neuroimaging data to map brain-behavior relationships and boost recovery after neurological disease or injury.
  • Dr. Caio Seguin
  • Dr. Richard Betzel

Past Speakers

The keynote speakers for the 2020 edition of CCNS were Dr. Nancy Kopell, Dr. Dimitris Pinotsis, Dr. Cameron McIntyre, and Dr. Philip Corlett.

04 - CCNS 2021 Program

CCNSv2 will take place on May 17-18, 2021, and registration is open here. The meeting’s theme will be “Bridging the Spatial Scales of Computational Neuroscience”.

Program

Monday, May 17

  • 8:45 AM EDT: Welcome and Introduction, Dr. Scott Rich
  • 9:00 AM EDT: Tutorial Talk, Synaptic function in several declinations, Dr. Maurizio De Pitta
  • 9:30 AM EDT: Research Talk, Theory and model-based inference of short-term plasticity, Dr. Richard Naud
  • 10:00 AM EDT: Research Talk, Different sites for synaptic storage and their impact on a network’s memory capacity, Dr. Gianluigi Mongillo
  • 10:35 AM EDT: Research Talk, Efficient inference of synaptic learning rule from in vivo data, Dr. Sukbin Lim
  • 11:15 AM EDT: Keynote Address, Lactate supply overtakes glucose when neural computational and cognitive loads scale-up, Dr. Yulia Dembitskaya
  • 12:00 PM EDT: Panel Discussion, Drs. Alexandre Guet-McCreight, Etay Hay, Amy Kuceyeski, and Jonathan Rubin
  • 12:30 PM EDT: Lunch Break
  • 1:00 PM EDT: Keynote Address, Dr. Michael London
  • 2:00 PM EDT: Break
  • 2:05 PM EDT: Research Talk, Effects of reduced inhibition in depression on simulated cortical processing and EEG signals, Dr. Etay Hay
  • 2:35 PM EDT: Tutorial Talk, Optimizing detailed models of human neurons with bluePyOpt, Dr. Alexandre Guet-McCreight
  • 3:10 PM EDT: Break
  • 3:15 PM EDT: Research Talk, Spike timing precision: roles of neuronal, circuit and stimulus properties, Dr. Steve Prescott
  • 4:00 PM EDT: Trainee Spotlight, Giulia Baracchini and Heng Kang Yao
  • 4:20 PM EDT: Trainee Talks 1, Neural Population Modeling— Lucy Owen, Maryan Hasanzadeh Mofrad, Pankaj Gupta, Zachary Peter Friedenberger
  • 4:20 PM EDT: Trainee Talks 2, Neural Population Modeling, in silico— Davor Curic, Quynh-Anh Nguyen, Syed Hussain Ather
  • 4:20 PM EDT: Trainee Talks 3, Neural Population Modeling, Mathematical— Abel Sagodi, Ayush Mandwal, Samy Castro, Liang Chen
  • 4:20 PM EDT: Trainee Talks 4, Single Cell Modeling— Afroditi Talidou, Benjamin Barlow, Chitaranjan Mahapatra, Marina Chugnova

Tuesday, May 18

  • 9:00 AM EDT: Trainee Spotlight, Evan Cresswell-Clay and Gabriel Benigno
  • 9:20 AM EDT: Trainee Talks 1, Computational Psychiatry— Daniel J. Hauke, Gabrielle Allohverdi, Christoph Metzner, Navona Calarco
  • 9:20 AM EDT: Trainee Talks 2, Connectomics— Faranak Heidari, Peter Bedford, Rejwan Salih, Yohan Yee
  • 9:20 AM EDT: Trainee Talks 3, Micro-level Modeling— Carter Johnson, Giulio Bonifazi, Pascal Helson
  • 10:00 AM EDT: Tutorial Talk, Oscillatory mechanisms, Dr. Carmen Canavier
  • 10:20 AM EDT: Research Talk, Circuit mechanisms of hippocampal reactivation during sleep, Dr. Paola Malerba
  • 10:45 AM EDT: Research Talk, Breathe easy: the roles of bursting and non-bursting neurons in robust respiratory rhythm generation, Dr. Jonathan Rubin
  • 11:10 AM EDT: Break
  • 11:35 AM EDT: Research Talk, On the generation of theta rhythms in the hippocampus, Dr. Alexandra Chatzikalymniou
  • 12:00 PM EDT: Keynote Talk, Modeling bursty rhythms and their impact on communication between brain areas, Dr. Andre Longtin
  • 1:00 PM EDT: Panel Discussion, Drs. Alexandra Chatzikalymniou, Andre Longtin, Paola Malerba, and Steve Prescott
  • 1:30 PM EDT: Lunch Break
  • 2:00 PM EDT: Tutorial Talk, Introduction to network neuroscience, Dr. Bratislav Misic
  • 2:20 PM EDT: Research Talk, Maximizing subnetwork engagement via connectome-wide individualized target search, Dr. Emma Towlson
  • 2:55 PM EDT: Research Talk, Psychedelics flatten the brain’s energy landscape, Dr. Amy Kuceyeski
  • 3:30 PM EDT: Research Talk, Dr. Caio Seguin
  • 4:05 PM EDT: Keynote Address, Edge-centric connectomics, Dr. Richard Betzel
  • 4:50 PM EDT: Group Discussion and Meeting Conclusion

Trainee Abstracts

Monday, May 17

  • Spotlight, Giulia Baracchini: Neuronal variability patterns promote the formation and organization of neural circuits. Macroscale similarities in regional variability patterns may therefore be linked to the strength and topography of inter-regional functional connections. To assess this relationship, we used multi-echo resting-state fMRI and investigated macroscale connectivity-variability associations in 154 adult humans (86 women; mean age = 22 yrs). We computed inter-regional measures of moment-to-moment BOLD signal variability and related them to inter-regional functional connectivity. Region pairs that showed stronger functional connectivity also showed similar BOLD signal variability patterns, independent of inter-regional distance and structural similarity. Connectivity-variability associations were predominant within all networks and followed a hierarchical spatial organization that separated sensory, motor and attention systems from limbic, default and frontoparietal control association networks. Results were replicated in a second held-out fMRI run. These findings suggest that macroscale BOLD signal variability is an organizational feature of large-scale functional networks, and shared inter-regional BOLD signal variability may underlie macroscale brain network dynamics.
  • Spotlight, Heng Kang Yao: Cortical processing depends on finely-tuned excitatory and inhibitory connections in neuronal microcircuits. Reduced inhibition by somatostatin-expressing interneurons is a key component of altered inhibition associated with treatment-resistant major depressive disorder (depression), which is implicated in cognitive deficits and rumination, but the link remains to be better established mechanistically in humans. Here, we tested the impact of reduced somatostatin interneuron inhibition on cortical processing in human neuronal microcircuits using a data-driven computational approach. We integrated human cellular, circuit and gene-expression data to generate detailed models of human cortical microcircuits in health and depression. We simulated microcircuit baseline and response activity and found reduced signal-to-noise ratio and increased false/failed detection of stimuli due to a higher baseline activity in depression. Our results thus applied novel models of human cortical microcircuits to demonstrate mechanistically how reduced inhibition impairs cortical processing in depression, providing quantitative links between altered inhibition and cognitive deficits.
  • Neural Population Modeling, Lucy Owen: How do high-order brain network dynamics support narrative understanding? To better understand how dynamic interactions between brain structures support our ongoing thoughts, we examined high-order dynamic correlations of neural data. To do this, we developed a model of neural dynamics that takes in a feature timeseries and outputs estimated first-order network dynamics (i.e., dynamic functional correlations), second-order network dynamics (reflecting the interactions of homologous networks), and higher-order network dynamics (up to tenth-order dynamic correlations). Our approach combines a kernel-based method for estimating the dynamic functional correlations that are similar (within task) across participants, along with a dimensionality reduction approach that enables us to efficiently compute high-order correlations in the data. After validating our model using synthetic data, we applied our approach to an fMRI dataset collected by Simony et al. (2016) as participants listened to an audio recording of a story, as well as scrambled versions of the same story (where the scrambling was applied at different temporal scales). We trained classifiers to take the output of the model and decode the timepoint in the story (or scrambled story) that the participants were listening to. We found that, during each of the listening conditions in the experiment, classifiers that incorporated second-order correlations yielded consistently higher accuracy than classifiers trained only on lower-order patterns. By contrast, these higher-order correlations were not necessary to support (minimally above chance) decoding during a control rest condition. This suggests that the cognitive processing that supported the listening conditions involved second-order (or higher) network dynamics.
  • Neural Population Modeling, Maryan Hasanzadeh Mofrad: In the two-stage model of memory consolidation, memories are first formed in the hippocampus and then transferred to neocortex for long-term storage. This transfer is mediated by synaptic plasticity, for which the co-occurrence of pre- and post-synaptic activity plays a critical role. While it has become increasingly clear that sleep oscillations actively and causally contribute to this process, it remains unclear to what extent these oscillations coordinate activity across areas in neocortex. Based on early recordings in animals under anesthesia, sleep-like rhythms were generally considered to occur across a wide range of cortex, creating a state of large-scale synchrony. Recent work, however, has challenged this idea by reporting isolated sleep rhythms such as local slow-oscillations and spindles. What is the spatial scale of sleep rhythms in cortex? To answer this question, we adapted deep learning algorithms initially developed for detecting earthquakes and gravitational waves in high-noise settings to the analysis of neural recordings in sleep. We studied sleep spindles in non-human primate electrocorticogram (ECoG), human electroencephalogram (EEG), and clinical intracranial electroencephalogram (iEEG) recordings. Our approach, which detects a range of clearly formed large- and small-amplitude spindles during sleep, reveals that sleep spindles co-occur over a broad range of cortex. In particular, multi-area spindles, where more than 10 electrode sites exhibit this rhythm simultaneously, are much more frequent than previously estimated by amplitude-thresholding approaches, which tend to select only the highest-amplitude spindles and can miss events that temporarily fall below threshold. Because spindles are intrinsically related to sleep-dependent consolidation of long-term memories, these results have important implications for the distribution of engrams in the cortex of primates and humans. We lastly present results from human EEG under varying memory loads and show our method detects an increase in multi-area spindles following high-load visual working memory tasks.
  • Neural Population Modeling, Pankaj Gupta: Mice can learn to control specific neuronal ensembles using sensory (e.g., auditory) cues (Clancy et al. 2014) or even artificial optogenetic stimulation (Prsa et al. 2017). In the present work, we measure mesoscale cortical activity with GCaMP6s and provide graded auditory feedback (within ~100 ms after GCaMP fluorescence) based on changes in dorsal-cortical activation within specified regions of interest (ROIs) under a specified rule. We describe a compact, low-cost optical brain-machine interface (BMI) capable of image acquisition, processing, and closed-loop auditory feedback and rewards, using a Raspberry Pi. The changes in fluorescence activity (ΔF/F) are calculated against a running baseline. Two ROIs (R1, R2) on the dorsal cortical map were selected as targets. We started with a rule of ‘R1-R2’ (ΔF/F of R1 minus ΔF/F of R2), where the activity of R1 relative to R2 was mapped to the frequency of the audio feedback, and when it crossed a set threshold, a water drop reward was generated. To investigate learning in this context, water-deprived tetO-GCaMP6s mice (N=18) were trained for 30 minutes every day on the system for several days, with the task of increasing the audio frequency to obtain rewards. We found that mice could modulate activity in the rule-specific target ROIs to earn an increasing number of rewards over days. Analysis of the reward-triggered ΔF/F over time indicated that mice progressively learned to activate the cortical ROI to a greater extent. In conclusion, we developed an open-source system (to be released) for closed-loop feedback that can be added to experimental setups for brain activity training and could potentially be effective in inducing neuroplasticity.
  • Neural Population Modeling, Zachary Peter Friedenberger: Recent theoretical work suggests that populations of neurons may communicate multiple streams of information simultaneously. This hypothesized neural code is bivariate, such that the probability of bursting (burst fraction) encodes contextual information (e.g., attention) and the number of events encodes sensory information (e.g., visual features). However, it has yet to be shown that populations of neurons utilize this form of burst coding in vivo. To this end, we analyzed electrophysiology data from head-fixed mice undergoing a go/no-go natural scene change detection task. Using activity from the primary visual cortex, we tested whether there existed significant modulation of these signals centered around the novel image change and reward delivery. We found that changes in burst fraction upon image change were small and could be explained by either random shuffling of bursts or a univariate neural code (e.g., Poisson process). These results were consistent across cortical layers. No significant change in burst fraction was associated with hit or missed change image. Additionally, we found that changes in burst fraction were not significant at reward delivery. These preliminary results suggest that if there is a bivariate neural code it may not be responsible for novelty, communicating reward, or error-correcting signals. However, to fully rule out the possibility of bivariate neural code, different timescales, task structures, and brain regions still need to be investigated.
  • Neural Population Modeling (in silico), Davor Curic: Does the brain optimize itself for the storage and transmission of information, and if so, how? Based on observations that neurons in a resting brain and neurons in cultures often exhibit scale-free collective dynamics in the form of information cascades deemed “neuronal avalanches”, the physics-based critical brain hypothesis posits that the brain self-tunes to a critical point or regime to achieve optimality. This critical point or regime in particular separates seizure-like states from coma-like ones. Yet, how such optimality of information transmission is related to behaviour, and whether it persists under behavioural transitions, is an open challenge in the field. Here, we tackle it by studying behavioural transitions in mice using two-photon calcium imaging of the retrosplenial cortex. This area of the brain is well positioned to integrate sensory, mnemonic, and cognitive information by virtue of its strong connectivity with the hippocampus, medial prefrontal cortex, and primary sensory cortices. Our work shows for the first time that the response of the underlying neural population to behavioural transitions can vary significantly between different subpopulations, such that one needs to take the structural and functional network properties of these subpopulations into account to understand the properties at the total population level. Specifically, we show the retrosplenial cortex contains at least one subpopulation capable of switching between two different scale-free regimes, indicating an intricate relationship between behaviour and the optimality of neuronal response at the subgroup level. This calls for a reinterpretation of the emergence of self-organized criticality in neuronal systems.
  • Neural Population Modeling (in silico), Quynh-Anh Nguyen: Intermittent synchronization in a pyramidal-interneuron gamma network. Quynh-Anh Nguyen (1), Leonid L. Rubchinsky (1, 2); (1) Department of Mathematical Sciences, Indiana University Purdue University Indianapolis, Indianapolis, IN, United States; (2) Stark Neurosciences Research Institute, Indiana University School of Medicine, Indianapolis, IN, United States. Gamma synchronization plays a significant role in many cognitive functions such as perception and memory formation. Synchronization is not constant but rather fluctuates over time. We use a conductance-based pyramidal-interneuron network to study neuronal synchrony in the gamma frequency band. Excitatory synapses can facilitate temporal cohesion, while inhibitory synapses, with their longer effective time window, can regulate the spiking rhythm and thus phase-lock excitatory and inhibitory neurons together [2]. Moreover, while both healthy and diseased brains have many short intervals of desynchronization, there are subtle differences in their duration distributions [1, 4]. We therefore aim to investigate how excitation and inhibition in a network affect the temporal pattern of synchrony, as well as the functional implications of various desynchronization distributions. Acknowledgement: This work receives support from NSF DMS 1813819. References: 1. E. Malaia, S. Ahn, L.L. Rubchinsky. Dysregulation of temporal dynamics of synchronous neural activity in adolescents on autism spectrum. Autism Research 13: 24-31, 2020. 2. G. Buzsáki. Rhythms of the Brain. Oxford University Press, 2006. 4. S. Ahn, L.L. Rubchinsky, C.C. Lapish. Dynamical reorganization of synchronous activity patterns in prefrontal cortex - hippocampus networks during behavioral sensitization. Cerebral Cortex 24: 2553-2561, 2014.
  • Neural Population Modeling (in silico), Syed Hussain Ather: Connectome-based neural mass modelling is the emerging computational neuroscience paradigm for modelling large-scale network dynamics observed in whole-brain activity measurements such as fMRI and M/EEG. Estimating physiological parameters by fitting these models to empirical data is challenging, however, due to large network sizes, often physiologically detailed system equations, and the need for long simulation runs (e.g., 10+ minutes of simulated data at sub-millisecond integration step sizes). Here we introduce a novel approach to connectome-based neural mass model parameter estimation, employing optimization tools developed for deep learning. We cast the system of differential equations representing both neural and haemodynamic activity dynamics as a deep neural network, which allows us to use off-the-shelf algorithmic differentiation-based methods for parameter estimation. The approach is demonstrated using the two-dimensional reduced Wong-Wang equations introduced by Deco and colleagues, also known as the Dynamic Mean Field model. Additional optimization constraints are explored and prove fruitful, including use of an analytically derived Jacobian, and restricting the model to domains of parameter space yielding metastable dynamics. Training of the network proceeds on the basis of this input/output mapping of stochastic noise to connectivity patterns. Using this approach, we first demonstrate robust recovery of physiological model parameters in synthetic data, and then, as a proof of principle, apply the framework to modelling of empirical resting-state fMRI data from the Human Connectome Project database. Future work will explore generalization to other models of neural dynamics, to task-evoked activity, and to non-fMRI measurement types.
  • Neural Population Modeling (Mathematical), Abel Sagodi: Dynamical systems have played a huge role in modeling neural dynamics. Conley index theory has been used to analyze the topological structure of invariant sets of diffeomorphisms and of smooth flows. A combinatorial version of the theory can help to describe and capture the global qualitative behavior of dynamical systems. In this paper, we show applications of (combinatorial) Conley index theory to certain models used in theoretical neuroscience. This application demonstrates the usefulness of the continuation graph for analyzing computational models in neuroscience, through which we can determine which parameter values result in which kinds of behavior. We analyze the dynamics of two connected excitatory-inhibitory Wilson-Cowan networks over a range of possible parameter values. With this, we investigate which parameters result in the occurrence of seizures in this simple epilepsy model. In the Wilson-Cowan network, a seizure is defined as a state with a higher (average) activity than the baseline, i.e. the fixed point of the system. Further, we identify a parameter value for which the Wilson-Cowan network exhibits chaos, i.e. for which one can find a Poincaré section from whose Poincaré map there exists a surjection onto a subshift of finite type on two symbols.
  • Neural Population Modeling (Mathematical), Ayush Mandwal: In classical eyeblink conditioning, Purkinje cells within the cerebellum are known to suppress their tonic firing rates for a well-defined time period in response to the conditional stimulus after training. The temporal duration of the drop in tonic firing rate depends on the time interval between the onsets of the conditional and unconditional training stimuli. Direct stimulation of parallel fibers and the climbing fiber by electrodes was found to be sufficient to reproduce the same characteristic behavior of Purkinje cells. In addition, the specific metabotropic glutamate receptor type 7 (mGluR7) was found responsible for the initiation of the response, suggesting an intrinsic mechanism within Purkinje cells for temporal learning. To understand time-encoding memory formation within individual Purkinje cells, we propose a biochemical mechanism that aims to answer key aspects of the “coding problem” of neuroscience. According to the proposed mechanism, the time memory is encoded within the dynamics of a set of proteins: mGluR7, G-protein, the G-protein-coupled inward-rectifier potassium ion channel, Protein Kinase A, and Protein Phosphatase 1, which self-organize into a protein complex. The intrinsic dynamics of these protein complexes can differ and thus can encode different time durations during which the Purkinje cell suppresses its own tonic firing rate. The time memory is encoded within their effective dynamics, and altering their dynamics means storing a different time memory. The proposed mechanism is verified by both a minimal and a comprehensive mathematical model of the conditional response behavior of the Purkinje cell, which yields testable experimental predictions.
  • Neural Population Modeling (Mathematical), Samy Castro: The activity of the cortex in mammals constantly fluctuates in relation to cognitive tasks but also during rest. Brain regions' ability to display ignition, a fast transition from low to high activity, is central for the emergence of conscious perception and decision-making. Ignition was simulated using a mean-field whole-brain model, and we assessed how ignition differs between a reference model embedding a realistic human connectome and alternative models based on a variety of randomized connectome ensembles. Here, we investigate how the intrinsic propensity of different regions to get ignited is determined by the structural connectome's specific topological organization. We found that ignition in the human cortex is first triggered on the strongest and most densely interconnected cortical areas (the "ignition core"), emerging at a lower global excitability value than for the surrogate connectomes. Furthermore, when increasing the strength of excitation, the propagation of ignition outside of this initial core (which can self-sustain its high activity) is much more gradual than for any of the randomized connectomes, allowing for graded control of the number of ignited regions. We explain both these properties in terms of the exceptional weighted core-shell organization of the empirical connectome. Finally, we speculate that developmental and evolutionary constraints are at the root of this mesoscale organization that supports these enhanced ignition dynamics in cortical activity.
  • Neural Population Modeling (Mathematical), Liang Chen: Hysteresis dynamics have been described in a vast number of experimental biological studies. Many such studies are phenomenological, and a mathematical appreciation has not attracted enough attention. In this paper, we explore the nature of hysteresis and study it from the dynamical systems point of view by using bifurcation and perturbation theories. We first classify hysteresis according to how the system behaviour transitions between different types of attractors. Then, we focus on a mathematically amenable situation where hysteretic movements between the equilibrium point and the limit cycle are initiated by a subcritical Hopf bifurcation and a saddle-node bifurcation of limit cycles. We present an analytical framework, using the method of multiple scales, to obtain the normal form up to fifth order. Theoretical results are compared with time-domain simulations and numerical continuation, showing good agreement. Although we consider the time-delayed FitzHugh-Nagumo neural system in the paper, the generalization to other systems or parameters should be clear. The general framework we present can be naturally extended to the notion of bursting activity in neuroscience, where hysteresis is a dominant mechanism for generating bursting oscillations.
  • Single Cell Modeling, Afroditi Talidou: The generation of an action potential that propagates along a nerve axon has been a problem of significant interest since the early ’50s. In this talk, I will discuss the FitzHugh-Nagumo model on a surface of a long, thin cylinder that represents the axonal membrane of a single neuron. This model is a system of a partial differential equation coupled with an ordinary differential equation in two dimensions (plus time). Key questions are the existence of a pulse – a special solution that travels along the length of the axon – and its stability under small perturbations of the initial conditions and the geometry.
  • Single Cell Modeling, Benjamin Barlow: Impact of Sodium Channel Distribution in the Axon Initial Segment on the Initiation and Backpropagation of Action Potentials: We are interested in the biophysics of forward and backward propagation of action potentials (APs), as they are both important for learning. The axon initial segment (AIS) initiates APs in a variety of neurons. Pyramidal cells contain two types of voltage-gated sodium channels: Nav1.2 (high threshold) and Nav1.6 (low threshold). These channels are not uniformly distributed in the AIS. The density of Nav1.2 is greatest near the soma, and Nav1.6 density peaks further down the AIS, away from the soma. While this distribution is observed, its purpose remains unclear. Counterintuitively, published simulations suggest that concentration of high-threshold channels near the soma lowers the threshold for backpropagation. We find that this is true when stimulating at the axon. However, the opposite is true when stimulating at the soma. We propose an intuitive explanation: the cell becomes more excitable (including for backpropagation) when the Nav distribution places more low-threshold channels closer to the site of stimulation. Our results suggest that the observed distribution increases the backpropagation threshold. We use the time-evolution of integrated Nav current to support this description, and discuss the effect of parameters such as AIS geometry. *Funded by NSERC (Canada).
  • Single Cell Modeling, Chitaranjan Mahapatra: Quantitative study of action potential propagation in a one-dimensional strand of detrusor smooth muscle cells Urinary incontinence is associated with enhanced spontaneous action potentials (sAPs) of the detrusor smooth muscle (DSM). One of the key properties of APs is their non-attenuating propagation along lengths of cable-like structures such as axons and muscle cells. To ascertain whether an AP would exhibit this property, we constructed a one-dimensional cable model of DSM by linking five cells end-to-end, with electrical connectivity provided via gap junctions. The computational AP is simulated from a previously designed, biophysically constrained single DSM cell model, which incorporates nine ion channels. The peak amplitude of the propagated APs is higher than that of the evoked control AP, due to greater charge dissipation to neighbouring segments in both directions for the evoked one. The convex-upward foot of the AP at the stimulus injection point is converted into a concave-upward foot in the propagated APs, as expected from theory owing to the effect of cable properties. Given these findings, it can be hypothesized that some proportion of the varied AP shapes in DSM cells reported in published experimental recordings could be explained on the basis of whether APs were recorded at or close to the locus of neurotransmitter action, or at a distance from it. Our exploration of spike propagation via a 1-D model is a preliminary one; a biophysically realistic 3-D model is essential for a more physiologically realistic investigation.
  • Single Cell Modeling, Marina Chugnova: Facilitation of bursting by the iSite in GnRH neurons. Located in the hypothalamus, the gonadotropin-releasing hormone (GnRH) neurons trigger the reproductive axis through the synchronized release of GnRH. The dendrite structure of GnRH neurons is highly unusual. One of the distinct properties of the GnRH neuron dendrite is the presence of a special segment, called the iSite, located roughly 100 microns from the soma, with a significantly higher density of sodium channels. More than 30 percent of action potentials are initiated at this segment rather than at the soma. We conjecture that the main purpose of the iSite is the facilitation of bursting rather than the initiation of the action potential. Our conclusions are supported by computer modeling results.
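Several abstracts in this session build on Wilson-Cowan-type excitatory-inhibitory population models. As a purely illustrative sketch of the kind of dynamics involved (this is not any presenter's actual code; the coupling weights, external inputs, and sigmoid parameters below are placeholder assumptions), a single excitatory-inhibitory pair can be Euler-integrated as follows:

```python
import numpy as np

def sigmoid(x, a=1.5, theta=3.0):
    """Sigmoidal population response function (illustrative parameters)."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan(E0=0.1, I0=0.1, P=1.25, Q=0.0, T=50.0, dt=0.01):
    """Euler-integrate one excitatory (E) / inhibitory (I) Wilson-Cowan pair.

    All weights and inputs are placeholder values chosen for illustration,
    not parameters taken from any of the abstracts above.
    """
    wEE, wEI, wIE, wII = 16.0, 12.0, 15.0, 3.0  # hypothetical coupling weights
    n = int(T / dt)
    E = np.empty(n)
    I = np.empty(n)
    E[0], I[0] = E0, I0
    for t in range(n - 1):
        E[t + 1] = E[t] + dt * (-E[t] + sigmoid(wEE * E[t] - wEI * I[t] + P))
        I[t + 1] = I[t] + dt * (-I[t] + sigmoid(wIE * E[t] - wII * I[t] + Q))
    return E, I
```

In the epilepsy framing of the Sagodi abstract, a "seizure" state corresponds to the trajectory settling at an average activity above the baseline fixed point; sweeping the external drive P or the coupling weights maps out which parameter regimes produce such elevated states.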

Tuesday, May 18

  • Spotlight, Evan Cresswell-Clay: Astrocytes' complex branched anatomy is increasingly regarded as a structural component of the cell’s functional specification. However, whether and to what extent the astrocyte’s anatomy affects the cell’s functional readout, in terms of its intracellular calcium signaling, is not understood. We thus set out to quantitatively characterize the branching anatomy of hippocampal astrocytes, using descriptive statistics for multidimensional morphology data. Unsupervised clustering is then used to identify putative structural signatures for different categories of astrocytic anatomy. Based on such categories, we use generative modeling to grow different lines of astrocytes in silico, each characterized by its statistics of morphological parameters. Different experimental-like protocols then probe the artificial astrocytes to seek a characterization of the spatiotemporal properties of calcium signals at multiple intracellular scales, from microdomains to mesoscopic motifs and whole branches. Such analysis provides a template of calcium signaling patterns that can be unequivocally related to anatomical features and that accounts for the richness of astrocytic dynamics documented in the literature.
  • Spotlight, Gabriel Benigno: Sensory neuroscience has focused a great deal of its attention on characterizing the mean firing rate that is evoked by a stimulus, and while it has long been recognized that the firing rates of individual neurons fluctuate around the mean, these fluctuations are often treated as a form of internally generated noise. There is, however, evidence that these ongoing fluctuations of activity in sensory cortex during normal, waking function shape neuronal excitability and responses to external input. We have recently found that spontaneous fluctuations are organized into waves traveling at speeds consistent with the speed of action potentials traversing unmyelinated horizontal cortical fibers (0.1-0.6 m/s) across the cortical surface (Davis, Muller et al., Nature, 2020). These waves systematically modulate excitability across the retinotopic map, strongly affecting perceptual sensitivity as measured in a visual detection task. The underlying mechanism for these waves, however, is unknown. Further, it is unclear whether waves are consistent with the low rate, highly irregular, and weakly correlated asynchronous-irregular dynamics observed in computational models and cortical recordings in vivo. Here, we study a large-scale computational model of a cortical sheet, with connections ranging up to biological scales. Using an efficient custom simulation framework, we study networks with topographically-organized connectivity and distance-dependent axonal conduction delays from several thousand up to one million neurons. We find that spontaneous traveling waves are a general property of these networks and are consistent with the asynchronous-irregular regime. These waves are well matched to spontaneous waves recorded in the neocortex of awake monkeys. Further, the increase in spiking probability during waves is low, yielding a sparse-wave regime that offers a unique operating mode, where traveling waves coexist with locally asynchronous-irregular dynamics, without inducing deleterious neuronal correlations.
  • Computational Psychiatry, Christoph Metzner: Abnormalities in the synchronized oscillatory activity of neurons in general, and in the gamma band specifically, might play a crucial role in the pathophysiology of schizophrenia. While these changes in oscillatory activity have traditionally been linked to alterations at the synaptic level, we demonstrate here, using computational modelling, that common genetic variants of ion channels can contribute strongly to this effect. Our model of primary auditory cortex highlights multiple schizophrenia-associated genetic variants that reduce gamma power in an auditory steady-state response task. Furthermore, we show that combinations of several of these schizophrenia-associated variants can produce similar effects as the more traditionally considered synaptic changes. Overall, our study provides a mechanistic link between schizophrenia-associated common genetic variants, as identified by genome-wide association studies, and one of the most robust neurophysiological endophenotypes of schizophrenia.
  • Computational Psychiatry, Daniel J. Hauke: Paranoid delusions (PD) are a frequent and burdensome symptom in early psychosis. However, how PD emerge is still unclear. Here, we investigate the computational mechanisms of emerging paranoia. We analyzed 15 unmedicated first episode psychosis patients (FEP), 16 individuals at clinical high-risk for psychosis (CHR), and 16 healthy controls (HC). We examined participants’ behavior during a social learning task designed to probe learning about others’ intentions. We formulated two competing hypotheses, comparing the standard Hierarchical Gaussian Filter (HGF) with a mean-reverting HGF that formalizes the notion that emerging psychosis may be characterized by altered perception of environmental volatility. Behaviorally, FEP displayed reduced flexibility to take volatility into account (group-by-volatility interaction: F = 4.27, p = 0.020). The winning model was the standard HGF for HC (φ = 86%, f = 0.95) and CHR (φ = 94%, f = 0.79), but the mean-reverting HGF for FEP (φ > 99%, f = 0.95), in line with an altered perception of volatility in this population. Post-hoc tests of model parameters indicated that FEP perceived the environment as more volatile over time (p = 0.013) and showed a reduced coupling strength between hierarchical levels (p = 0.036) compared to HC. Our results suggest that FEP may be characterized by a different computational mechanism, namely perceiving the environment as increasingly volatile, which may give rise to aberrantly salient prediction errors in psychosis. Furthermore, this modeling approach may prove useful for identifying vulnerability to transition to psychosis, as we observed greater heterogeneity in CHR.
  • Computational Psychiatry, Gabrielle Allohverdi: Delusions and auditory hallucinations are debilitating psychosis symptoms. In schizophrenia, the auditory mismatch negativity (MMN) component, a signature of surprise in response to unpredictable tones, has been consistently shown to be reduced and is a biomarker of psychosis. Under a Bayesian framework of learning, this is believed to reflect aberrant updating of the brain’s hierarchical predictive model from auditory sensory inputs. This impairment in predictive updating can be mimicked through NMDAR antagonism, which creates a similar reduction in the auditory MMN via changes in AMPA- and NMDA-receptor plasticity. Downward NMDAR signaling is imperative to perceptual updating, sending predictions and precisions to lower levels that, in turn, send ascending prediction errors. To further understand these computational and neural mechanisms, we investigated the modulatory effects of ketamine on perceptual predictions during a roving auditory MMN paradigm. We applied the Hierarchical Gaussian Filter (HGF) to single-trial EEG data obtained from healthy volunteers who were administered s-ketamine in a placebo-controlled study. We find that ketamine significantly reduces the effects of higher-level prediction errors (ε_3) and posterior precisions (π̂_1) without impacting predictions or lower-level prediction errors (ε_2). Further dynamic causal modelling of these effects revealed that ketamine impacts connectivity of the secondary to primary auditory cortex and of the right inferior frontal to secondary auditory cortex, suggesting impairment in top-down precision-related signaling. These findings complement current evidence that overly uncertain prior beliefs and increased sensory precision explain aberrant perceptual inference in early psychosis, and encourage further investigation into the effects of pharmacological interventions that impact NMDA-receptor antagonism.
  • Computational Psychiatry, Navona Calarco: Title: Multivariate associations among white matter microstructure, neurocognition, and social cognition in people with a schizophrenia spectrum disorder and healthy control participants Authors: Navona Calarco, Lindsay D. Oliver, Michael Joseph, MohammedReza Ebrahimi, Colin Hawco, Erin W. Dickie, Pamela DeRosse, James M. Gold, George Foussias, Anil K. Malhotra, Robert W. Buchanan, Aristotle N. Voineskos Background. Relationships among white matter microstructure, and neurocognition and social cognition, are not well-characterized. We examine doubly-multivariate associations among these two feature sets, in healthy individuals and those with a schizophrenia spectrum disorder (SSD). Methods. We derived two samples from the ‘Social Processes in Neurobiology of the Schizophrenia(s) (SPINS)’ study. The test sample (n=135, 89 SSD) was scanned on a GE Discovery; the replication sample (n=173, 91 SSD) on harmonized Siemens PRISMAs. We performed canonical correlation analysis, with fractional anisotropy estimates in the predictor set (mainly association and interhemispheric tracts), and neurocognitive and social cognitive domain performance scores in the criterion set. Results. In both samples, we observed a high canonical correlation between predictor and criterion (RcTEST=.71; RcREPLICATION=.72), which permutation testing showed to be significant (T2TEST=3.65, p=.044; T2REPLICATION=3.33, p<.001), and driven by the first canonical variate (pTEST=.02; pREPLICATION<.001). The body of the corpus callosum and the right uncinate fasciculus contributed to the predictor set in both samples; additionally, the left inferior longitudinal fasciculus contributed in the test sample, and the right arcuate and left uncinate fasciculi in the replication sample. The criterion set showed contributions from several neurocognitive domains, most prominently processing speed and verbal and visual learning, and higher-order social cognition, in both samples. Conclusion. We found similar patterns among white matter microstructure, and neurocognition and social cognition, in two samples. Further, our results underscore that diagnosis (HC or SSD) does not determine multivariate patterns of microstructure and cognition.
  • Connectomics, Faranak Heidari: Epilepsy is a neurological disorder that affects 0.6 percent of Canadians. Approximately 30 percent of epilepsy patients are drug-resistant. To achieve seizure freedom, surgery and removal of a brain area is an option if a single area that causes the seizures can be identified. However, one third of these surgeries fail to lead to seizure freedom. Identifying the “seizure-onset zone” typically requires invasive or non-invasive electrical recordings of the brain. A fundamental understanding of the mechanisms behind seizure spread, captured in the electrical recordings, and their link to brain anatomy is still missing. This novel study addresses how seizure spread in the brain is determined by anatomical connections. Recently, a theoretical framework has been introduced for decomposing white matter connections into their spatial frequency content, called connectome harmonics. The repertoire of the connectome harmonics changes as brain states change over time. In our study, we will investigate the relation of an individual’s specific connectome harmonics, extracted from Diffusion Tensor Imaging (DTI) data, to the patterns of seizure propagation obtained from intracranial electroencephalography (iEEG). This research will be the first of its kind in the epilepsy literature. Hypothesis: An individual’s specific connectome harmonics govern the pattern of seizure propagation. A theoretical understanding of brain activity patterns during seizures may give clinicians objective tools to identify candidate areas for surgery more effectively. Ultimately, my goal is to advance therapeutic options for a better quality of life for epilepsy patients.
  • Connectomics, Peter Bedford: Changes in effective connectivity after acute administration of lysergic acid diethylamide (LSD) have recently been examined to study the neural mechanisms underlying the effects of psychedelics on perception and cognition. Here, we investigate changes in whole-brain effective connectivity during rest in LSD compared to placebo for the first time in a fully exploratory manner. Analysis of resting-state fMRI data over whole-brain networks has been facilitated by the application of the recently introduced regression dynamic causal modelling (rDCM) framework. The training of a random forest classifier on the parameters obtained by fitting rDCM to the fMRI data allows us to examine whether effective connectivity patterns can discriminate between LSD and placebo states. Furthermore, this approach provides a ranking of the connections most impacted by LSD administration. We found that the effective connectivity parameters can be classified into the conditions with a balanced accuracy of 92%, and that the most significant connections involved visual areas including the occipital poles and sensorimotor networks, such as connections between the pre- and postcentral gyri and the cerebellum. Our results also confirmed the previously reported modulation of cortico-thalamo-striato-cortical feedback loops following LSD. These results broaden the scope of our understanding of the mechanisms of LSD. In the future, our method will be applied to study whether subject-specific effective connectivity patterns can predict subjective states under LSD at baseline and prior to administration.
  • Connectomics, Rejwan Salih: In the 1980s, it was shown that neural networks operating on local learning rules (e.g. Hebbian) were vastly outperformed by those using non-local learning rules (e.g. backpropagation) in terms of memory storage capacity. However, this was shown by analysing networks of binary units, a great simplification of neural activity. What if we used threshold-linear/ReLU units? Analytically, it has been shown that associative networks of such units can surpass the Gardner bound. Moreover, beyond the memory capacity of a given associative network, the dynamics of the system in retrieving memories shows a qualitative change to not being able to retrieve any memory. This transition to no-retrieval can be characterised by either a first- or second-order phase transition. However, we do not have a complete understanding of the relationship between key parameters of such a system (e.g. connectivity, pattern distribution, etc.) and the corresponding phase transition. In this presentation, I will show preliminary results obtained from simulating associative networks with different key parameter choices to assess the corresponding phase transition order.
  • Connectomics, Yohan Yee: Correlations between volumes of brain regions (termed “structural covariance”) have been observed in the brain, and are thought to arise from neuronal connectivity (“the connectivity hypothesis”). It is known that neuronal activation within a region can induce volume changes detectable through MRI; the connectivity hypothesis posits that coordinated activation over a structural backbone of connected regions results in the volume covariation patterns that are seen. Indeed, many neuroimaging studies use structural covariance as a proxy for wiring in the absence of data on connectivity. Here, using >600 high-resolution 3D mouse brain structural images (obtained via 7T MRI) in conjunction with datasets on gene expression (in-situ hybridization) and connectivity (viral tracing), I construct structural covariance networks and show that: 1) in normal mice, while there is an association between structural covariance and neuronal connectivity, patterns of coordinated gene expression better explain structural covariance, 2) using genetically modified mice that lack certain white matter tracts, the absence of brain connectivity does not decrease structural covariance, contrary to the connectivity hypothesis, and 3) using longitudinally-imaged mice, structural covariance is already present by birth and decreases over postnatal life, suggesting that postnatal plasticity acts on brain regions in an independent (rather than a coordinated) manner. By connecting imaging data across spatial scales—from macroscale structure through the mesoscale connectome to gene expression—I make the case that it is the intrinsic coordinated development of brain regions, rather than their interconnections, that drives the formation of the large-scale structural correlation patterns seen in the brain.
  • Micro-level Modeling, Carter Johnson: Astrocytes are a type of brain cell that play a number of important modulatory roles. In many brain areas, astrocytes can partially wrap around synapses to form “tripartite synapses” (presynaptic neuron - astrocyte - postsynaptic neuron). This wrapping allows the astrocyte to interface directly with the synaptic cleft and modulate the synaptic signal in a number of ways. In this work we explore one such pathway of astrocyte-neuron interaction. Namely, we study how the astrocyte calcium activity can affect the excitability of the postsynaptic neuron by altering the extracellular concentrations of different ion species. We present a model of the astrocyte that includes biologically-constrained key transmembrane potassium, calcium, sodium and glutamate fluxes, and show their implications for excitability of nearby neurons. We also discuss the challenges in selecting and adapting the correct channel and transporter models from the literature.
  • Micro-level Modeling, Giulio Bonifazi: Title: Alteration of astrocytic glutamate transporters drives tristability at the onset of Alzheimer’s disease. At Alzheimer’s disease (AD) onset, accumulation of amyloid-β (Aβ) correlates with excitotoxicity and alteration of glutamate uptake. Oligomeric Aβ in mouse cultures modifies the expression of astrocytic GLT1 transporters, which remove most of the extracellular glutamate, preventing excitotoxicity. In this regard, we consider how extracellular Aβ modifies GLT1 expression and how it impacts the glutamate time course in the peri-synaptic space. Accordingly, we develop a mathematical model for glutamate diffusion and uptake by astrocytic transporters. Since extracellular glutamate and Aβ both modulate and depend on calcium homeostasis and the firing properties of the tissue, we include them in our model to estimate the conditions for excitotoxicity. Therefore, we scale up our description to the tissue level, and we consider the dynamics of the average firing rate, glutamate, Aβ, and intracellular calcium concentration. Our model predicts that when Aβ lowers GLT1 concentration below a threshold, the accumulation of extracellular glutamate increases. This promotes a positive feedback loop that induces further synaptic glutamate release and thereby excitotoxicity. When including calcium and firing dynamics, changes in astrocytic glutamate uptake and calcium feedback on the firing activity result in a third, intermediate state: the asymptomatic stage of the disease, which could degenerate into pathology or reverse into a healthy brain. These results provide theoretical support for the pivotal role of Aβ in triggering excitotoxicity by perturbing neuronal activity, glutamate, and calcium homeostasis. Furthermore, they suggest the idea of Alzheimer’s as a multistage disease, where transitions are driven by Aβ.
  • Micro-level Modeling, Pascal Helson: Long-term plasticity is much slower than the neural dynamics, and there is a need to understand how to bridge this timescale gap. Although this timescale difference has already been exploited using slow-fast analysis on models of spiking neural networks with STDP, the long-time behaviour of the limit model remains unclear. In particular, to the best of our knowledge, we do not know explicit conditions under which the synaptic weights converge without forcing them to be bounded. In order to tackle this problem, we opted for a simple (but rich enough to reproduce many biological features of neural networks) neural dynamics, the stochastic Wilson-Cowan model, in which we implemented a new stochastic STDP rule. Contrary to previous models, the synaptic weights are not bounded and, in line with biology, they are discrete. A rigorous timescale separation leads us to a Markov chain as the limit model for synaptic weights. The Markov chain transitions depend on the stationary distribution of the neural dynamics. We then study the long-time behaviour of this reduced model, obtaining conditions under which the synaptic weights converge or diverge. These conditions rely on the limit Markov chain transitions. Despite the difficulty of computing the stationary distribution of the neural dynamics, we obtain these transitions using the Laplace transform of that distribution. This method works in a quite broad framework, which we describe. We illustrate these conditions on the initial model, see Figure 3.5 in [1]. [1] P. Helson. Plasticity in networks of spiking neurons in interaction. Université Côte d’Azur, 2021. https://hal.archives-ouvertes.fr/tel-03200979/
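As a toy illustration of the associative-memory setting discussed in the Salih abstract above (this is not the presenter's code; the sparse binary patterns, the covariance-style Hebbian rule, and the normalization step are all placeholder assumptions), a local learning rule can store patterns whose retrieval then proceeds through threshold-linear (ReLU) dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N units, P stored patterns, sparseness a (all assumed).
N, P, a = 200, 3, 0.2
patterns = (rng.random((P, N)) < a).astype(float)

# Hebbian covariance rule: a local learning rule, as in the abstract's setting.
J = (patterns - a).T @ (patterns - a) / (N * a)
np.fill_diagonal(J, 0.0)  # no self-coupling

def retrieve(cue, steps=5):
    """Iterate threshold-linear (ReLU) retrieval dynamics from a cue."""
    r = cue.copy()
    for _ in range(steps):
        r = np.maximum(J @ r, 0.0)   # ReLU unit response
        if r.sum() > 0:
            r /= np.linalg.norm(r)   # keep activity bounded
    return r

# Cue: pattern 0 corrupted by flipping 10% of its bits.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] = 1.0 - cue[flip]

r = retrieve(cue)
overlaps = [float(r @ p / (np.linalg.norm(p) + 1e-12)) for p in patterns]
```

With these toy parameters the network sits far below capacity, so retrieval from the corrupted cue converges back toward the stored pattern; increasing the number of stored patterns, or changing their distribution, is what drives the retrieval/no-retrieval transitions whose order the abstract sets out to characterize.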

05 - Past & Future

Previous Years

CCNS 2020

The full program of talks at CCNSv1, including keynote addresses from Dr. Nancy Kopell, Dr. Dimitris Pinotsis, Dr. Cameron McIntyre, and Dr. Philip Corlett, can be found here.

Future Plans

CCNS will return! Check this page, or follow the organizing committee on Twitter (@RichCompNeuro, @cognemo_andreea, @neurodidact, and @MLankarany) for the latest updates.