Biological and Artificial Intelligence

Neuro 140 | Neuro 240

CLASS NOTES, SLIDES, READING MATERIAL, AND VIDEO

01/26/2021 Gabriel Kreiman Introduction to biological and artificial intelligence

Are you ready to create suprahuman intelligence? Do you want to understand how intelligent computations emerge from the neuronal orchestra in the brain? How can we leverage millions of years of evolution to build engineering systems that perform intelligent computations? How concerned should we be about the singularity? How is information represented and transformed in biological circuits? How can machines learn?

Slides Reading Video (ID required)
02/02/2021 Thomas Serre Deep networks: the good, the bad, and the ugly

In this lecture, I will critically assess recent progress toward achieving human-level visual intelligence. I will examine the implications of the successes and limitations of modern machine vision algorithms for biological vision. Highlighting our own work, I will discuss the prospect for neuroscience to inform the design of future artificial vision systems.

Slides Reading Video (ID required)
02/09/2021 Mackenzie Mathis Using AI in the laboratory

In this lecture, I will discuss deep learning tools that use transfer learning for efficient application in neuroscience. While neural networks outperform older algorithms on a myriad of tasks, they are often data-hungry, which makes them infeasible for small-scale applications. Networks that can learn from a small number of training examples are therefore highly desirable for many laboratory applications. Thanks to transfer learning, researchers can now rapidly build neural networks tailored to both behavioral and neural data analysis. However, there is still a gap to close in applying these networks to out-of-domain data. I will introduce the problem in relation to pose estimation, along with some resources on Google Colab for exploring these systems.
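To make the transfer-learning idea concrete, here is a minimal PyTorch sketch (an illustration of the general technique, not DeepLabCut or the speaker's code): an ImageNet-pretrained ResNet-18 backbone is frozen, and only a small head is trained to regress keypoint coordinates from a handful of labeled frames. The number of keypoints, the head, and the training loop are illustrative assumptions.

```python
# Minimal transfer-learning sketch (illustrative assumption, not DeepLabCut):
# reuse a pretrained ResNet-18 backbone and train only a small head to
# regress 2D keypoint coordinates from a few labeled video frames.
import torch
import torch.nn as nn
from torchvision import models

NUM_KEYPOINTS = 4  # hypothetical number of body parts to track

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()               # drop the ImageNet classifier
for p in backbone.parameters():
    p.requires_grad = False               # freeze the pretrained features

head = nn.Linear(512, 2 * NUM_KEYPOINTS)  # predict (x, y) per keypoint
model = nn.Sequential(backbone, head)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(frames, targets):
    """frames: (batch, 3, 224, 224); targets: (batch, 2 * NUM_KEYPOINTS)."""
    optimizer.zero_grad()
    loss = loss_fn(model(frames), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the head's few thousand parameters are trained, a few hundred labeled frames can suffice where training the full network from scratch would not.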

Slides Reading Video (ID required)
02/16/2021 Jan Drugowitsch The Bayesian brain: ideal observer models for perceptual decisions

The aim of both biological and artificial intelligence is to find efficient solutions to hard problems. Ideal observer models use this parallel to formulate theories of how our nervous system achieves certain tasks, based on how those tasks have been approached in artificial intelligence research. A basic tenet of ideal observer models is that the world we inhabit is noisy and ambiguous; the ideal way to deal with the resulting uncertainty is given by Bayesian decision theory. I will introduce Bayesian decision theory and show how it has been used to inquire about the cognitive and neural processes that underlie human and animal behavior. Particular focus will be placed on the study of perceptual decisions and the relation between the speed and accuracy of such decisions.
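As a concrete, hedged illustration of the speed-accuracy link (a textbook sequential probability ratio test, not necessarily the models covered in the lecture), the sketch below accumulates log-likelihood ratios from noisy samples and commits to a choice once a bound is crossed; raising the bound buys accuracy at the cost of decision time.

```python
# SPRT sketch: an ideal observer accumulates the log posterior odds from
# noisy Gaussian samples and decides once the odds hit a bound. The bound
# sets the speed-accuracy trade-off: higher bound = slower but more accurate.
import numpy as np

def sprt(mu=0.5, sigma=1.0, bound=2.0, rng=None):
    """Decide between H+ (mean +mu) and H- (mean -mu); world is H+ here."""
    rng = rng or np.random.default_rng()
    log_odds, t = 0.0, 0
    while abs(log_odds) < bound:
        x = rng.normal(mu, sigma)            # one noisy observation
        log_odds += 2 * mu * x / sigma**2    # log likelihood ratio of x
        t += 1
    return log_odds > 0, t                   # (correct choice?, n samples)

trials = [sprt(bound=2.0) for _ in range(1000)]
print("accuracy:", np.mean([ok for ok, _ in trials]))
print("mean decision time:", np.mean([t for _, t in trials]))
```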

Slides Reading Video (ID required)
02/23/2021 Demba Ba Interpretable AI in Neuroscience: Sparse Coding, Artificial Neural Networks, and the Brain

Sparse signal processing relies on the assumption that we can express data of interest as the superposition of a small number of elements from a typically very large set, or dictionary. As a guiding principle, sparsity plays an important role in the physical principles that govern many systems, the brain in particular. Neuroscientists have demonstrated, for instance, that sparse dictionary learning applied to natural images explains early visual processing in the mammalian brain. In computer science, it has become apparent in the last few years that sparsity also plays an important role in artificial neural networks (ANNs). The ReLU activation function, for instance, arises from an assumption of sparsity on the hidden layers of a neural network. The current picture points to an intimate link between sparsity, ANNs, and the principles behind systems in many scientific fields. In the first part of this talk, I will show how to use sparse dictionary learning to design, in a principled fashion, ANNs for solving unsupervised pattern discovery and source separation problems in neuroscience. This approach leads to interpretable architectures that have orders of magnitude fewer parameters than black-box ANNs and that can more efficiently leverage the speed and parallelism of GPUs for scalability. In the second part, I will introduce a deep generalization of a popular shallow sparse coding model from vision that makes predictions about the principles of hierarchical sensory processing in the brain. I will make the case that sparse generative models of data, along with the deep ReLU networks associated with them, may provide a framework that uses deep learning, in conjunction with experiment, to elucidate the principles of hierarchical sensory processing.
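As a hedged sketch of the sparsity-ReLU link mentioned above (my illustration, not the speaker's code), the snippet below runs iterative soft thresholding (ISTA) for sparse coding; with non-negative codes, the shrinkage step is exactly a shifted ReLU.

```python
# ISTA sketch for sparse coding: find a sparse non-negative code z with
# x ~= D @ z by alternating a gradient step and a shrinkage step. With
# non-negative codes, shrinkage max(v - lam, 0) is a shifted ReLU -- the
# link between sparsity assumptions and ReLU networks.
import numpy as np

def ista(x, D, lam=0.1, n_iter=200):
    lr = 1.0 / np.linalg.norm(D, 2) ** 2      # step = 1 / Lipschitz constant
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        v = z - lr * D.T @ (D @ z - x)        # gradient step on ||Dz - x||^2
        z = np.maximum(v - lr * lam, 0.0)     # soft threshold = shifted ReLU
    return z

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)                # unit-norm dictionary atoms
z_true = np.zeros(50)
z_true[[3, 17]] = [1.0, 0.5]
x = D @ z_true
print(np.nonzero(ista(x, D) > 1e-2)[0])       # approximately recovers {3, 17}
```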

Slides Reading
03/02/2021 Tomer Ullman The development of intuitive physics and intuitive psychology

The central metaphor of cognitive science is that of the mind as a computer, but what sort of program is the mind running, and how does it construct this program? From an evolutionary perspective, it would make sense to build in certain primitives and functions that allow the mind to get an 'early start' on understanding the world. The most helpful primitives would be those that generalize across many scenarios, such as an understanding of people and things, agents and objects, psychology and physics. And indeed, we can see early evidence for an understanding of physics and psychology even in young children. In these two classes, I will briefly review the evidence for an early understanding of physics and psychology, what representations could account for that understanding, and how they may develop over time into adult representations.

Slides Reading Video (ID required, 2019 version)
03/09/2021 Cengiz Pehlevan Learning through synaptic plasticity

Synaptic plasticity is widely accepted to be the mechanism behind learning in the brain. We will start by reviewing experimental evidence supporting this view. Then, we will introduce basic mathematical models of synaptic plasticity and demonstrate how neural networks can learn to perform complex computational tasks with biologically plausible learning rules.
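For concreteness, here is a minimal sketch of one classic model of this kind (Oja's rule, chosen as an assumption on my part, not necessarily the lecture's material): a Hebbian update with a normalizing decay term that drives a single linear neuron's weights toward the first principal component of its inputs.

```python
# Oja's rule, a stabilized Hebbian learning rule: dw = lr * y * (x - y * w),
# with output y = w . x. The weight vector of a single linear neuron
# converges to the first principal component of the input distribution.
import numpy as np

rng = np.random.default_rng(1)
C = np.array([[2.0, 1.5],                 # input covariance; top eigenvector
              [1.5, 2.0]])                # is (1, 1) / sqrt(2)
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

w = rng.normal(size=2)
lr = 0.01
for x in X:
    y = w @ x                             # postsynaptic activity
    w += lr * y * (x - y * w)             # Hebbian growth + normalizing decay

w /= np.linalg.norm(w)
print("learned weights:", np.round(w, 3))  # ~ +/-(0.707, 0.707)
```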

Slides Reading Video (ID required)
03/16/2021 No class (Spring Break)
03/23/2021 Haim Sompolinsky Introduction to Deep Networks in Brains and Machines Slides Reading Video (ID required)
03/30/2021 Sam Gershman A unifying probabilistic view of reinforcement learning

Two important ideas about learning have emerged in recent decades: (1) animals are Bayesian learners, tracking their uncertainty about associations; and (2) animals acquire long-term reward predictions through reinforcement learning. Both of these ideas are normative, in the sense that they are derived from rational design principles. They are also descriptive, capturing a wide range of empirical phenomena that troubled earlier theories. I will describe a unifying framework encompassing Bayesian and reinforcement learning theories of associative learning. Each perspective captures a different aspect of learning, and their synthesis offers insight into phenomena that neither perspective can explain on its own. The synthesis also helps resolve some puzzles about the role of dopamine.
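One concrete way to see the synthesis, sketched below under my own simplifying assumptions (a Kalman-filter model of conditioning, in the spirit of this line of work but not necessarily the lecture's exact formulation): associative weights become latent variables tracked by a Kalman filter, and the Rescorla-Wagner delta rule reappears with an uncertainty-dependent learning rate.

```python
# Kalman-filter sketch of associative learning: the learner tracks a
# posterior over associative weights w (mean m, covariance S). The update
# is a delta rule whose learning rate (the Kalman gain) scales with
# uncertainty; Rescorla-Wagner is the fixed-gain special case.
import numpy as np

def conditioning_trial(m, S, x, r, tau2=0.01, sigma2=0.5):
    """One trial: stimulus vector x, reward r; returns updated (m, S)."""
    S = S + tau2 * np.eye(len(m))          # weights may drift between trials
    gain = S @ x / (x @ S @ x + sigma2)    # uncertainty-scaled learning rate
    m = m + gain * (r - x @ m)             # delta rule with adaptive gain
    S = S - np.outer(gain, x @ S)          # uncertainty shrinks with evidence
    return m, S

m, S = np.zeros(2), np.eye(2)
for _ in range(20):                        # stimulus A alone, always rewarded
    m, S = conditioning_trial(m, S, np.array([1.0, 0.0]), 1.0)
print("associative weights:", np.round(m, 2))
```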

Slides Reading Video (ID required)
04/06/2021 Lucas Janson Rigorously identifying important variables with machine learning

Machine learning provides state-of-the-art prediction in many fields, including biology, but insights derived from the fitted models do not come with any statistical guarantees. I will discuss my work on statistical methods that can endow any machine learning algorithm with rigorous statistical guarantees (e.g., FDR control, p-values) for selecting important (in some cases, causal) variables.
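As a hedged sketch of one route to such guarantees (a conditional randomization test with an assumed known covariate distribution; the lecture may focus on related tools such as model-X knockoffs), the snippet below resamples one feature from its conditional distribution, recomputes a model-based importance statistic, and reads off a finite-sample p-value that is valid for any machine learning model.

```python
# Conditional randomization test (CRT) sketch: if the distribution of
# feature j given the others is known (assumed independent standard normal
# here for simplicity), resampling it yields a valid p-value for
# "X_j is unimportant given the rest", whatever ML model computes the statistic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def crt_pvalue(X, y, j, n_resamples=99, rng=None):
    rng = rng or np.random.default_rng()
    def importance(X):
        rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
        return rf.feature_importances_[j]
    t_obs = importance(X)
    t_null = []
    for _ in range(n_resamples):
        Xr = X.copy()
        Xr[:, j] = rng.standard_normal(len(X))   # resample X_j | X_-j (known here)
        t_null.append(importance(Xr))
    return (1 + sum(t >= t_obs for t in t_null)) / (1 + n_resamples)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = 2 * X[:, 0] + rng.standard_normal(200)       # only feature 0 matters
print("p-value, feature 0:", crt_pvalue(X, y, 0, rng=rng))  # small
print("p-value, feature 3:", crt_pvalue(X, y, 3, rng=rng))  # large
```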

Slides Reading
04/13/2021 Andrei Barbu Language and vision enable flexible intelligence

We will present a research program that ties together action, language, and perception, with the long-term goal of understanding the flexibility of animal and human intelligence. This broader view of intelligence allows us to revisit old problems in a new light; for example, we demonstrate how the grammatical structure of language can be acquired through visual observation. We will look at how robots can help shed light on the structure of semantic and episodic memory through how they learn to execute natural-language commands in the context of past, present, and future activities. In particular, we will show how the structure of language can shape inference and reasoning for robots, and we will demonstrate a new end-to-end model that combines the strengths of deep learning with symbolic structures. We will also discuss the current state of science in AI, the real vs. predicted performance of object detectors, and how we should collect datasets and evaluate performance in the context of an upcoming large-scale object recognition benchmark.

Slides Reading Video (ID required)
04/20/2021 Lakshminarayanan Mahadevan Swarm intelligence Slides Reading Video (ID required)
04/27/2021 Aude Oliva TBD Slides Reading Video (ID required)
