Research Projects

SIGMA LAB

Sequential Information Gathering in Machines and Animals


Members of the lab are pursuing research projects on attention and gaze control, real-world scene perception, object and face recognition, spatial navigation, language comprehension and production, and the integration of vision, language, and action in complex environments. Methods include psychophysical and behavioral experiments in humans and insects, formal mathematical modeling and computer simulation, and the development of algorithms for sequential behavior in robots.

Two research domains are highlighted below to give a more concrete sense of the research in the SIGMA Lab.

Control of Gaze in Foveated Vision Systems

In virtually all vertebrates and invertebrates with well-developed eyes, the visual field is organized into a high-resolution center (the fovea) and a lower-resolution periphery (some species have multiple foveas). Foveal vision has certain computational advantages, chief among them that the brain is spared from having to handle high-quality information over the entire visual field and can devote its computational resources to restricted objects or locations. The same advantages are being sought in machine vision systems through the development of foveated cameras, in contrast to traditional systems, which use a camera with uniform resolution over the entire scene and therefore face the problem of deciding which scene elements are important and which are not.

The price of a fovea, however, is that the brain (or the silicon chip) must control where it is aimed, whether to inspect potentially interesting targets or to learn the relationships among objects in a scene. Animals do this by rotating their eyes (or their heads or bodies) to sweep the fovea over the scene. Controlling the movement of the fovea is a problem of sequential decision-making that humans and other organisms solve automatically and (usually) unconsciously. It remains unclear how we decide where to look next, how long to look at particular elements of a scene, and when to look back at objects that we have already fixated. It is equally unclear how to program a computer to control the gaze of a foveated camera.

We are studying human eye movements with computer-controlled eye-tracking systems that allow us to manipulate visual scenes (displayed on a computer screen) while monitoring where, and for how long, people fix their gaze. We have also been studying artificial gaze control using a foveated vision system interfaced with a pan-tilt camera. Finally, a new project examines how bottom-up and top-down information interact in directing gaze during real-world scene perception, in both human and artificial vision systems.
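
To make the sequential nature of this control problem concrete, the sketch below (in Python) implements one conventional, purely bottom-up scheme: fixate the most salient location, then inhibit that region so that gaze moves on to the next candidate. The saliency measure, parameter values, and function names are illustrative assumptions, not the lab's gaze-control system.

    # Toy illustration of sequential gaze control for a foveated sensor.
    # A bottom-up "saliency" map is computed from local contrast, gaze moves
    # to the most salient location, and inhibition of return discourages
    # immediate refixation of the same region.

    import numpy as np

    def saliency(image):
        """Crude bottom-up saliency: absolute deviation from the mean intensity."""
        return np.abs(image - image.mean())

    def fovea_mask(shape, center, radius):
        """Boolean mask that is True inside the circular fovea."""
        rows, cols = np.indices(shape)
        return (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2

    def scan(image, n_fixations=5, fovea_radius=3, ior_strength=0.7):
        """Generate fixations by winner-take-all selection on the saliency map,
        with multiplicative inhibition of return at fixated locations."""
        sal = saliency(image)
        inhibition = np.ones_like(sal)
        fixations = []
        for _ in range(n_fixations):
            target = np.unravel_index(np.argmax(sal * inhibition), sal.shape)
            fixations.append(target)
            # Suppress the just-fixated region so that gaze moves on.
            inhibition[fovea_mask(sal.shape, target, fovea_radius)] *= 1.0 - ior_strength
        return fixations

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        scene = rng.random((32, 32))      # stand-in for a camera frame
        scene[8:12, 20:24] += 2.0         # a conspicuous "object"
        print(scan(scene))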

Spatial Navigation

Mobile organisms, and mobile robots, often face the task of setting courses among widely separated places in the environment. Because an animal's goal may not currently be in view, it cannot merely use the goal as a beacon. Instead it must choose a response to the current visual scene that will lead it toward the goal, and its eventual reward. Depending upon the distance of the goal and the structure of the environment, this task may entail making a sequence of choices relative to each of several visual scenes along the way. As we know from observations of animals in natural environments and artificial mazes, many species can learn to travel complex routes entailing scores of behavioral decisions. Furthermore, they can learn to do this on their own, through their own exploration of the environment. Robots tend to be very bad at analogous tasks. As a model system for understanding the control of the sequential decision making that underlies navigation, we are studying honey bees as they learn to find food in simple maze-like environments. New studies currently underway extend the behavioral work to human navigation in large-scale environments. We have also used a mobile robot to develop and test new algorithms that mimic the flexibility of insect navigation.
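
As a minimal illustration of the kind of sequential decision problem involved, the sketch below uses tabular Q-learning, a standard trial-and-error learning scheme, to acquire a multi-step route to a goal that cannot be sensed from the start. The gridworld, reward values, and parameters are assumptions made for illustration and are not meant to model the bee, human, or robot studies themselves.

    # Minimal tabular Q-learning sketch of route learning in a small gridworld.
    # Purely through its own exploration, the agent acquires a multi-step route
    # from a start cell to a goal cell that it cannot sense from a distance.

    import random

    WIDTH, HEIGHT = 6, 4
    ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]      # right, left, up, down
    START, GOAL = (0, 0), (5, 3)

    def step(state, action):
        """Apply an action, keeping the agent inside the grid."""
        x = min(max(state[0] + action[0], 0), WIDTH - 1)
        y = min(max(state[1] + action[1], 0), HEIGHT - 1)
        new_state = (x, y)
        reward = 1.0 if new_state == GOAL else -0.01  # small cost per move
        return new_state, reward

    def train(episodes=500, alpha=0.5, gamma=0.95, epsilon=0.1):
        """Learn action values by trial and error (epsilon-greedy exploration)."""
        q = {}                                        # (state, action) -> value
        for _ in range(episodes):
            state = START
            while state != GOAL:
                if random.random() < epsilon:
                    action = random.choice(ACTIONS)
                else:
                    action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
                new_state, reward = step(state, action)
                best_next = max(q.get((new_state, a), 0.0) for a in ACTIONS)
                old = q.get((state, action), 0.0)
                q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
                state = new_state
        return q

    def greedy_route(q):
        """Read out the learned route by always taking the best-valued action."""
        state, route = START, [START]
        while state != GOAL and len(route) < WIDTH * HEIGHT:
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            state, _ = step(state, action)
            route.append(state)
        return route

    if __name__ == "__main__":
        print(greedy_route(train()))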

Navigation and Spatial Cognition in Honey Bees

Most animals routinely face the challenge of orienting themselves in space, and studies of spatial orientation have long played an important role in psychology and behavioral biology. Honey bees are especially intriguing because they navigate over enormous distances with astonishing flexibility, though equipped with vision far feebler and brains far smaller than ours. Foraging honey bees travel up to 10 km from their nest in search of food, and so they frequently face the problem of steering a course to a familiar goal (e.g., a feeding site or the nest) that is not directly in view. Solving this problem requires the animal to use environmental features detectable at the starting point and along the way, which in turn requires that it be informed of the spatial relationship between these features and the route to the unseen goal. For insects and many vertebrates, landmarks and celestial cues (the sun and patterns of polarized sky light) provide the most important sources of navigational information. A major focus of SIGMA Lab research is on what bees learn about these two navigational references and how they learn it. The work has mainly addressed questions about the sensory and learning mechanisms underlying the behavior. However, the results have also led to new perspectives on the adaptive design of these mechanisms.
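
As a concrete, highly simplified illustration of steering by a celestial reference, the sketch below implements a time-compensated sun compass: a course is memorized as an angle relative to the sun's azimuth, and because the sun moves across the sky during the day, recalling the course later requires correcting for that movement. The constant-rate model of the sun's motion and all numerical values are assumptions made for illustration, not findings of the lab.

    # Toy illustration of a time-compensated sun compass. A course is stored
    # as an angle relative to the sun's azimuth; recalling it later in the day
    # requires correcting for how far the sun has moved in the meantime.
    # The constant-rate sun model and all numbers are illustrative only.

    def sun_azimuth(hour, degrees_per_hour=15.0, azimuth_at_noon=180.0):
        """Very crude ephemeris: the sun's azimuth drifts at a constant rate."""
        return (azimuth_at_noon + degrees_per_hour * (hour - 12.0)) % 360.0

    def memorize_course(true_heading, hour):
        """Store the course as an angle relative to the sun's current azimuth."""
        return (true_heading - sun_azimuth(hour)) % 360.0

    def steer(sun_relative_angle, learned_hour, current_hour):
        """Recover the true heading later in the day: shift the remembered
        sun-relative angle by how far the sun has moved, then add back the
        sun's current azimuth."""
        drift = sun_azimuth(current_hour) - sun_azimuth(learned_hour)
        return (sun_relative_angle - drift + sun_azimuth(current_hour)) % 360.0

    if __name__ == "__main__":
        remembered = memorize_course(true_heading=90.0, hour=9)     # learned at 9:00
        print(steer(remembered, learned_hour=9, current_hour=15))   # still 90.0 at 15:00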

