Abstract
A goal of the University of Washington Brain Project is to develop software tools for processing, integrating, and visualizing multimodality language data obtained at the time of neurosurgery, both for surgical planning and for the study of language organization in the brain. Data from a single patient consist of four magnetic resonance-based image volumes showing anatomy, veins, arteries, and functional activation (fMRI). The data also include the locations, on the exposed cortical surface, of sites that were electrically stimulated to map language function. These five sources are mapped to a common MR-based neuroanatomical model and then visualized to gain a qualitative appreciation of their relationships, prior to quantitative analysis. These procedures are described and illustrated, with emphasis on the visualization of fMRI activation, which may lie deep in the brain, relative to the surface-based stimulation sites.

The advent of non-invasive functional imaging techniques allows language processing to be studied in living subjects. Our goal is to develop methods for integrating these and other forms of language data, and to organize them in an information system that can be federated with other Brain Project sites. Data integration involves two major steps: (1) integration of multimodality data from a single patient, the subject of this paper, and (2) integration of data from multiple patients, a much more difficult problem that is a major research objective of the entire Human Brain Project.
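The mapping of each modality into the common neuroanatomical model can be viewed as applying a per-modality registration transform so that all five data sources share one coordinate frame. The following is a minimal sketch of that idea, not the paper's actual implementation: the 4x4 affine matrix, the function name `to_anatomy`, and the voxel coordinates are all hypothetical placeholders, and a real transform would be estimated by an image-registration procedure.

```python
import numpy as np

# Hypothetical 4x4 affine that carries one modality's voxel grid
# (e.g., the fMRI volume) into the common MR-based anatomical space.
# In practice this matrix would come from a registration step.
fmri_to_anatomy = np.array([
    [ 0.98,  0.02, 0.00,  3.5],
    [-0.02,  0.99, 0.01, -1.2],
    [ 0.00, -0.01, 0.97,  2.0],
    [ 0.00,  0.00, 0.00,  1.0],
])

def to_anatomy(affine, voxel):
    """Map a (x, y, z) voxel coordinate into the common anatomical
    space using homogeneous coordinates."""
    x, y, z = voxel
    return (affine @ np.array([x, y, z, 1.0]))[:3]

# A hypothetical fMRI activation peak expressed in the shared
# anatomical frame, where it can be compared with cortical
# stimulation sites mapped the same way.
print(to_anatomy(fmri_to_anatomy, (34, 52, 18)))
```

Once every modality is expressed in this shared frame, deep fMRI activation and surface stimulation sites can be rendered together and their spatial relationships inspected qualitatively, as the paper describes.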