The Foundational Model of Anatomy (FMA) is a comprehensive, symbolic representation (ontology) of anatomy ranging from the macroscopic to molecular level. This ontology provides the logical organization for the rest of our data.
Foundational Model Explorer (FME) - A web browser for the FMA. Todd Detwiler, 2002-2005.
EmilyLite - A relation-centric interface for querying the FMA. (Requires Java Web Start.) Allows you to express queries using a graphical interface. Cameron Tom, Eugene Lam, Emily Chung, Todd Detwiler, 2003-2004.
OQAFMA - A database querying tool that provides efficient, flexible access to the FMA. It takes StruQL queries as input and returns XML-formatted results, and is designed mostly for other computer programs to access the FMA. Peter Mork, 2002-2003.
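As a rough illustration of the kind of StruQL query OQAFMA accepts, the sketch below asks for the names of parts of the heart. The exact relation names and path syntax here are assumptions and should be checked against the OQAFMA documentation:

```
WHERE X->":NAME"->"Heart",
      X->"part"->Y,
      Y->":NAME"->PartName
CREATE PartOf(PartName)
```

OQAFMA would return the matched `PartName` bindings as an XML-formatted result, which a calling program can then parse.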
2-D annotated images
Images from multiple sources that are often annotated by drawing regions of interest around structures. The regions are given names from the FMA. Various web-based applications organize and access these images; by keeping the images separate from the applications that use them, the images can be re-used.

Interactive Atlases - Atlases of 2-D annotated images, including many based on 3-D anatomic models. These are web-based and publicly accessible. Widely used in anatomy teaching, with on the order of 60,000 hits per day. Content includes the Thorax, the Brain, and several other regions. Scott Bradley, Kraig Eno, Jeff Prothero. Jim Brinkley, David Conley, John Sundsten, Kate Mulligan, Peter Ratiu, Cornelius Rosse.
Knowledge-based image retrieval - Use of the foundational model to retrieve annotated images from the thorax atlas. Jakob Skott and Therese Storheden, visiting CS MS students from Sweden, Fall-Winter 1999-2000. Tutorial for use of the image retrieval applet. (Requires Java Web Start.)
Digital Anatomist Image Collection Manager - A web-based tool for anatomy teachers and others to upload images, possibly with annotations, to a central repository. The uploaded images are arranged in collections and subcollections, and indexed via various anatomically-relevant keywords. Collections may be viewed in several modes: 1) as an online slide show, 2) as database objects whose metadata properties may be edited, and 3) as interactive atlases with very similar behavior to the interactive atlases noted above. Atlas mode also adds the ability to click on an image and either see the resulting anatomical name in the FME or initiate a search for additional images annotated with that name. The goal is to combine image management with the interactive atlases so that anyone with the proper permissions can create online slide shows and atlases. Rex Jakobovits, Sal Ruiz, Kevin Hinshaw, 2000-2005.
3-D models and scenes
Using various in-house software packages we have created a set of 3-D graphical models of anatomical objects from serial image sections. The models are broken into "primitives" corresponding to structure parts in the FMA. The primitives can be thought of as letters in an alphabet: the letters can be composed into any number of "sentences", or scenes, whose grammar is defined by the FMA.
Dynamic Scene Generator (DSG). This application allows a web user to create a scene by asking the FMA for various structures and their related structures. The DSG retrieves the corresponding model primitives from a 3-D model database, assembles the scene, colors the objects according to their type, renders the scene, and sends a snapshot back to the web user, who can then change the scene or view it from a different angle. A tutorial shows how to build an example scene. The original scene generator was developed by Ben Wong while an undergraduate CS major at the UW. The project was continued by Evan Albright as part of his EE MS project at UW.
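The retrieve-assemble-color step of this pipeline can be sketched as follows. All names here (`Primitive`, `assemble_scene`, the color table) are illustrative, not the DSG's real API:

```python
# Hypothetical sketch of the DSG scene-assembly step described above.
from dataclasses import dataclass

@dataclass
class Primitive:
    """One 3-D model primitive, corresponding to a structure part in the FMA."""
    name: str
    structure_type: str  # an assumed FMA-derived type, used to pick a color

# Assumed fixed color scheme keyed by structure type.
COLORS = {"organ": "red", "vessel": "blue", "nerve": "yellow"}

def assemble_scene(primitives):
    """Pair each retrieved primitive with a color, ready for rendering."""
    return [(p.name, COLORS.get(p.structure_type, "gray")) for p in primitives]

# A user asks for the heart and a related artery; the DSG would look these
# up in the FMA, fetch the matching primitives, then render a snapshot.
scene = assemble_scene([
    Primitive("Heart", "organ"),
    Primitive("Left coronary artery", "vessel"),
])
print(scene)  # → [('Heart', 'red'), ('Left coronary artery', 'blue')]
```

The real DSG then renders this assembled, colored scene server-side and returns a snapshot image to the browser.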
Biolucida. A next-generation Dynamic Scene Generator, written in Java and XJ3D, which generates dynamic VRML scenes for anatomy lessons. Biolucida is planned as a visualizer for integrated data mapped onto anatomical scenes. Wayne Warren.
Much of our recent work involves mapping data to the structural information represented in the FMA, 2-D images, and 3-D models. Part of this effort includes the creation of web-based local data management systems that can be part of larger information systems. Most of these were or are being developed as part of our Integrated Brain Project (IBP) or Biomedical Information Science and Technology Initiative (BISTI) projects.
The IBP Cortical Stimulation Map (CSM) database records surgical and image data from patients undergoing surgery for epilepsy. The data (which contain no patient-identifiable information) include surgical cortical stimulation map data, functional and structural MRI, and demographic information needed for studies of language organization in the brain. This system also controls the workflow of separate applications used for mapping these data to a 3-D model of the patient's brain obtained from MRI, and annotating the data with names from the FMA. The CSM database was written in WIRM (Web Interfacing Repository Manager), a toolkit developed by Rex Jakobovits while working on the IBP.
The Eyelab database is another WIRM-based application, this time for managing mouse slit-lamp images from John Clark's lab, which is studying cataract formation. Christie Fong.
Although WIRM is powerful, it requires significant software development to create a new lab management system. CELO is a tool built on top of WIRM that makes it easier for an individual biologist to create a web-based lab management system without programming help - sort of like Catalyst for biologists. Christie Fong.
SeedPod is a model-driven environment for building lab systems. A user employs Protege, a rich knowledge-modeling environment, to create a declarative model of the data domain that also includes the desired appearance and behavior of a web application. A separate program transforms this model into a Postgres relational database, after which a web application interprets the transformed model to automatically create a web interface. Hao Li.
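The model-to-database transformation can be sketched in miniature. The dictionary model format below is invented for illustration; SeedPod's actual Protege model is far richer and also captures interface appearance and behavior:

```python
# Toy sketch of SeedPod's idea: a declarative model of the data domain
# is transformed into relational DDL for Postgres.

# Assumed toy model: entity name -> {column name: SQL type}.
MODEL = {
    "Specimen": {"label": "text", "collected_on": "date"},
    "Image": {"path": "text", "specimen_id": "integer"},
}

def to_ddl(model):
    """Emit one CREATE TABLE statement per modeled entity."""
    stmts = []
    for table, cols in model.items():
        col_defs = ", ".join(f"{name} {sqltype}" for name, sqltype in cols.items())
        stmts.append(
            f"CREATE TABLE {table.lower()} (id serial PRIMARY KEY, {col_defs});"
        )
    return stmts

for stmt in to_ddl(MODEL):
    print(stmt)
```

In SeedPod the generated schema is then interpreted alongside the model at runtime, so the web interface stays in sync with the declarative description.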
DXBrain. DXBrain is a lightweight distributed data integration system originally constructed for querying the UW Human Brain Project data network. DXBrain wraps a number of data sources, of varying data models and storage formats, in web services that accept XQueries as input and return XML results. Another component of the system, the Distributed XQuery Processor (DXQP), provides an XQuery library that allows query snippets to be targeted to specific data-source wrappers: DXQP takes XQueries as input, breaks them down into the underlying targeted query snippets, sends these queries to the appropriate source wrappers, and then combines the source results into a unified XML result document. DXBrain also provides a user interface for constructing, saving, and executing queries and for viewing query results. Results can be viewed as XML, HTML, or CSV, or by either of two visualization methods: superimposed on a 2-D annotated brain map, or within the MindSeer 3-D brain visualizer. DXBrain uses both the distributed query processor DXQP and WIX, the web-service wrapper for XML data sources, as system components.
DXQP. As mentioned above, the Distributed XQuery Processor (DXQP) provides an XQuery library that allows query snippets to be targeted to specific data-source wrappers. DXQP takes XQueries as input, breaks them down into the underlying targeted query snippets, sends these queries to the appropriate source wrappers, and then combines the source results into a unified XML result document. DXQP also facilitates local XQuery processing, supporting operations such as joining source result elements, filtering query results, and performing simple in-query analysis.
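The scatter/gather step DXQP performs can be sketched as follows. The wrapper functions here are stand-ins that return canned XML; the real system sends XQuery snippets to remote web-service wrappers:

```python
# Toy sketch of DXQP-style scatter/gather: query snippets go to
# per-source wrappers, and the XML fragments they return are merged
# into one unified result document.
import xml.etree.ElementTree as ET

def patient_wrapper(snippet):
    # Stand-in for a web-service wrapper around a patient data source.
    return "<patients><patient id='1'/></patients>"

def image_wrapper(snippet):
    # Stand-in for a wrapper around an image data source.
    return "<images><image id='7'/></images>"

def run_distributed(targeted_snippets):
    """Send each snippet to its target wrapper, merge results under one root."""
    root = ET.Element("result")
    for wrapper, snippet in targeted_snippets:
        root.append(ET.fromstring(wrapper(snippet)))
    return ET.tostring(root, encoding="unicode")

combined = run_distributed([
    (patient_wrapper, "//patient"),
    (image_wrapper, "//image"),
])
print(combined)
```

Local post-processing (joins, filters, simple analysis) would then run over this combined document before it is returned to the caller.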
WIX. WIX (Web Interface for XQuery) is a simple service that allows users to submit an XQuery to query an XML document. The servlet interface for submitting XQueries is simple and efficient and can easily be called from any programming language, other instances of XQuery, or an HTML form.
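A client call to such a servlet can be sketched as an ordinary form POST. The endpoint URL and the parameter name `xquery` below are assumptions for illustration, not WIX's documented interface:

```python
# Sketch of calling a WIX-style servlet from Python, submitting an
# XQuery the same way an HTML form would.
from urllib.parse import urlencode
from urllib.request import Request

def build_wix_request(endpoint, xquery):
    """Encode an XQuery as a form-style POST body for the servlet."""
    body = urlencode({"xquery": xquery}).encode("ascii")  # parameter name assumed
    return Request(endpoint, data=body, method="POST")

req = build_wix_request(
    "http://example.org/wix",  # hypothetical endpoint
    'for $s in doc("atlas.xml")//structure return $s/name',
)
print(req.get_method(), req.full_url)
```

Because the interface is just an HTTP request, the same call is easy to make from any programming language, from within another XQuery, or from an HTML form, as the entry above notes.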
Visualization of Integrated Data
Primarily as part of our IBP, we have developed two web-based tools for visualizing data that have been mapped to a structural framework.
The Brain Visualizer applet uses the same underlying technology as the Dynamic Scene Generator, but in this case the structural 3-D models are created from MR images of the patients whose data are stored in the CSM database. Functional data mapped to the 3-D models primarily include fMRI and Cortical Stimulation Mapping. Andrew Poliakov.
MindSeer, also called BrainJ3D, is a next-generation brain visualizer written in Java3D with hardware acceleration. It can run in both standalone and client-server modes. Input data are described by an XML workspace file that will eventually be generated from the output of the XBrain application; currently this file is generated manually. The demo shows the application running in client-server mode. (Requires Java Web Start.) Eider Moore.