Visual Computing

Our research in visual computing lies at the intersection of visualization, computer graphics, and computer vision. It spans a wide range of topics, including biomedical visualization, image and video analysis, 3D fabrication, and data science.

Our Research

Our goal is to combine interactive computer systems with the perceptual and cognitive power of human observers to solve practical problems in science and engineering. We provide visual analysis tools and methods that help scientists and researchers process and understand large, multi-dimensional data sets in domains such as neuroscience, genomics, systems biology, astronomy, and medicine. We also develop data-driven approaches for the acquisition, modeling, visualization, and fabrication of complex objects.

Contact

Michaela Kapp
Administrative Manager of Research

33 Oxford Street
Maxwell Dworkin 143
Cambridge, MA 02138
Email: michaela@seas.harvard.edu
Office Phone: (617) 496-0964

Our Lab

Our group belongs to Harvard's School of Engineering and Applied Sciences and the Center for Brain Science. We are located in the Maxwell Dworkin Building (33 Oxford St.) as well as the Northwest Laboratory (52 Oxford St.) on Harvard's main campus in Cambridge, Massachusetts.

Recent Publications

Data-Driven Guides: Supporting Expressive Design for Information Graphics
Kim NW, Schweickart E, Liu Z, Dontcheva M, Li W, Popovic J, Pfister H. Data-Driven Guides: Supporting Expressive Design for Information Graphics. IEEE Transactions on Visualization and Computer Graphics (InfoVis’16) 2017;PP(99):1-1.

In recent years, there has been a growing need for communicating complex data in an accessible graphical form. Existing visualization creation tools support automatic visual encoding but lack flexibility for creating custom designs; on the other hand, freeform illustration tools require manual visual encoding, making the design process time-consuming and error-prone. In this paper, we present Data-Driven Guides (DDG), a technique for designing expressive information graphics in a graphic design environment. Instead of being confined by predefined templates or marks, designers can generate guides from data and use the guides to draw, place, and measure custom shapes. We provide guides to encode data using three fundamental visual encoding channels: length, area, and position. Users can combine more than one guide to construct complex visual structures and map these structures to data. When the underlying data is changed, we use a deformation technique to transform custom shapes using the guides as the backbone of the shapes. Our evaluation shows that data-driven guides allow users to create expressive and more accurate custom data-driven graphics.
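
The core idea of generating guides from a data encoding can be illustrated with a small sketch. The snippet below is a hypothetical, minimal take on the length channel only (names such as LengthGuide and length_guides are ours, not the DDG API): each data value is mapped to an on-canvas length that a designer's custom shape should match.

```python
from dataclasses import dataclass

@dataclass
class LengthGuide:
    label: str        # data item the guide represents
    length_px: float  # on-canvas length a custom shape should span

def length_guides(data, max_px=200.0):
    """Map each value to a guide length proportional to the largest value."""
    peak = max(data.values())
    return [LengthGuide(label, value / peak * max_px)
            for label, value in data.items()]

if __name__ == "__main__":
    sales = {"Q1": 120, "Q2": 180, "Q3": 90}
    for guide in length_guides(sales):
        print(f"{guide.label}: draw a shape {guide.length_px:.0f}px long")
```

Because the guides are recomputed from the data, redrawing them when values change is what makes the shape deformation step in the paper possible.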

Screenit: Visual Analysis of Cellular Screens
Dinkla K, Strobelt H, Genest B, Reiling S, Borowsky M, Pfister H. Screenit: Visual Analysis of Cellular Screens. IEEE Transactions on Visualization and Computer Graphics (InfoVis’16) 2017;PP(99):1-1.

High-throughput and high-content screening enables large scale, cost-effective experiments in which cell cultures are exposed to a wide spectrum of drugs. The resulting multivariate data sets have a large but shallow hierarchical structure. The deepest level of this structure describes cells in terms of numeric features that are derived from image data. The subsequent level describes enveloping cell cultures in terms of imposed experiment conditions (exposure to drugs). We present Screenit, a visual analysis approach designed in close collaboration with screening experts. Screenit enables the navigation and analysis of multivariate data at multiple hierarchy levels and at multiple levels of detail. Screenit integrates the interactive modeling of cell physical states (phenotypes) and the effects of drugs on cell cultures. In addition, quality control is enabled via the detection of anomalies that indicate low-quality data, while providing an interface that is designed to match workflows of screening experts. We demonstrate analyses for a real-world data set, CellMorph, with 6 million cells across 20,000 cell cultures.
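
To make the "large but shallow" hierarchy concrete, here is a minimal sketch (hypothetical structure and names, not Screenit's data model): plates hold wells (cell cultures under one experiment condition), wells hold per-cell feature vectors derived from images, and aggregating cell features per well gives one coarser level of detail.

```python
from statistics import mean

# plate -> well (cell culture) -> cells; each cell is a dict of image-derived features
screen = {
    "plate_1": {
        "well_A1": [{"area": 310.0, "intensity": 0.82},
                    {"area": 295.0, "intensity": 0.79}],
        "well_A2": [{"area": 410.0, "intensity": 0.55}],
    },
}

def well_summary(cells, feature):
    """Aggregate one cell-level feature to the well level (one step up the hierarchy)."""
    return mean(cell[feature] for cell in cells)

for plate, wells in screen.items():
    for well, cells in wells.items():
        print(f"{plate}/{well}: mean area = {well_summary(cells, 'area'):.1f}")
```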

booc.io: An Education System with Hierarchical Concept Maps
Schwab M, Strobelt H, Tompkin J, Fredericks C, Huff C, Higgins D, Strezhnev A, Komisarchik M, King G, Pfister H. booc.io: An Education System with Hierarchical Concept Maps. IEEE Transactions on Visualization and Computer Graphics (InfoVis’16) 2017;PP(99):1-1.

Information hierarchies are difficult to express when real-world space or time constraints force traversing the hierarchy in linear presentations, such as in educational books and classroom courses. We present booc.io, which allows linear and non-linear presentation and navigation of educational concepts and material. To support a breadth of material for each concept, booc.io is Web-based, which allows adding material such as lecture slides, book chapters, videos, and LTIs. A visual interface assists the creation of the needed hierarchical structures. The goals of our system were formed in expert interviews, and we explain how our design meets these goals. We adapt a real-world course into booc.io and perform an introductory qualitative evaluation with students.
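
A rough sketch of the underlying structure (hypothetical, not booc.io's code): concepts form a tree, each node can carry attached material, and a depth-first traversal yields one possible linear presentation order while the tree itself supports non-linear navigation.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    materials: list = field(default_factory=list)  # slides, chapters, videos, ...
    children: list = field(default_factory=list)

def linearize(node):
    """Depth-first flattening: one possible linear course order."""
    order = [node.name]
    for child in node.children:
        order.extend(linearize(child))
    return order

course = Concept("Circuits", children=[
    Concept("Ohm's Law", materials=["slides_week1.pdf"]),
    Concept("Kirchhoff's Laws", children=[Concept("KCL"), Concept("KVL")]),
])
# Prints: Circuits -> Ohm's Law -> Kirchhoff's Laws -> KCL -> KVL
print(" -> ".join(linearize(course)))
```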

Icon: An Interactive Approach to Train Deep Neural Networks for Segmentation of Neuronal Structures
Gonda F, Kaynig V, Thouis R, Haehn D, Lichtman J, Parag T, Pfister H. Icon: An Interactive Approach to Train Deep Neural Networks for Segmentation of Neuronal Structures. arXiv preprint; 2016.

We present an interactive approach to train a deep neural network pixel classifier for the segmentation of neuronal structures. An interactive training scheme reduces the extremely tedious manual annotation task that is typically required for deep networks to perform well on image segmentation problems. Our proposed method employs a feedback loop that captures sparse annotations using a graphical user interface, trains a deep neural network based on recent and past annotations, and displays the prediction output to users in almost real-time. Our implementation of the algorithm also allows multiple users to provide annotations in parallel and receive feedback from the same classifier. Quick feedback on classifier performance in an interactive setting enables users to identify and label examples that are more important than others for segmentation purposes. Our experiments show that an interactively trained pixel classifier produces better region segmentation results on Electron Microscopy (EM) images than those generated by a network of the same architecture trained offline on exhaustive ground-truth labels.
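
The feedback loop itself is simple to sketch. The toy below is self-contained but entirely schematic (a nearest-mean intensity classifier stands in for the deep network, and hard-coded scribbles stand in for GUI annotations): labels accumulate across rounds, the model is refit on all annotations so far, and the prediction is recomputed for display.

```python
import numpy as np

rng = np.random.default_rng(0)
image = np.clip(rng.normal(0.3, 0.1, (64, 64)), 0.0, 1.0)
image[16:48, 16:48] += 0.4  # a bright region standing in for a neuronal structure

annotations = []     # (row, col, label) triples, accumulated across rounds
scribble_rounds = [  # stand-ins for sparse GUI annotations, one list per round
    [(20, 20, 1), (5, 5, 0)],
    [(30, 40, 1), (60, 10, 0)],
    [(40, 25, 1), (10, 60, 0)],
]

for round_no, scribbles in enumerate(scribble_rounds, start=1):
    annotations.extend(scribbles)  # keep both past and recent annotations
    # "Retrain": per-class mean intensity over every annotated pixel so far.
    fg = np.mean([image[r, c] for r, c, y in annotations if y == 1])
    bg = np.mean([image[r, c] for r, c, y in annotations if y == 0])
    # "Display": classify each pixel by the nearer class mean.
    prediction = np.abs(image - fg) < np.abs(image - bg)
    print(f"round {round_no}: predicted foreground pixels = {prediction.sum()}")
```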

Follow @HarvardVCG on Twitter.