Visual Who

Professor Judith Donath
Dana Spiegel, danah boyd, Jonathan Goler

 

In the real world we continuously, though often peripherally, observe the social patterns around us. At a conference, for instance, we notice groups of people who always appear together: students whose common bond we recognize because of their shared sartorial eccentricities, groups of suited salesmen amidst a sea of t-shirt clad researchers. Such observations help us to make sense of the complex social world we live in. Similar patterns exist in the virtual world, whether it be communities of people in an online service, the employees of a global corporation, or the electronically registered participants in a conference. Yet in the virtual world, such patterns are usually invisible, hidden within vast databases; they can be seen only if explicitly visualized. Visual Who depicts these social patterns and recreates – though in a very different way – the fascination we have with people-watching in real life.

Visual Who is a tool for visualizing the complex relationships among a large group of items, where each item is characterized by a set of attributes drawn from a large pool of possible attributes. Visual Who can be used to investigate a wide range of datasets. Our focus is on its social use – as a way for members of a community to explore and come to understand the roles, ideas and histories that bring them together. Users choose anchors, each representing a topic relevant to the community, and place them on the screen. As they do so, the names of the community members rearrange themselves, showing who is especially drawn to certain ideas and which members share similar sets of interests.

For example, we have used Visual Who to depict the Media Lab community, using the Lab's mailing lists as the dataset. To explore this social space, the viewer chooses a mailing list to serve as an anchor and places it on the Visual Who screen – the names of everyone who is drawn to this list then go streaming towards this anchor. The viewer then adds another mailing list as an anchor in another part of the screen – now the names of the community members are stretched between these two anchors. Adding a third anchor spreads the population among the three marked points, and so on. Each person's location shows their relative affinity with the given anchors, and people with similar affinities are clustered together. Color shows membership in specific categories – in this case, subscription to a particular mailing list. The viewer can assign a color to any mailing list, thus highlighting various patterns in the community's structure. Depth and brightness indicate how strongly a particular item is being pulled by the given anchors: two items will be near each other if they share a similar proportional affinity to the given anchors, but one may be very strongly affiliated with them (and will thus appear bright and near the top layer) while the other is only distantly related (and will be further back and dimmer).
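As a rough illustration of the layout, each name can be positioned at the affinity-weighted average of the anchor positions, with the overall pull controlling depth and brightness. The sketch below is not the Visual Who implementation: it assumes the spring strengths toward each anchor have already been computed (the next paragraph describes how), and the names, anchors, and numbers are invented for the example.

# A minimal layout sketch (not the Visual Who code): place each name at the
# affinity-weighted average of the anchor positions, and let the absolute
# pull decide brightness and depth. All values below are invented.

anchor_positions = {"graphics": (100, 400), "audio": (500, 400), "robots": (300, 100)}

# Spring strength of each person toward each anchor, in [0, 1].
# bob's proportions mirror alice's but are much weaker, so he lands near her
# on screen while appearing dimmer and further back.
springs = {
    "alice": {"graphics": 0.9, "audio": 0.7, "robots": 0.1},
    "bob":   {"graphics": 0.3, "audio": 0.25, "robots": 0.05},
}

def layout(person):
    pulls = springs[person]
    total = sum(pulls.values())
    # Relative affinities decide where the name sits between the anchors...
    x = sum(w * anchor_positions[a][0] for a, w in pulls.items()) / total
    y = sum(w * anchor_positions[a][1] for a, w in pulls.items()) / total
    # ...while the overall pull decides how bright / near the front it appears.
    brightness = total / len(pulls)
    return (x, y), brightness

for name in springs:
    (x, y), b = layout(name)
    print(f"{name}: position=({x:.0f}, {y:.0f}), brightness={b:.2f}")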

How it works: An anchor represents a category in the dataset (e.g. membership in a mailing list). When an anchor is placed on the screen, the program calculates a profile of the prototypical member of that category: it looks at everyone who is a member of that category and enumerates the other categories in which they are members. This profile is weighted, taking into account how many members of the chosen category are members of each of the other categories, as well as how exclusive such memberships are. The program then compares each item's category membership (in our example, each person's mailing list memberships) with the weighted prototype and attaches a spring from the person's name to the anchor, the strength of the spring being proportional to their similarity. Thus, when there are three anchors on the screen, each person has three virtual springs pulling them, with varying strengths, towards the three anchor points. For a detailed discussion of the Visual Who algorithm, see "Visual Who: Animating the affinities and activities of an electronic community", ACM Multimedia '95.
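The exact weighting and similarity functions are given in the paper cited above; the sketch below is only a plausible approximation of the idea, assuming a simple co-membership weighting for the prototype profile and a cosine-style similarity for the spring strength, with invented names and mailing lists.

# Rough sketch of the anchor-profile idea, not the published algorithm:
# 1. Build a weighted profile of the "prototypical" member of the anchor's
#    category from the co-memberships of its current members.
# 2. Score every person by the similarity of their own memberships to that
#    profile; the score becomes the spring strength toward the anchor.
from math import sqrt

# person -> set of mailing lists they subscribe to (invented data)
memberships = {
    "alice": {"graphics", "audio", "music-dsp"},
    "bob":   {"graphics", "robots"},
    "carol": {"audio", "music-dsp"},
    "dave":  {"robots"},
}

def anchor_profile(anchor_list):
    """Weight each list by how often the anchor's members also belong to it,
    discounted by how common that list is overall (a stand-in for the
    'exclusivity' weighting described above)."""
    members = [p for p, lists in memberships.items() if anchor_list in lists]
    profile = {}
    for lst in {l for p in members for l in memberships[p]}:
        co_members = sum(1 for p in members if lst in memberships[p])
        total_subscribers = sum(1 for lists in memberships.values() if lst in lists)
        profile[lst] = (co_members / len(members)) * (co_members / total_subscribers)
    return profile

def spring_strength(person, profile):
    """Cosine-style similarity between a person's memberships and the profile."""
    overlap = sum(w for lst, w in profile.items() if lst in memberships[person])
    norm = sqrt(sum(w * w for w in profile.values())) * sqrt(len(memberships[person]))
    return overlap / norm if norm else 0.0

profile = anchor_profile("audio")
for person in memberships:
    print(person, round(spring_strength(person, profile), 2))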

 
 
 

Laser Who

Professors Judith Donath & Joseph Paradiso
Dana Spiegel, danah boyd, Jonathan Goler,
Kai-yuh Hsiao, Chris Yang, Ari Adler,
Jeff Hayashida, Josh Strickon,
Ari Benbasat

 

LaserWho is an interactive, gesture-based visualization of the affiliations within a community. It combines the Sociable Media Group's Visual Who project with the Responsive Environments Group's Laser Wall. Users place anchors, each representing a different topic relevant to that community, on the screen. The names of people who are particularly drawn to a topic flow towards the associated anchor; the resulting animation reveals the structure of the community in terms of shared interests and affiliations.

Although LaserWho uses data visualization algorithms and techniques, the feel of this installation is very different from the typical analytic application. The image is shown on a large, rear-projection screen. All input is via natural, intuitive gestures of picking up and placing objects – an innovative laser range-finding system allows for fast, accurate hand-tracking. The immersive quality is further enhanced with sound: the piece is accompanied by a musical composition in which each anchor has its own theme, shaped and modulated by the changing state of the system.
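Purely as an illustration of how the system's state might drive the music – the actual LaserWho composition and sound engine are not described here – one could imagine each anchor's theme being mixed in proportion to the total pull toward that anchor. The anchor names, spring values, and the simple volume mapping below are all assumptions.

# Hypothetical sketch only: mix each anchor's musical theme according to how
# strongly the community is currently pulled toward it. This is an invented
# mapping, not the actual LaserWho sound design.

springs = {
    "alice": {"graphics": 0.9, "audio": 0.7, "robots": 0.1},
    "bob":   {"graphics": 0.3, "audio": 0.25, "robots": 0.05},
}

def theme_levels(springs):
    """Return a 0..1 mixing level per anchor from the aggregate pull toward it."""
    totals = {}
    for pulls in springs.values():
        for anchor, strength in pulls.items():
            totals[anchor] = totals.get(anchor, 0.0) + strength
    peak = max(totals.values()) or 1.0
    return {anchor: total / peak for anchor, total in totals.items()}

print(theme_levels(springs))  # e.g. {'graphics': 1.0, 'audio': ~0.79, 'robots': ~0.13}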

LaserWho was first publicly exhibited in Venice, Italy in November 1999 as part of the Opera Totale 5 festival. In August 2000, it was shown at the SIGGRAPH Emerging Technologies exhibit. For this exhibition, a database was built from 25 years of SIGGRAPH publications: the community members were paper authors and the topics were their papers' keywords. This ensured that the names and relationships appearing in the installation were familiar and meaningful to the viewers.