Using visualization to see through fuzzy data

Published 15 November 2007

Finding method in the madness: DHS’s S&T Directorate supports efforts, building on Edward Tufte’s work, to use visualization to find patterns in and make sense of fuzzy data

After the 9/11 attacks, many complained that U.S. intelligence and law enforcement agencies failed to connect the dots: The dots — nuggets of relevant information — were available, and even on file, but no one was there to connect them in order to draw a coherent picture of a terror plot. The trouble was that whatever relevant information existed, it was buried in a mountain of data — indeed, a moving mountain, since large amounts of data are collected every day. In the United States, a day’s take would fill more than six million 160-gigabyte iPods. This is just the first problem. The second is this: Even if it were possible to sift through all the information in order to identify which pieces are relevant, there would be no agreement among analysts regarding what this relevant information means, exactly. The third problem is that most nuggets are cast in unstructured, “fuzzy” data. Clues — or are they clues? — do not come neatly packaged in a tidy spreadsheet or searchable text. Clues, if that is what they are, must be inferred from photos, videos, and voice recordings.
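The six-million-iPod figure can be sanity-checked with back-of-the-envelope arithmetic; this sketch simply works out what that figure implies in absolute terms (the constants come from the article itself):

```python
# What does "more than six million 160-gigabyte iPods" per day amount to?
IPOD_GB = 160              # iPod capacity cited in the article
IPODS_PER_DAY = 6_000_000  # number of iPods cited in the article

total_gb = IPOD_GB * IPODS_PER_DAY           # 960,000,000 GB per day
total_exabytes = total_gb / 1_000_000_000    # 1 exabyte = 10^9 GB

print(f"{total_exabytes:.2f} exabytes per day")  # prints "0.96 exabytes per day"
```

In other words, roughly an exabyte of new data each day — which is why manual sifting is off the table and automated visual analysis is attractive.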

In the last fifteen years, Edward Tufte — the New York Times called him “The Leonardo Da Vinci of Data” — catalogued ways to display data that were either structured (train schedules) or similar (death rates). Today, researchers at the DHS Science and Technology Directorate are creating ways to see fuzzy data as a three-dimensional picture in which threat clues can jump out. The field of visual analytics “takes Tufte’s work to the next generation,” says Dr. Joseph Kielman, Basic Research Lead for the Directorate’s Command, Control, and Interoperability Division. Kielman advises the National Visualization and Analytics Center, based at Pacific Northwest National Laboratory, and its university partners, called the regional centers. The centers’ interdisciplinary researchers are trying to automate how analysts recognize and rate potential threats. Mathematicians, logicians, and linguists make the collective universe of data assume a meaningful shape. They assign brightness, color, texture, and size to billions of known and apparent facts, and they create rules to integrate these values so threats stand out. For example, a day’s cache of video, cell phone calls, photos, bank records, chat rooms, and intercepted emails may take shape as a blue-gray cloud. If terror is afoot in L.A. and Boston, those cities are highlighted on a U.S. map. A month of static views might be animated as a “temporal” movie, in which a swelling ridge reveals a growing threat.
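The core idea — assigning visual channels such as brightness, color, and size to individual facts by rule, so alarming items stand out — can be sketched in a few lines. This is a toy illustration, not the centers’ actual software; the field names, thresholds, and scoring rules here are all hypothetical:

```python
# Toy sketch of the visual-analytics mapping described above: each data
# "fact" is encoded as visual attributes (brightness, color, size) by
# simple rules, so high-threat items visually stand out. All names,
# thresholds, and data are invented for illustration.
from dataclasses import dataclass

@dataclass
class Fact:
    source: str          # e.g. "email", "bank_record", "video"
    city: str
    threat_score: float  # 0.0 (benign) to 1.0 (alarming), assumed precomputed

def to_visual(fact: Fact) -> dict:
    """Map a fact to visual channels; threats become large, red, and bright."""
    return {
        "brightness": 0.2 + 0.8 * fact.threat_score,          # dim -> bright
        "color": "red" if fact.threat_score > 0.7 else "blue-gray",
        "size": 1 + round(9 * fact.threat_score),             # 1-10 pixels
        "label": fact.city,
    }

facts = [
    Fact("email", "Los Angeles", 0.9),
    Fact("bank_record", "Boston", 0.8),
    Fact("photo", "Omaha", 0.1),
]
for f in facts:
    print(to_visual(f))
```

Rendered over a map, rules like these would leave the bulk of the data as the article’s “blue-gray cloud” while highlighting L.A. and Boston.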

“We’re not looking for ‘meaning,’ per se,” Kielman explains, “but for patterns that will let us detect the expected and discover the unexpected.” Neither the researchers nor the analysts, he says, need to understand the terrorists’ language — no small advantage, given the shortage of cleared linguists. Yes, it will be years before visual analytics can automatically puzzle out clues from fuzzy data such as video, cautions Kielman: “The pre-9/11 chatter didn’t say, ‘We’re going to plow airplanes into the Twin Towers.’ To correlate these facts, you must get relational,” connecting screen names with bank records, bank records with faces. How researchers will get there remains an unwritten story. With each chapter, though, the picture becomes less fuzzy.
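Kielman’s point about “getting relational” — connecting screen names with bank records, bank records with faces — amounts to following chains of shared identifiers across data sources. A minimal sketch, with entirely invented data and field names:

```python
# Illustrative sketch of "getting relational": linking records across
# sources via shared identifiers. The identifiers and data are invented.
from typing import Optional

screen_names = {"falcon99": "acct-4412"}    # screen name -> bank account
bank_records = {"acct-4412": "photo-0031"}  # bank account -> ID photo on file

def correlate(name: str) -> Optional[str]:
    """Follow the chain screen name -> bank record -> face, if it exists."""
    acct = screen_names.get(name)
    return bank_records.get(acct) if acct else None

print(correlate("falcon99"))  # prints "photo-0031"
print(correlate("unknown"))   # prints "None"
```

Real systems would chain far noisier links (faces in video, voices on calls), which is why Kielman cautions that automating this for fuzzy data remains years away.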