Using sports-TV technology to help the military in the field

Lance said that manned and unmanned sensors are gathering too much video and other intelligence data to be sifted through and analyzed manually.

Matthews writes that Lance’s answer is straightforward: use the same systems and technology employed by commercial broadcasters. Each day they create and successfully manage 30 times the volume of video and other digital content that the military struggles with, he said.

This is why the football fan sees high-definition video augmented with explanatory audio. Boxes pop onto the screen to display the score, the down and the time remaining in the quarter. Instant replays present touchdowns from multiple angles. Hand-drawn arrows show how the receiver eluded the defenders. For the viewer, the result is excellent real-time situational awareness, Lance said.

By contrast, U.S. troops typically receive just the video feed. It’s like watching a football game without all the extra information. “You don’t know what down it is, you don’t know what the score is. It’s just raw video,” Lance said.

Matthews writes that a major reason for the difference between what broadcasters can do and what the military cannot is metadata, or data that describes data. Video shot by a UAV, for instance, would include such metadata as the date it was shot, altitude, location, time, and possibly other details such as camera angles.

Such information is useful because it enables intelligence analysts to search, for example, for video from certain locations shot on certain dates or at certain times. “When the military runs an operation, there’s a lot of metadata they don’t capture,” Delay said.
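
To make that concrete, here is a minimal sketch of how tagged clips might be represented and searched. The record fields and the search function are illustrative assumptions for this sketch, not Harris' actual schema or API:

```python
from dataclasses import dataclass

# Illustrative per-clip metadata record; the field names here are
# assumptions, and real schemas carry far more detail.
@dataclass
class ClipMetadata:
    clip_id: str
    shot_date: str        # date the video was shot, e.g. "2009-06-14"
    shot_time: str        # time of capture, e.g. "14:32:05Z"
    altitude_m: float     # sensor altitude when the video was shot
    latitude: float       # ground location covered by the clip
    longitude: float
    camera_angle_deg: float

def find_clips(clips, date, lat, lon, radius_deg=0.05):
    """Return clips shot on a given date near a given location."""
    return [c for c in clips
            if c.shot_date == date
            and abs(c.latitude - lat) <= radius_deg
            and abs(c.longitude - lon) <= radius_deg]
```

A query like find_clips(archive, "2009-06-14", 34.5, 69.2) would pull back only the clips worth watching, which is exactly the search that raw, untagged video cannot support.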

For example, metadata tags could be added whenever the UAV spots a particular type of truck, or when people on the ground seem to be digging holes or carrying weapons. The tags would then lead intelligence analysts to those parts of the video.

Matthews quotes Delay as saying that Harris has developed automated video analytics that can add metadata tags to video or audio recordings so that they can be quickly searched. One algorithm attaches a metadata tag to the video at each point where the sought-after truck appears. Another might record when the vehicle stops. Still other algorithms are designed to tag suspicious activity, such as people digging holes alongside roads or carrying weapons, Delay said.
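
In outline, such tagging amounts to turning detector output into timestamped, searchable labels. The sketch below assumes a generic detector interface feeding (time, label) pairs; it is a stand-in for Harris' proprietary analytics, not their implementation:

```python
from dataclasses import dataclass

@dataclass
class EventTag:
    clip_id: str
    t_offset_s: float   # seconds into the video where the event occurs
    label: str          # e.g. "truck_sighted", "vehicle_stopped", "digging"

def tag_events(clip_id, detections):
    """Turn raw detector hits into searchable metadata tags.

    `detections` is assumed to be (time_offset, label) pairs emitted by
    upstream video-analytics algorithms.
    """
    return [EventTag(clip_id, t, label) for t, label in detections]

# Analysts can then jump straight to the flagged moments:
tags = tag_events("uav_042", [(731.5, "truck_sighted"), (1209.0, "digging")])
truck_hits = [t for t in tags if t.label == "truck_sighted"]
```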

Once data are tagged with metadata, various data streams, including video, audio and geospatial feeds, can be examined and compared. "Event-based correlation" software can automatically tie together related intelligence gathered by different sensors at different times and places.

Thus, the truck seen near where a roadside bomb exploded might be picked out at other locations by an automated search of surveillance videos, possibly tipping U.S. troops to the whereabouts of the bombers, Delay said.
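
A crude way to picture event-based correlation is matching tagged events from different sensors that fall close together in time and space. The following sketch assumes simple point events and a flat-earth distance approximation; the fielded correlation software described above is surely more sophisticated:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class SensorEvent:
    sensor: str   # which platform produced the event
    label: str    # e.g. "truck_sighted", "ied_blast"
    t: float      # time of the event, in epoch seconds
    lat: float
    lon: float

def correlate(events, max_km=1.0, max_gap_s=3600.0):
    """Pair events from different sensors that occur within max_km and
    max_gap_s of each other. A naive stand-in for event-based
    correlation, not the actual algorithm."""
    pairs = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            if a.sensor == b.sensor:
                continue
            # Rough conversion: ~111 km per degree of latitude; ignores
            # the shrinking of longitude degrees away from the equator.
            dist_km = hypot(a.lat - b.lat, a.lon - b.lon) * 111.0
            if dist_km <= max_km and abs(a.t - b.t) <= max_gap_s:
                pairs.append((a, b))
    return pairs
```

Feeding in a "truck_sighted" tag from one UAV and an "ied_blast" report from another sensor would surface the pairing automatically, which is the kind of lead Delay describes.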

Right now, though, most military intelligence data are not well meta-tagged, he said.

Harris' FAME (Full-Motion Video Asset Management Engine) was tested during the U.S. military's Empire Challenge exercises in 2008 and 2009. Matthews writes that in the 2008 test, FAME was praised by the National Geospatial-Intelligence Agency as promising commercial technology that enables the military to quickly merge multiple pieces of disparate intelligence data into a more complete intelligence picture.

During the 2009 exercise, Harris teamed with Lockheed Martin, combining Harris' FAME with Lockheed's Audacity video management system. After the exercise, the Joint Forces Command last fall awarded Lockheed a $29 million contract under which Lockheed, Harris and a third company, NetApp, will further develop the combined FAME-Audacity system in a program known as Valiant Angel.

Delay said the new system should appear soon in Afghanistan and Iraq.