New satellite-image identification technology

Published 19 June 2008

Researchers offer the first computerized method that can analyze a single photograph and determine where in the world the image likely was taken

Readers who followed the slow unveiling of details regarding the 6 September 2007 Israeli attack on a suspected Syrian nuclear facility will remember the questions raised about satellite images of the destroyed facility that were published in newspapers last fall. ISIS’s David Albright, a respected analyst of nuclear issues, was quoted by the Washington Post as saying that the images of the facility before it was bombed convinced him that it was meant to be a nuclear reactor (see, for example, HS Daily Wire of 24 October 2007). Skeptics asked the following question: Since the bombing was shrouded in total secrecy (until the CIA released some details to Congress in May), how could Albright know which facility it was that was destroyed? Know it, moreover, and offer specific pre-bombing images of it that would prove it was a nuclear reactor in the making? Syria is not a small country, and its territory is dotted with military installations of many types and sizes. Some of these questions were undoubtedly raised by journalists who may have been envious of Albright for having scooped them; others darkly suggested that the U.S. intelligence community and Israel were selectively “feeding” satellite images to journalists and analysts in order to strengthen the case for the raid.

Thanks to researchers at Carnegie Mellon University, we may have fewer such questions in the future. These researchers have devised the first computerized method that can analyze a single photograph and determine where in the world the image likely was taken. It is a feat made possible by searching through millions of GPS-tagged images in the Flickr online photo collection. The IM2GPS algorithm developed by computer science graduate student James Hays and Alexei Efros, assistant professor of computer science and robotics, does not attempt to scan a photo for location clues, such as types of clothing, the language on street signs, or specific types of vegetation, as a person might do. Rather, it analyzes the composition of the photo, notes how textures and colors are distributed, and records the number and orientation of lines in the photo. It then searches Flickr for photos that are similar in appearance. “We’re not asking the computer to tell us what is depicted in the photo but to find other photos that look like it,” Efros said. “It was surprising to us how effective this approach proved to be. Who would have
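The approach described above can be sketched in a few lines of Python. The sketch below is an illustration only, not the actual IM2GPS code: it builds a crude global descriptor (a color histogram plus a gradient-orientation histogram, standing in for the texture, color, and line statistics the article mentions) and then returns the GPS tag of the most similar reference image. All function names and the tiny in-memory "photo collection" are hypothetical; the real system searched millions of GPS-tagged Flickr photos with richer features.

```python
import numpy as np

def descriptor(image):
    """Crude global appearance descriptor (illustrative stand-in for
    IM2GPS features): per-channel color histograms concatenated with a
    gradient-orientation histogram capturing texture and line structure."""
    # Color distribution: an 8-bin histogram for each RGB channel.
    color = np.concatenate([
        np.histogram(image[..., c], bins=8, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    # Line/texture statistics: histogram of gradient orientations on the
    # grayscale image, weighted by gradient magnitude.
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    angles = np.arctan2(gy, gx)
    lines = np.histogram(angles, bins=8, range=(-np.pi, np.pi),
                         weights=np.hypot(gx, gy))[0]
    feat = np.concatenate([color, lines])
    return feat / (np.linalg.norm(feat) + 1e-9)  # normalize for comparison

def estimate_location(query, tagged):
    """Return the GPS tag of the reference photo that looks most like the
    query. `tagged` is a list of (image, (lat, lon)) pairs -- a toy
    stand-in for a GPS-tagged photo collection such as Flickr's."""
    q = descriptor(query)
    dists = [np.linalg.norm(q - descriptor(img)) for img, _ in tagged]
    return tagged[int(np.argmin(dists))][1]
```

A usage sketch: given one bright, sandy-toned reference photo tagged in Egypt and one dark, green-toned reference tagged in Oregon, a bright query image would be matched to the Egyptian coordinates purely on overall appearance, with no attempt to recognize what the photo depicts.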