Taking Video Surveillance to New Lengths
When Christopher Jaynes, a professor of computer science, talks about his interest in wide-area video surveillance, he really does mean wide area. "Miles, maybe an entire city, covered with perhaps 10,000 sensors," he says.
While an ongoing focus of traditional video surveillance research is how to calibrate multiple cameras effectively to view the same scene, Jaynes's interest in surveillance networks of such groundbreaking scope poses a new research challenge: each of his cameras will see a different scene.
"With the wide-view surveillance I’m envisioning, there is no overlap. You can't do anything traditional. Now you have to be able to match objects as they move from one view to the next, so as you leave camera A, I have to be able to recognize you as you transition to camera B," Jaynes explains.
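One simple way to frame the hand-off Jaynes describes is appearance matching: summarize the object leaving camera A as a coarse color signature, then compare it against every object entering camera B. The sketch below is illustrative only, not Jaynes's actual method; the function names, the histogram-intersection score, and the 0.5 threshold are all assumptions for the example.

```python
from collections import Counter

def color_histogram(pixels, bins=4):
    """Quantize (r, g, b) pixels into a coarse, normalized color histogram."""
    step = 256 // bins
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = sum(counts.values())
    return {key: n / total for key, n in counts.items()}

def similarity(hist_a, hist_b):
    """Histogram intersection: 1.0 means identical appearance signatures."""
    return sum(min(hist_a.get(k, 0.0), hist_b.get(k, 0.0))
               for k in set(hist_a) | set(hist_b))

def match_across_cameras(departed_hist, candidates, threshold=0.5):
    """Pick the camera-B candidate whose appearance best matches the object
    that just left camera A; return None if nothing is close enough."""
    best_id, best_score = None, threshold
    for track_id, pixels in candidates.items():
        score = similarity(departed_hist, color_histogram(pixels))
        if score > best_score:
            best_id, best_score = track_id, score
    return best_id
```

Real re-identification systems would combine appearance with motion, timing, and camera topology; a color histogram alone only conveys the basic idea of matching without overlapping views.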
While Jaynes's research is still in its preliminary stages, he has already been approached about possible applications for Homeland Security and military reconnaissance, in which surveillance sensors could be deployed aerially over an entire city or planes might drop thousands of sensors over a battle zone, as well as, on a smaller scale, security systems for Las Vegas casinos.
With such massive scale, Jaynes's proposed surveillance system would be able to track "anomalous behavior exhibited over wide areas." So while traditional surveillance emphasizes identifying a single "bad" behavior in a single camera view (say, a robbery at a gas station), Jaynes's system could track undesirable or unusual behavior over miles. His case in point: Timothy McVeigh. While any single camera at a single location may have captured McVeigh in seemingly normal behavior on the day of the Murrah Federal Building explosion, a city-wide surveillance system, capturing him buying fertilizer, then renting a truck, and then parking outside the building, may have been able to recognize a dangerous pattern in his activity.
Yet getting the system to recognize and red-flag such unusual behavior patterns will be no small feat. Supercomputers will be necessary to process the trillions of pixels of data recorded by the sensors every second. And the computers must be "taught" to filter out innocuous background movements and routine behaviors so that only suspect movements are detected and pinpointed in real time, a task in which Jaynes and his team are making early strides.
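The pixel-level half of that filtering problem is classically handled with a background model: maintain a slowly updating picture of the empty scene and keep only the pixels that deviate from it. The sketch below is a generic running-average version for illustration, not the team's actual pipeline; the alpha and threshold values are assumptions.

```python
def update_background(background, frame, alpha=0.05):
    """Running-average background model: fold each new frame in slowly,
    so gradual lighting drift is absorbed while sudden movers stand out."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def moving_pixels(background, frame, threshold=30):
    """Indices of pixels that differ enough from the background to count
    as motion; everything else is filtered out as routine scenery."""
    return [i for i, (b, f) in enumerate(zip(background, frame))
            if abs(f - b) > threshold]
```

At the scale Jaynes envisions, discarding the static background this way is what makes the remaining per-second data volume tractable: only the handful of moving regions, not every pixel, needs deeper analysis.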
"You have to build these big models of what is 'normal' by using techniques such as our adaptive frequency filters. In some sense, you're looking for things that don’t conform to the chaos you’ve seen before," says Jaynes. "Fundamentally, we want the computer to be able to find the regions that are salient to a scene automatically, and that’s a huge challenge."
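At the behavior level, the "model of normal" Jaynes describes can be caricatured as a frequency table: count how often each kind of event occurs at each location, and flag events that are rare relative to the history. This toy stand-in is not his adaptive frequency filters; the class name, event encoding, and 5% rarity cutoff are all illustrative assumptions.

```python
from collections import Counter

class NormalcyModel:
    """Toy frequency model of 'normal': count (location, behavior) events
    and flag those that are rare relative to everything seen so far."""

    def __init__(self, anomaly_rate=0.05):
        self.counts = Counter()
        self.total = 0
        self.anomaly_rate = anomaly_rate

    def observe(self, event):
        """Fold one observed event into the model of normal activity."""
        self.counts[event] += 1
        self.total += 1

    def is_anomalous(self, event):
        """An event is anomalous if its observed share of all activity
        falls below the rarity threshold."""
        if self.total == 0:
            return True  # no history yet: everything is novel
        return self.counts[event] / self.total < self.anomaly_rate
```

A deployed system would need far richer event representations and sequence modeling (the McVeigh example is a *pattern* of individually normal events), but the core inversion is the same: instead of defining what is suspicious, model the chaos you have seen and flag what does not conform.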
About Christopher Jaynes
Christopher Jaynes, who came to UK in 1998, is director and founder of UK's Metaverse Lab, an interdisciplinary laboratory for media environments, at the Center for Visualization and Virtual Environments. He is also the center's director of research activity. Jaynes's interest in video surveillance technology extends to other non-security applications, including computer-assisted "intelligent environments" that would allow for human-computer interface without traditional keyboards or monitors.