National Science Foundation
Discovery
Image Building

New computer technology mines photo databases for missing imagery

[Figure: original image, input with missing region, scene matches, and completed output. Caption: A powerful image manipulation algorithm completes images with missing regions.]

December 18, 2008

Taking inspiration from Google, a team of researchers funded by the National Science Foundation developed a powerful new algorithm that searches large collections of images located on the World Wide Web to create novel imagery or fill in missing information in existing photographs.

The algorithm uses a dataset of 2.3 million photographs downloaded from a community sharing Web site to find good scene matches for a given image. The pixels from these matching photos are then used to fill in the hole in a seamless and "semantically valid" way.
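The matching step described above can be sketched as a brute-force nearest-neighbor search over coarse scene descriptors. The sketch below is illustrative rather than the published system: it uses a simple downsampled "tiny image" descriptor as a stand-in for the richer scene descriptor the researchers used, and the function names are hypothetical.

```python
import numpy as np

def tiny_descriptor(img, size=16):
    """Coarse scene descriptor: average-pool a grayscale image down to a
    small grid, so two photos of similar scenes get similar vectors.
    (A stand-in for the scene descriptor used in the actual work.)"""
    h, w = img.shape[:2]
    ys = np.linspace(0, h, size + 1).astype(int)
    xs = np.linspace(0, w, size + 1).astype(int)
    out = np.empty((size, size))
    for i in range(size):
        for j in range(size):
            out[i, j] = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    return out.ravel()

def best_scene_matches(query, database, k=20):
    """Brute-force search: return indices of the k database images whose
    descriptors are closest (L2 distance) to the query's descriptor."""
    q = tiny_descriptor(query)
    dists = [np.linalg.norm(q - tiny_descriptor(img)) for img in database]
    return np.argsort(dists)[:k]
```

The pixels of the top-ranked matches would then be candidates for filling the hole; at the scale of millions of photos, the real system must organize this search far more efficiently than the linear scan shown here.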

"It's seamless because the human eye can't detect the manipulation and semantically valid because the borrowed pixels appear in context," said Alexei Efros, a computer scientist at Carnegie Mellon University. "A motorcycle wheel and a Ferris wheel have the same basic shape, but one can't be substituted for the other when completing an image."

Unlike existing technologies, which require the algorithm to go through a long learning process with constant feedback loops to improve its decision-making ability, the new technique works like a large-scale, data-driven search engine such as Google: rather than training a model, it simply searches for the data it needs.

"It searches everything, all 2 million photos to find images that look similar to the given image," Efros said. How successful is it? "Images completed using the technique fooled a focus group two-thirds of the time, while the best competing technique only fooled them one-third of the time."

The researchers believe their algorithm suggests a new way of using large image collections for "brute-force" solutions to many long-standing problems in computer graphics and computer vision.

"Our chief insight is that while the space of images is effectively infinite, the photos people take are actually not that diverse," Efros said. "So for many image completion tasks, we are able to find similar scenes that contain image fragments which will convincingly complete the image."

The algorithm is entirely data-driven, requiring no annotations or labeling by the user. And unlike existing image completion methods, it can generate a diverse set of image completions and let users select among them.
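A minimal illustration of that idea, assuming simple grayscale arrays: borrow the hole pixels from each matching scene, rank the resulting candidates by how well the match agrees with the known pixels around the hole boundary, and hand the ranked list to the user. This is a crude stand-in for the seam-finding and blending in the actual system, and all names here are hypothetical.

```python
import numpy as np

def boundary_ring(mask):
    """Known pixels that border the hole (4-neighbourhood; shifts wrap at
    image edges, which is acceptable for this sketch)."""
    grow = mask.copy()
    for axis in (0, 1):
        for shift in (1, -1):
            grow |= np.roll(mask, shift, axis=axis)
    return grow & ~mask

def candidate_completions(target, mask, matches):
    """Fill the hole with each matching scene's pixels, score every result
    by colour agreement along the hole boundary, and return all candidates
    from best to worst so the user can choose among them."""
    ring = boundary_ring(mask)
    scored = []
    for m in matches:
        out = target.copy()
        out[mask] = m[mask]  # borrow the hole region from the match
        err = float(np.abs(m[ring] - target[ring]).mean())
        scored.append((err, out))
    scored.sort(key=lambda pair: pair[0])
    return [out for _, out in scored]
```

Returning every candidate rather than a single answer is the design point: the system cannot know which completion the user intends, so it offers several plausible ones.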

The underlying large-scale data-driven search engine for the scene completion technique has a potential application beyond correcting damaged or deficient images. It could be used by the military or law enforcement to estimate where a photograph of a terrorist or a kidnap victim was taken.

"A human expert would be better, but the algorithm could give a rough first pass and narrow down the location," Efros said. "It would help focus the available resources where they need to be."

--  Diane E. Banegas, (703) 292-8070 dbanegas@nsf.gov

Investigators
Alexei Efros

Related Institutions/Organizations
Carnegie Mellon University

Related Awards
#0541230 Data-Driven Appearance Transfer for Realistic Image Synthesis

Total Grants
$311,924
