Adobe’s AI prototype pastes objects into photos while adding realistic lighting and shadows

Every year at Adobe Max, Adobe shows off what it calls “Sneaks,” R&D projects that might — or might not — find their way into commercial products someday. This year is no exception, and lucky for us, we were given a preview ahead of the conference proper.

Project Clever Composites (as Adobe’s calling it) leverages AI for automatic image compositing. More specifically, it automatically predicts an object’s scale and the best place to insert it in an image, then normalizes the object’s colors, estimates the lighting conditions and generates shadows in line with the rest of the scene.

Here’s how Adobe describes it:

Image compositing lets you add yourself in to make it look like you were there. Or maybe you want to create a photo of yourself camping under a starry sky but only have images of the starry sky and yourself camping during the daytime.

I’m no Photoshop wizard, but Adobe tells me that compositing can be a heavily manual, tedious and time-consuming process. Normally, it involves finding a suitable image of an object or subject, carefully cutting that object or subject out of the image and editing its color, tone, scale and shadows to match the rest of the scene into which it’s being pasted. Adobe’s prototype does away with this.

“We developed a more intelligent and automated technique for image object compositing with a new compositing-aware search technology,” Zhifei Zhang, an Adobe research engineer on the project, told TechCrunch via email. “Our compositing-aware search technology uses multiple deep learning models and millions of data points to determine semantic segmentation, compositing-aware search, scale-location prediction for object compositing, color and tone harmonization, lighting estimation, shadow generation and others.”

Adobe Clever Composites. Image Credits: Adobe

According to Zhang, each of the models powering the image-compositing system is trained independently for a specific task, like searching for objects consistent with a given image in terms of geometry and semantics. The system also leverages a separate, AI-based auto-compositing pipeline that takes care of predicting an object’s scale and location for compositing, tone normalization, lighting condition estimation and synthesizing shadows.
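Adobe hasn’t published the internals, but Zhang’s description suggests a staged pipeline in which each independently trained model hands its output to the next. The Python sketch below is purely illustrative of that structure; every function, class and value is a hypothetical placeholder rather than Adobe’s actual API, and the model calls are stubbed out.

```python
# Hypothetical sketch of a staged auto-compositing pipeline, loosely mirroring
# the stages Adobe describes. All names and interfaces are placeholders; in
# practice each step would be backed by its own independently trained model.
from dataclasses import dataclass


@dataclass
class CompositePlan:
    scale: float
    location: tuple  # (x, y) insertion point in the background image
    notes: str


def search_compatible_object(background, query):
    """Compositing-aware search: find an object cutout whose geometry and
    semantics are consistent with the background (stubbed)."""
    return {"cutout": f"{query}_cutout", "mask": f"{query}_mask"}


def predict_scale_and_location(background, obj):
    """Predict how large the object should be and where it belongs (stubbed)."""
    return 0.35, (420, 310)


def harmonize_color_and_tone(obj, background):
    """Match the object's color and tone to the background (stubbed)."""
    return obj


def estimate_lighting(background):
    """Estimate the scene's lighting conditions (stubbed)."""
    return {"direction": (-0.5, -0.8), "intensity": 0.7}


def synthesize_shadow(obj, lighting):
    """Generate a shadow consistent with the estimated lighting (stubbed)."""
    return f"shadow_for_{obj['cutout']}"


def auto_composite(background, query):
    """The 'glue': run each stage in order and return the composite plan."""
    obj = search_compatible_object(background, query)
    scale, location = predict_scale_and_location(background, obj)
    obj = harmonize_color_and_tone(obj, background)
    lighting = estimate_lighting(background)
    shadow = synthesize_shadow(obj, lighting)
    return CompositePlan(scale, location, f"placed with {shadow}")


if __name__ == "__main__":
    print(auto_composite("starry_sky.jpg", "person_camping"))
```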

The result is a workflow that allows users to composite objects with just a few clicks, Zhang claims.

“Achieving automatic object compositing is challenging, as there are several components of the process that need to be composed. Our technology serves as the ‘glue’ as it allows all these components to work together,” Zhang said.

As with all Sneaks, the system could forever remain a tech demo. But Zhang, who believes it’d make a “great addition” to Photoshop and Lightroom, says work is already underway on an improved version that supports compositing 3D objects, not just 2D.

“We aim to make this common but difficult task of achieving realistic and clever composites for 2D and 3D completely drag-and-drop,” Zhang said. “This will be a game-changer for image compositing, as it makes it easier for those who work on image design and editing to create realistic images since they will now be able to search for an object to add, carefully cut out that object and edit the color, tone or scale of it with just a few clicks.”

