Despite community and investor enthusiasm around visual generative AI, the output from such systems is not always ready for real-world use; one example is that generative AI systems tend to output entire images (or a series of images, in the case of video), rather than the individual, isolated elements that are typically required for diverse applications in multimedia, and for visual effects practitioners.
A simple example of this is clip-art designed to ‘float’ over whatever target background the user has selected:
Transparency of this kind has been commonly available for over thirty years; since the digital revolution of the early 1990s, users have been able to extract elements from video and images through an increasingly sophisticated series of toolsets and techniques.
For instance, the challenge of ‘dropping out’ blue-screen and green-screen backgrounds in video footage, once the purview of expensive chemical processes and optical printers (as well as hand-crafted mattes), would become the work of minutes in systems such as Adobe’s After Effects and Photoshop applications (among many other free and proprietary programs and systems).
Once an element has been isolated, an alpha channel (effectively a mask that obscures any non-relevant content) allows any element in the video to be effortlessly superimposed over new backgrounds, or composited together with other isolated elements.
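For readers who have not worked with alpha channels directly, the ‘over’ operation described above amounts to a per-pixel weighted blend. A minimal sketch in Python (using NumPy and Pillow; the file names are placeholders, and the two images are assumed to be the same size) might look like this:

```python
import numpy as np
from PIL import Image

# Placeholder file names: an RGBA element with an alpha channel, and a background
# of the same dimensions.
foreground = np.asarray(Image.open("element_rgba.png").convert("RGBA"), dtype=np.float32) / 255.0
background = np.asarray(Image.open("new_background.png").convert("RGB"), dtype=np.float32) / 255.0

# Standard 'over' compositing: the alpha channel weights the foreground
# against the background at every pixel.
alpha = foreground[..., 3:4]                      # H x W x 1
composite = foreground[..., :3] * alpha + background * (1.0 - alpha)

Image.fromarray((composite * 255).astype(np.uint8)).save("composite.png")
```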
Dropping Out
In computer vision, the creation of alpha channels falls within the aegis of semantic segmentation, with open source projects such as Meta’s Segment Anything providing a text-promptable method of isolating/extracting target objects, through semantically-enhanced object recognition.
The Segment Anything framework has been used in a wide range of visual effects extraction and isolation workflows, such as the Alpha-CLIP project.
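As an illustration of how such a workflow typically starts, the sketch below uses Meta’s released segment-anything package to extract a mask and attach it as an alpha channel. Note that the public API is prompted with points or boxes rather than free text, and the checkpoint path and click coordinates here are placeholders:

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint (path is a placeholder) and wrap it in a predictor.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt with a single foreground point; SAM returns candidate masks with scores.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 384]]),
    point_labels=np.array([1]),
    multimask_output=True,
)

# Use the highest-scoring mask as an alpha channel for the extracted element.
alpha = (masks[scores.argmax()] * 255).astype(np.uint8)
rgba = np.dstack([image, alpha])
cv2.imwrite("element_rgba.png", cv2.cvtColor(rgba, cv2.COLOR_RGBA2BGRA))
```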
There are many alternative semantic segmentation methods that can be adapted to the task of assigning alpha channels.
However, semantic segmentation relies on training data that may not contain all the categories of object that need to be extracted. Although models trained on very high volumes of data can enable a wider range of objects to be recognized (effectively becoming foundation models, or world models), they are nonetheless limited by the classes that they are trained to recognize most effectively.
In any case, semantic segmentation is just as much a post facto process as a green screen procedure, and must isolate elements without the advantage of a single swathe of background color that can be effectively recognized and removed.
For this reason, it has occasionally occurred to the user community that images and videos could be generated which actually contain green screen backgrounds that could be instantly removed via conventional methods.
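The appeal is that, once a clean green background exists, the ‘conventional method’ is trivial. A rough sketch of such a dropout using OpenCV HSV thresholding follows; the file name is a placeholder, and the hue range is an assumption that would need tuning per image:

```python
import cv2
import numpy as np

# A rough chroma-key dropout using HSV thresholding.
image = cv2.imread("greenscreen_render.png")          # placeholder file name
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Select strongly green pixels (OpenCV hue runs 0-179; ~35-85 covers green).
green_mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))
alpha = cv2.bitwise_not(green_mask)                   # background -> 0, subject -> 255

# Attach the mask as an alpha channel and save an RGBA cut-out.
b, g, r = cv2.split(image)
cv2.imwrite("subject_rgba.png", cv2.merge([b, g, r, alpha]))
```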
Unfortunately, popular latent diffusion models such as Stable Diffusion often have some difficulty rendering a really vivid green screen. This is because the models’ training data does not typically contain a great many examples of this rather specialized scenario. Even when the system succeeds, the idea of ‘green’ tends to spread in an unwanted manner to the foreground subject, due to concept entanglement:
Despite the advanced methods in use, both the woman’s dress and the man’s tie (in the lower images seen above) would tend to ‘drop out’ along with the green background – a problem that harks back* to the days of photochemical emulsion dye removal in the 1970s and 1980s.
As ever, the shortcomings of a model can be overcome by throwing specific data at a problem, and devoting considerable training resources. Systems such as Stanford’s 2024 offering LayerDiffuse create a fine-tuned model capable of generating images with alpha channels:
Unfortunately, in addition to the considerable curation and training resources required for this approach, the dataset used for LayerDiffuse is not publicly available, restricting the usage of models trained on it. Even if this impediment did not exist, this approach is difficult to customize or develop for specific use cases.
A little later in 2024, Adobe Research collaborated with Stony Brook University to produce MAGICK, an AI extraction approach trained on custom-made diffusion images.
150,000 extracted, AI-generated objects were used to train MAGICK, so that the system would develop an intuitive understanding of extraction:
This dataset, as the source paper states, was very difficult to generate for the aforementioned reason – that diffusion methods have difficulty creating solid keyable swathes of color. Therefore, manual selection of the generated mattes was necessary.
This logistic bottleneck once again leads to a system that cannot be easily developed or customized, but rather must be used within its initially-trained range of capability.
TKG-DM – ‘Native’ Chroma Extraction for a Latent Diffusion Model
A new collaboration between German and Japanese researchers has proposed an alternative to such trained methods, capable – the paper states – of obtaining better results than the above-mentioned methods, without the need to train on specially-curated datasets.
The new method approaches the problem at the generation level, by optimizing the random noise from which an image is generated in a latent diffusion model (LDM) such as Stable Diffusion.
The approach builds on a previous investigation into the color schema of a Stable Diffusion distribution, and is capable of producing background color of any kind, with less (or no) entanglement of the key background color into foreground content, compared to other methods.
The paper states:
‘Our extensive experiments demonstrate that TKG-DM improves FID and mask-FID scores by 33.7% and 35.9%, respectively.
‘Thus, our training-free model rivals fine-tuned models, offering an efficient and versatile solution for various visual content creation tasks that require precise foreground and background control.’
The new paper is titled TKG-DM: Training-free Chroma Key Content Generation Diffusion Model, and comes from seven researchers across Hosei University in Tokyo and RPTU Kaiserslautern-Landau & DFKI GmbH, in Kaiserslautern.
Method
The new approach extends the architecture of Stable Diffusion by conditioning the initial Gaussian noise through a channel mean shift (CMS), which produces noise patterns designed to encourage the desired background/foreground separation in the generated result.
CMS adjusts the mean of each color channel while maintaining the general development of the denoising process.
The authors explain:
‘To generate the foreground object on the chroma key background, we apply an init noise selection strategy that selectively combines the initial [noise] and the init color [noise] using a 2D Gaussian [mask].
‘This mask creates a gradual transition by preserving the original noise in the foreground region and applying the color-shifted noise to the background region.’
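In outline, this can be expressed as the sketch below. It is a minimal illustration based on the description above, not the authors’ code: the latent dimensions, the per-channel shift values and the Gaussian width are all illustrative assumptions.

```python
import torch

def tkg_init_noise(height=64, width=64, channels=4, shift=0.6, sigma=0.25, device="cpu"):
    """Sketch of channel-mean-shifted init noise blended via a 2D Gaussian mask.
    Shapes, per-channel shift values and sigma are illustrative assumptions."""
    noise = torch.randn(1, channels, height, width, device=device)

    # Channel mean shift: nudge the mean of each latent channel toward values
    # that decode as the target chroma color (values here are placeholders).
    channel_shift = torch.tensor([0.0, shift, -shift, 0.0], device=device).view(1, channels, 1, 1)
    color_noise = noise + channel_shift

    # 2D Gaussian mask: ~1 in the center (keep original noise for the foreground),
    # falling toward 0 at the edges (use color-shifted noise for the background).
    ys = torch.linspace(-1, 1, height, device=device).view(-1, 1)
    xs = torch.linspace(-1, 1, width, device=device).view(1, -1)
    mask = torch.exp(-(xs**2 + ys**2) / (2 * sigma**2)).clamp(0, 1).view(1, 1, height, width)

    return mask * noise + (1 - mask) * color_noise
```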
Self-attention and cross-attention are used to separate the two facets of the image (the chroma background and the foreground content). Self-attention helps with internal consistency of the foreground object, while cross-attention maintains fidelity to the text prompt. The paper points out that since background imagery is usually less detailed and emphasized in generations, its weaker influence is relatively easy to overcome and substitute with a swatch of pure color.
Data and Tests
TKG-DM was tested using Stable Diffusion V1.5 and Stable Diffusion SDXL. Images were generated at 512x512px and 1024x1024px, respectively.
Images were created using the DDIM scheduler native to Stable Diffusion, at a guidance scale of 7.5, with 50 denoising steps. The targeted background color was green, now the dominant choice for chroma dropout.
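Those baseline settings correspond roughly to the following Hugging Face Diffusers configuration. This is a sketch of the standard SD1.5 pipeline only, with an illustrative prompt; it does not include the authors’ noise modification:

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

# Baseline SD1.5 setup matching the reported settings: DDIM, CFG 7.5, 50 steps, 512px.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a product photo of a red sneaker, green screen background",  # illustrative prompt
    num_inference_steps=50,
    guidance_scale=7.5,
    height=512,
    width=512,
).images[0]
image.save("baseline_green_screen.png")
```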
The new approach was compared to DeepFloyd, under the settings used for MAGICK; to the fine-tuned low-rank diffusion model GreenBack LoRA; and also to the aforementioned LayerDiffuse.
For the data, 3000 images from the MAGICK dataset were used.
For metrics, the authors used Fréchet Inception Distance (FID) to assess foreground quality. They also developed a project-specific metric called m-FID, which uses the BiRefNet system to assess the quality of the resulting mask.
To test semantic alignment with the input prompts, the CLIP-Sentence (CLIP-S) and CLIP-Image (CLIP-I) methods were used. CLIP-S evaluates prompt fidelity, and CLIP-I the visual similarity to ground truth.
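For reference, prompt-image alignment of the CLIP-S kind can be measured along these lines. This is a generic sketch using the Hugging Face CLIP implementation, not the authors’ exact evaluation code, and the file name and prompt are placeholders:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity(image_path: str, prompt: str) -> float:
    """Cosine similarity between CLIP embeddings of an image and a prompt."""
    inputs = processor(text=[prompt], images=Image.open(image_path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    img = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
    txt = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())

print(clip_similarity("generated.png", "a red sneaker on a green background"))
```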
The authors assert that the results (visualized above and below, SD1.5 and SDXL, respectively) demonstrate that TKG-DM obtains superior results without prompt-engineering or the necessity to train or fine-tune a model.
They observe that with a prompt to incite a green background in the generated results, Stable Diffusion 1.5 has difficulty generating a clean background, while SDXL (though performing a little better) produces unstable light green tints liable to interfere with separation in a chroma process.
They further note that while LayerDiffuse generates well-separated backgrounds, it occasionally loses detail, such as precise numbers or letters, and the authors attribute this to limitations in the dataset. They add that mask generation also occasionally fails, leading to ‘uncut’ images.
For quantitative tests, though LayerDiffuse apparently has the advantage in SDXL for FID, the authors emphasize that this is the result of a specialized dataset that effectively constitutes a ‘baked’ and non-flexible product. As mentioned earlier, any objects or classes not covered in that dataset, or inadequately covered, may not perform as well, while further fine-tuning to accommodate novel classes presents the user with a curation and training burden.
The paper states:
‘DeepFloyd’s high FID, m-FID, and CLIP-I scores reflect its similarity to the ground truth based on DeepFloyd’s outputs. However, this alignment gives it an inherent advantage, making it unsuitable as a fair benchmark for image quality. Its lower CLIP-S score further indicates weaker text alignment compared to other models.
‘Overall, these results underscore our model’s ability to generate high-quality, text-aligned foregrounds without fine-tuning, offering an efficient chroma key content generation solution.’
Lastly, the researchers conducted a user study to evaluate prompt adherence across the various methods. One hundred participants were asked to judge 30 image pairs from each method, with subjects extracted using BiRefNet and manual refinements across all examples. The authors’ training-free approach was preferred in this study.
TKG-DM is compatible with the popular ControlNet third-party system for Stable Diffusion, and the authors contend that it produces superior results to ControlNet’s native ability to achieve this kind of separation.
Conclusion
Perhaps the most notable takeaway from this new paper is the extent to which latent diffusion models are entangled, in contrast to the popular public perception that they can effortlessly separate facets of images and videos when generating new content.
The study further emphasizes the extent to which the research and hobbyist community has turned to fine-tuning as a post facto fix for models’ shortcomings – a solution that will always address specific classes and types of object. In such a scenario, a fine-tuned model will either work very well on a limited number of classes, or else work tolerably well on a much higher volume of possible classes and objects, in accordance with higher amounts of data in the training sets.
Therefore it is refreshing to see at least one solution that does not rely on such laborious and arguably disingenuous remedies.
* Shooting the 1978 movie Superman, actor Christopher Reeve was required to wear a turquoise Superman costume for blue-screen process shots, to avoid the iconic blue costume being erased. The costume’s blue color was later restored via color-grading.