Glaze is an AI project from the University of Chicago that aims to protect human artists against generative AI style mimicry. Glaze uses machine learning algorithms to compute a set of minimal changes to an artwork such that the image appears unchanged to human eyes but presents a dramatically different art style to AI models. For example, a Glazed piece of art may look untouched to a human viewer while an AI model classifies its style completely differently; users invoking the artwork or the artist in their prompt would then receive something different from what they expect.
Key properties of Glaze include the following:
- Image specific—the cloaks (changes to the artwork) needed to prevent AI from stealing an artwork's style are different for each image. Glaze's cloaking tool runs locally on the user's computer, computing the cloak for a given original image and specified target style.
- Model agnostic—adding a single cloak to an image prevents different AI models (e.g., Midjourney, Stable Diffusion, etc.) from stealing its style. However, it is difficult to predict performance on new or proprietary models.
- Robust against removal—cloaks cannot be easily removed from the artwork by common image transformations (e.g., sharpening, blurring, denoising, downsampling, or stripping of metadata).
- A stronger cloak leads to greater protection—Glaze can control how much the cloak modifies the original artwork, from completely imperceptible changes to slightly more visible modifications. Larger modifications provide stronger protection against AI models recreating the same style (see the sketch after this list).
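As a concrete illustration of how such a cloak could be computed, here is a minimal sketch, not the official Glaze implementation: it assumes a pretrained VGG16 as a stand-in for a generative model's image feature extractor, a simple per-pixel budget in place of a perceptual constraint, and caller-supplied `original` and `style_target` tensors (the latter standing in for a style-transferred copy of the original). The optimizer nudges the cloaked image's features toward the target style while keeping pixel changes within the budget, mirroring the trade-off described in the last bullet.

```python
# Hypothetical sketch of a style cloak; NOT the official Glaze implementation.
# Assumptions: VGG16 stands in for the generative model's feature extractor,
# and a plain per-pixel (L-infinity) budget stands in for a perceptual bound.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
phi = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].to(device).eval()
for p in phi.parameters():
    p.requires_grad_(False)

def compute_cloak(original, style_target, budget=0.05, steps=200, lr=0.01):
    """Find a small delta so that phi(original + delta) ~ phi(style_target)."""
    delta = torch.zeros_like(original, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target_feat = phi(style_target)
    for _ in range(steps):
        opt.zero_grad()
        cloaked = (original + delta).clamp(0, 1)
        loss = torch.nn.functional.mse_loss(phi(cloaked), target_feat)
        loss.backward()
        opt.step()
        # Keep the change imperceptible: project delta back into the budget.
        with torch.no_grad():
            delta.clamp_(-budget, budget)
    return (original + delta).detach().clamp(0, 1)

# Example usage with random stand-in images (batch of 1, 3x224x224):
original = torch.rand(1, 3, 224, 224, device=device)
style_target = torch.rand(1, 3, 224, 224, device=device)  # style-transferred copy
cloaked = compute_cloak(original, style_target)
```

Raising `budget` corresponds to the stronger-cloak setting above: the optimizer gets more room to move features toward the target style at the cost of more visible changes.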
The underlying techniques used by Glaze's cloaking tool draw directly on the properties of adversarial examples, a phenomenon in which small changes to an input produce large differences in how AI models classify it.
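The phenomenon itself is easy to demonstrate. Below is a textbook fast gradient sign method (FGSM) sketch, unrelated to Glaze's own code: with an off-the-shelf ResNet-18 and a stand-in image, a perturbation bounded to a small epsilon per pixel is often enough to change the model's prediction.

```python
# Textbook FGSM demonstration of adversarial examples; not Glaze code.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image, label, epsilon=0.01):
    """One signed-gradient step that maximizes the loss on `label`."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Each pixel moves by at most +/- epsilon, so the image looks unchanged.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

image = torch.rand(1, 3, 224, 224)       # stand-in input image
label = model(image).argmax(dim=1)       # the model's original prediction
adversarial = fgsm(image, label)
print(model(image).argmax(dim=1).item(),
      model(adversarial).argmax(dim=1).item())
```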
Glaze was developed by the SAND Lab (Security, Algorithms, Networking, and Data) at the University of Chicago. The two project leads are Shawn Shan and Prof. Ben Zhao. The project is funded by research grants, which keeps the tools free for artists to use. A paper describing Glaze, "Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models," was first submitted on February 8, 2023. In August 2023, the paper was presented at the USENIX Security Symposium in Anaheim, California, winning a Distinguished Paper Award and the 2023 USENIX Internet Defense Prize. Glaze was initially released on March 15, 2023. After the release, the team behind Glaze worked to make the tool more accessible, especially for artists without access to powerful GPU-equipped computers. In August 2023, the team released WebGlaze, a web service that artists can use from a phone, tablet, or any device with a browser.
On October 20, 2023, the same team submitted a paper describing their follow-up project, Nightshade. The successor uses a prompt-specific poisoning attack in which poison samples look visually identical to benign images while carrying matching text prompts.
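To illustrate the idea (not Nightshade's actual method or code), the following hedged sketch reuses the hypothetical `compute_cloak` optimizer from the earlier sketch: an image that still looks like a dog to humans has its features pulled toward a cat image and is paired with a benign "dog" prompt, so a model trained on enough such pairs associates the wrong visual concept with that prompt.

```python
# Hypothetical sketch only; reuses compute_cloak and device from the
# style-cloak sketch above. Random tensors stand in for real images.
dog_image = torch.rand(1, 3, 224, 224, device=device)  # looks like a dog (stand-in)
cat_image = torch.rand(1, 3, 224, 224, device=device)  # target concept (stand-in)

# The poison image stays visually a "dog" (small perturbation) while its
# features resemble the cat image; the text prompt remains benign.
poison_image = compute_cloak(dog_image, cat_image)
poison_sample = {"image": poison_image, "prompt": "a photo of a dog"}
```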