By Benj Edwards
On Tuesday, members of the online community ArtStation began widely protesting AI-generated artwork by placing “No AI Art” images in their portfolios. By Wednesday, the protest images dominated ArtStation’s trending page. The artists seek to criticize the presence of AI-generated work on ArtStation and to potentially disrupt future AI models trained using artwork found on the site.
Early rumblings of the protest began on December 5 when Bulgarian artist Alexander Nanitchkov tweeted, “Current AI ‘art’ is created on the backs of hundreds of thousands of artists and photographers who made billions of images and spend time, love and dedication to have their work soullessly stolen and used by selfish people for profit without the slightest concept of ethics.”
Nanitchkov also posted a stark logo featuring the letters “AI” in white uppercase behind the circular strike-through symbol. Below, a caption reads “NO TO AI GENERATED IMAGES.” This logo soon spread on ArtStation and became the basis of many protest images currently on the site.
On December 9, criticism of AI art on ArtStation intensified when character artist Dan Eder tweeted, “Seeing AI art being featured on the main page of Artstation saddens me. I love playing with MJ [Midjourney] as much as anyone else, but putting something that was generated using a prompt alongside artwork that took hundreds of hours and years of experience to make is beyond disrespectful.”
Four days later, a widely shared tweet from Zekuga Art amplified the protest on Twitter, bringing broader awareness to the movement. As of press time on Wednesday, a search for “No AI Art” on ArtStation returned 2,099 results, and “no to AI generated images” returned 2,111. Each result represents a separate artist account.
By participating in the protest, some artists hope to disrupt future Stable Diffusion training runs, a goal that inspired several joke tweets showing garbled AI-generated results, which some people took seriously. In reality, whatever ArtStation artwork Stable Diffusion currently draws upon was baked into the model long ago, so the protest will have no immediate effect on images generated with AI models already in use.
Later on Wednesday, ArtStation’s management responded to the protest with a FAQ called “Use of AI Software on ArtStation.” The FAQ states that AI-generated artwork on the site will not be banned and that the site plans to add tags “enabling artists to choose to explicitly allow or disallow the use of their art for (1) training non-commercial AI research, and (2) training commercial AI.”
The relationship between ArtStation and AI image synthesis dates back to Stable Diffusion's beta test, held on its official Discord server during the summer of 2022. Stable Diffusion is a popular open-source image-synthesis model that creates novel images from text descriptions called prompts.
Soon after the Discord opened, people using Stable Diffusion discovered that adding “trending on ArtStation” to a prompt would almost magically add a distinctive digital art style to any image it generated. That's because the creators of Stable Diffusion's training dataset—the images that “taught” Stable Diffusion how to create images—included publicly accessible artwork scraped from the ArtStation website. (The scraping took place without artists' permission, which is another key element of the debate over AI-generated artwork.)
Like “Greg Rutkowski,” the prompt text “trending on ArtStation” became an easy way to get high-quality results from almost any prompt, and the idea spread quickly among users of Stable Diffusion until it became something of a trope in the image-synthesis community.
In the long term, the popularity of “trending on ArtStation” in Stable Diffusion prompts will likely become a historical curiosity. Recent releases of Stable Diffusion 2.0 and 2.1 switched to a new text encoder, which means “trending on ArtStation” no longer works as a prompt booster—but the underlying artwork from ArtStation was likely still included in the Stable Diffusion 2.x training dataset.
Text parsing changes aside, there’s still the open question of seeking consent when including an artist’s work in an AI training dataset.
On Wednesday, as the ArtStation protest reached a fever pitch, Stability AI and artist advocacy group Spawning announced that artists will be able to opt out of training for the upcoming Stable Diffusion 3.0 release by registering through the “Have I Been Trained?” website. Judging by the recent controversy on DeviantArt, though, some artists might argue that not being included (and having to manually opt in) should be the default state.