Have you ever considered what would happen if we fed AI artificially generated material instead of human-created material? Here’s what one study found on that front.
Artificial Intelligence algorithms have made great advances in producing images and text. Until now, these AI programs have been trained on human-made data only — but have you ever wondered what would happen if they were trained solely on their own output? Well, it turns out several researchers at Rice University and Stanford University wondered the same thing!
Above: Adobe Firefly Beta AI image depicting a robot typing on a keyboard – intended only for personal, not commercial, use.
As AI-generated images proliferate online, they will inevitably feed back into future AI models, which will end up trained on synthetic data they themselves produced. This would create a self-sustaining cycle of artificial intelligence.
Researchers refer to this phenomenon as an autophagous (self-consuming) loop. The more an AI feeds on its own output for training data, the more its biases and artifacts are likely to grow.
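To get an intuition for why a self-consuming loop degrades, here is a minimal toy sketch of my own (not the paper's experiment, and far simpler than a real generative model): fit a Gaussian to some data, sample the next generation's "training set" from the fit, and repeat. All names and parameters below are illustrative.

```python
# Toy "autophagous loop": repeatedly fit a Gaussian to data,
# then train the next generation on samples drawn from the fit.
import numpy as np

rng = np.random.default_rng(0)
n = 50             # training samples per generation
generations = 500

data = rng.normal(0.0, 1.0, n)   # generation 0: "real" data, N(0, 1)
stds = [data.std()]
for _ in range(generations):
    mu, sigma = data.mean(), data.std()   # "train" the model (MLE fit)
    data = rng.normal(mu, sigma, n)       # next generation sees only synthetic data
    stds.append(data.std())

print(f"std of real data: {stds[0]:.3f}")
print(f"std after {generations} synthetic-only generations: {stds[-1]:.3f}")
```

Because each fit slightly underestimates the spread and small-sample errors compound, the diversity of the data (its standard deviation) drifts toward collapse over generations — a crude analogue of the loss of diversity the researchers describe.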
Reading this reminded me of how genetic inbreeding among royal families often resulted in increased susceptibility to birth defects, DNA mutations, and reduced intelligence – all due to a lack of genetic diversity within royal bloodlines.
Diversity
Diversity turns out to be an asset: generative models trained on sufficiently diverse datasets keep their quality and diversity intact, even over multiple generations.
Above: Cows in front of an amusement park in India photographed with Nikon D70 and Fuji Velvia Realia 35mm film.
MADness
The team named this phenomenon Model Autophagy Disorder (MAD) – an acronym also intended as a nod to Mad Cow Disease. Their conclusion?
“Without enough fresh real data at each generation in an autophagous loop, future generative models will find their quality (precision) or diversity (recall) steadily diminishing.”
Self-Consuming Generative Models Go MAD
Society thrives when its diverse pool of people, ideas, concepts, approaches, DNA, and more flourish together; the same appears to be true of AI models.
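The quoted conclusion about "fresh real data" can also be illustrated with a toy sketch of my own (again, not the paper's code, and the parameters are illustrative): if each generation's training set mixes fresh real samples with the synthetic ones, the collapse stops.

```python
# Toy autophagous loop with fresh real data mixed in each generation:
# half the training set is synthetic, half is freshly drawn real data.
import numpy as np

rng = np.random.default_rng(0)
n = 50             # training samples per generation
generations = 500

data = rng.normal(0.0, 1.0, n)   # "real" data: N(0, 1)
stds = []
for _ in range(generations):
    mu, sigma = data.mean(), data.std()          # "train" the model (MLE fit)
    synthetic = rng.normal(mu, sigma, n // 2)    # half synthetic samples...
    fresh = rng.normal(0.0, 1.0, n // 2)         # ...half fresh real samples
    data = np.concatenate([synthetic, fresh])
    stds.append(data.std())

print(f"mean std over the last 100 generations: {np.mean(stds[-100:]):.3f}")
```

With the fresh real data acting as an anchor, the spread of the training data hovers near that of the true distribution instead of collapsing – echoing the paper's point that each generation needs enough real data to stay healthy.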
Above: Adobe Firefly Beta AI image of cows in field, not for commercial use.
Telltale Signs of AI-Generated Art
Many of us can recognize some telltale signs in current AI-generated images, most obviously the fingers. AI models still struggle to render the complex detail of human hands: where fingers should sit, how long they should be, and sometimes even how many belong on one hand.
AI-generated art often has an almost “fantasy” quality – even when it is meant to be photorealistic.
Other elements still present challenges, such as letters, piano keys, “dead-looking eyes”, and the intentionality of the different elements within an image.
Likewise, AI models fed an endless stream of their own images could compound artifacts like these, proliferating and even amplifying them with each generation.
Above: Adobe Firefly Beta AI image of a robot typing on a keyboard – for personal, not commercial, use.
Additional thoughts
Although my discussion so far has focused mainly on images, MAD can also involve text- or video-based models.
Note that this new study has yet to be peer-reviewed. We should wait for independent review, and for its findings to be replicated across different AI models, before reaching any definitive conclusions about its accuracy or usefulness.
Above: Adobe Firefly Beta AI image featuring scary robots in post-apocalyptic world. Not suitable for commercial use.
However, this paper should make us question how useful AI models truly are without human input. As this study suggests, they may not fare well on their own – which may provide some relief for those concerned that artificial intelligence will become sentient and turn against us like SkyNet did.
Above: Night selfie taken against a post-apocalyptic-looking scene, using real long-exposure photography. Night photography often leads viewers to assume an image has been “Photoshopped” or otherwise altered; with AI-generated images now widespread, more people assume night photos are created simply by typing a prompt rather than through the skill and time their creation actually takes.