First, the bad news: detecting images produced by artificial intelligence is really hard. As artificial intelligence models improve at a dizzying pace, telltale signs that were once seen as giveaways — twisted hands and jumbled text — are becoming increasingly rare.
Images created with popular tools like Midjourney, Stable Diffusion, DALL-E, and Gemini no longer carry obvious giveaways. In fact, AI-generated images are fooling people more and more, which creates major problems as misinformation spreads. The good news is that identifying AI-generated images is usually not impossible, but it takes more effort than it used to.
AI image detectors: proceed with caution
These tools use computer vision to examine pixel patterns and determine the likelihood that an image is AI-generated. That means AI detectors aren’t completely foolproof, but they’re a good way for the average person to gauge whether an image merits closer inspection, especially when it’s not immediately obvious.
“Unfortunately, for the human eye, studies show it’s about a fifty-fifty chance that a person gets it,” said Anatoly Kvitnitsky, CEO of the AI image detection platform AI or Not. “But for AI detection for images, due to pixel-like patterns, those still exist, even as the models continue to get better.” Kvitnitsky claims AI or Not achieves an average accuracy rate of 98%.
Other AI detectors that generally have high success rates include Hive Moderation, the SDXL Detector on Hugging Face, and Illuminarty. We tested ten AI-generated images on all of these detectors to see how they performed.
AI or Not
Unlike other AI image detectors, AI or Not gives a simple “yes” or “no” rather than a probability, and it correctly said the image below was AI-generated. With the free plan, you get 10 uploads a month. We tried it with 10 images and got an 80% success rate.
AI or Not correctly identified this image as AI-generated.
Credit: Screenshot: Mashable / AI or Not
Hive Moderation
We tried Hive Moderation’s free demo tool with more than 10 different images and got a 90% overall success rate, meaning the tool rated them as highly likely to be AI-generated. However, it failed to detect the AI qualities of an artificial image of a chipmunk army scaling a rock wall.
We’d love to believe the chipmunk army is real, but the AI detector got it wrong.
Credit: Screenshot: Mashable / Hive Moderation
SDXL Detector
The SDXL Detector on Hugging Face takes a few seconds to load, and you might get an error on the first try, but it’s completely free. Instead of a yes or no, it gives a probability percentage. It rated 70% of the AI-generated images as highly likely to be generative AI.
The SDXL Detector correctly identified a tricky Grok-2-generated image of Barack Obama in a public bathroom.
Credit: Screenshot: Mashable / SDXL Detector
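If you’d rather not upload a sensitive image to a web demo, detectors like this one can also be run locally. Here’s a minimal sketch using the Hugging Face transformers library; it assumes the Organika/sdxl-detector checkpoint (which appears to back the SDXL Detector, though any image-classification detector model would slot in the same way) and a placeholder file name.

```python
# Minimal sketch: run an open-source AI-image detector locally.
# Assumes the Organika/sdxl-detector checkpoint on Hugging Face;
# "suspicious_image.jpg" is a placeholder for your own file.
from transformers import pipeline

detector = pipeline("image-classification", model="Organika/sdxl-detector")

# The pipeline returns labels (e.g. "artificial" vs. "human")
# with probability scores, much like the web demo's percentage.
for result in detector("suspicious_image.jpg"):
    print(f"{result['label']}: {result['score']:.1%}")
```

As with the web-based tools, treat the output as a signal to investigate further, not as a verdict.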
Illuminarty
Illuminarty has a free plan that provides basic AI image detection. Of the 10 AI-generated images we uploaded, it classified only 50% as AI, assigning the rest a very low probability. To the chagrin of rodent biologists, it gave the infamous rat penis image a low probability of being AI-generated.
Well, that looks like a layup.
Credit: Screenshot: Mashable / Illuminarty
As you can see, AI detectors are mostly pretty good, but they’re not foolproof and shouldn’t be used as the only way to authenticate an image. Sometimes they detect deceptive AI-generated images even when they look real, and other times they get it wrong with images that are clearly AI creations. This is exactly why a combined approach is best.
Other tips and tricks
The good ol’ reverse image search
Another way to detect AI-generated images is a simple reverse image search, the method recommended by Bamshad Mobasher, professor of computer science in the School of Computing and Digital Media at DePaul University in Chicago and director of its Center for Web Intelligence. Upload the image to Google Images or a reverse image search tool, and you can trace its source. If the photo appears to show a real news event, “you may be able to determine that it’s fake or that the actual event didn’t happen,” Mobasher said.
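If you find yourself checking images often, this first step is easy to script. The sketch below simply opens reverse image searches in your default browser; it assumes Google’s long-standing searchbyimage URL pattern (which currently redirects into Google Lens) and TinEye’s URL-based search, and the image address is a placeholder.

```python
# Minimal sketch: open reverse image searches for a publicly hosted image.
# The image URL is a placeholder; swap in the image you want to trace.
import webbrowser
from urllib.parse import quote

image_url = "https://example.com/suspicious_image.jpg"

# Google's search-by-image endpoint (currently redirects into Google Lens)
webbrowser.open("https://www.google.com/searchbyimage?image_url="
                + quote(image_url, safe=""))

# TinEye's URL-based reverse search
webbrowser.open("https://tineye.com/search?url=" + quote(image_url, safe=""))
```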
Google’s “About this image” tool
Google Search also has an “About this image” feature that provides contextual information, such as when the image was first indexed and where else it has appeared on the web. You can find it by clicking on the three-dots icon in the upper right corner of an image result.
Telltale signs visible to the naked eye
Speaking of which, while AI-generated images are getting scarily good, it’s still worth looking for the telltale signs. As mentioned above, you may still occasionally see an image with warped hands, hair that looks a little too perfect, or text within the image that’s garbled or nonsensical. Our colleagues at sister site PCMag suggest looking for blurry or distorted objects in the background, or subjects with flawless (and we mean poreless, flawless) skin.
At first glance, the Midjourney image below looks like it could be a Kardashian relative promoting a cookbook, pulled straight from Instagram. But look closer and you’ll see the warped sugar canister, twisted knuckles, and skin that’s a little too smooth.
At first glance, this looks like it could be real, but not everything in this picture is as it seems.
Credit: Mashable / Midjourney
“AI can be good at generating overall scenes, but the devil is in the details,” Sasha Luccioni, AI and climate lead at Hugging Face, wrote in an email to Mashable. Look for “mostly small inconsistencies: extra fingers, asymmetrical jewelry or facial features, incongruities in objects (an extra handle on a teapot).”
Mobasher, who is also a fellow at the Institute of Electrical and Electronics Engineers (IEEE), said to zoom in and look for “weird details” like stray pixels and other inconsistencies, such as subtly mismatched earrings.
“You may find part of the same image with the same focus being blurry but another part being super detailed,” Mobasher said. This is especially true in the backgrounds of images. “If you have signs with text and things like that in the backgrounds, a lot of times they end up being garbled or sometimes not even looking like an actual language,” he added.
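Any image viewer can zoom, but a short script makes this kind of close inspection repeatable. Here’s a minimal sketch using the Pillow library: it crops a region and blows it up with nearest-neighbor resampling so individual pixels stay visible. The file name and crop box are placeholders; point the box at whatever background sign, logo, or earring looks off.

```python
# Minimal sketch: crop a region of an image and enlarge it for inspection.
# Requires Pillow (pip install Pillow); file name and crop box are placeholders.
from PIL import Image

img = Image.open("suspicious_image.jpg")

# Crop box is (left, upper, right, lower); here, the top-left quarter.
region = img.crop((0, 0, img.width // 2, img.height // 2))

# Enlarge 4x with nearest-neighbor resampling so pixels aren't smoothed away.
zoomed = region.resize((region.width * 4, region.height * 4),
                       Image.Resampling.NEAREST)
zoomed.show()  # look for garbled text, stray pixels, mismatched details
```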
This image of Volkswagen vans parading on a beach was created with Google’s Imagen 3. Look closely, though, and you’ll notice that the lettering on the third bus where the Volkswagen logo should be is just garbled characters, and the fourth bus has irregular splotches.
We’re sure a VW bus parade happened at some point, but not like this.
Credit: Mashable/Google
Watch out for garbled signs and strange blobs.
Credit: Mashable/Google
It all comes down to AI literacy
None of the methods above will be all that useful if you don’t first pause and consider whether what you’re seeing is AI-generated as you consume media, especially social media. Much as media literacy became a popular concept around the misinformation-rampant 2016 election, AI literacy is the first line of defense in determining what’s real.
AI researchers Duri Long and Brian Magerko define AI literacy as “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace.”
Knowing how generative AI works and what to look for is key. “It may sound cliché, but taking the time to verify the provenance and source of the content you see on social media is a good start,” Luccioni said.
Start by asking yourself about the source of the image and the context in which it appears. Who published the image? What does the accompanying text, if any, say about it? Have other people or media outlets published the image? How does the image, or the text accompanying it, make you feel? If it seems designed to enrage or entice you, think about why.
How some organizations are dealing with AI deepfakes and misinformation
As we’ve seen, the methods by which individuals can distinguish AI images from real ones are, so far, patchy and limited. To make matters worse, the spread of illicit or harmful AI-generated images is a double whammy: the posts spread falsehoods, which then breed distrust of online media altogether. But in the wake of generative AI, a number of initiatives are springing up to bolster trust and transparency.
The Coalition for Content Provenance and Authenticity (C2PA) was founded by Adobe and Microsoft, and its members include tech companies like OpenAI and Google, as well as media companies like Reuters and the BBC. C2PA provides clickable Content Credentials for identifying the provenance of images and whether they’re AI-generated. However, it’s up to the creator to attach Content Credentials to an image.
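If an image does carry Content Credentials, you can inspect them yourself. Here’s a minimal sketch, assuming the open-source c2patool command-line utility from the Content Authenticity Initiative is installed and on your PATH; the file name is a placeholder.

```python
# Minimal sketch: check an image for C2PA Content Credentials.
# Assumes the c2patool CLI is installed; "photo.jpg" is a placeholder.
import subprocess

result = subprocess.run(["c2patool", "photo.jpg"],
                        capture_output=True, text=True)

if result.returncode == 0:
    # Prints the manifest: who created the image, with what tool,
    # and whether an AI generator was involved.
    print(result.stdout)
else:
    print("No Content Credentials found (or the file type is unsupported).")
```

Keep the caveat above in mind: the absence of Content Credentials doesn’t prove an image is authentic, since attaching them is up to the creator.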
Elsewhere, Stanford University’s Starling Lab is hard at work verifying authentic images. The lab verifies “sensitive digital records, such as the documentation of human rights violations, war crimes, and testimony of genocide,” and securely stores verified digital images in decentralized networks so they can’t be tampered with. The lab’s work isn’t user-facing, but its library of projects is a good resource for anyone looking to authenticate images of, say, the war in Ukraine or the presidential transition from Donald Trump to Joe Biden.
Experts often talk about AI images in the context of hoaxes and misinformation, but AI imagery isn’t always meant to deceive. Sometimes AI images are just jokes or memes removed from their original context, or they’re lazy advertising. Or maybe they’re simply a form of creative expression with an intriguing new technology. But for better or worse, AI images are now a fact of life, and it’s up to you to detect them.
We’ve taken some liberties with Smokey Bear here, but we think he’d understand.
Credit: Mashable/xAI