Following up on an earlier post (https://lnkd.in/g5pcHDQX) regarding the physical inconsistency of reflections in AI-generated images, here I examine the veridicality of cast shadows.

In any real-world scene, all lines connecting points on a cast shadow to their corresponding points on the object should intersect at a single point; this point is the projection of the light source into the image (see https://lnkd.in/gggUH3rx). Consider an outdoor scene like the one shown below, in which the illuminating light source is the sun. In the 3-D scene, the lines connecting cast shadows to objects are effectively parallel (because the sun is so distant). When this scene is imaged (assuming no lens distortion), these lines converge to a single point (a vanishing point of sorts). This constraint holds regardless of the geometry of the surface onto which the shadows are cast.

As can be seen in the AI-generated image below, these shadow constraints are locally consistent for the central purple box (all three lines intersect at one point), but the shadows for the surrounding boxes are physically inconsistent with this part of the scene. This problem has persisted from the first version of DALL-E (see https://lnkd.in/gjzevDn9) through this image generated by Midjourney 6.

In collaboration with James O'Brien, Eric Kee developed beautiful forensic techniques for analyzing shadows and lighting in images, and I'm delighted to see that a decade later these tools remain quite effective, even in the age of generative AI.
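Below is a minimal sketch of this single-intersection constraint, not the actual forensic tool developed by Kee and O'Brien: given hand-annotated (object point, shadow point) pairs in pixel coordinates, it forms the homogeneous line through each pair and uses the smallest singular value of the stacked line matrix to test whether all lines pass through a common point. The point pairs in the demo are made up purely for illustration.

```python
import numpy as np

def shadow_consistency(pairs):
    """pairs: list of ((x_obj, y_obj), (x_shadow, y_shadow)) in pixel coordinates.
    Returns (residual, light_point): a residual near zero means all object-to-shadow
    lines meet at one point, the projected light source (possibly at infinity)."""
    lines = []
    for (xo, yo), (xs, ys) in pairs:
        l = np.cross([xo, yo, 1.0], [xs, ys, 1.0])   # homogeneous line through both points
        lines.append(l / np.linalg.norm(l))
    L = np.vstack(lines)
    # The common intersection x (homogeneous) minimizes ||L @ x|| with ||x|| = 1:
    # take the right singular vector associated with the smallest singular value.
    _, s, vt = np.linalg.svd(L)
    return s[-1], vt[-1]

# Illustrative (made-up) annotations: three boxes lit from a point whose image
# projection is roughly (400, -800), i.e. above the top edge of the frame.
consistent = [((100, 300), (40, 520)),
              ((250, 320), (220, 544)),
              ((380, 310), (376, 532))]
residual, light = shadow_consistency(consistent)
print(residual)            # ~0: these shadows are mutually consistent
print(light / light[2])    # estimated image of the light source, ~(400, -800)
```

A large residual for any subset of annotations, as with the surrounding boxes in the image above, flags a physically inconsistent scene; for sunlit scenes the recovered point simply lands far outside the frame or effectively at infinity.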
Understanding the Real vs AI-Generated Images Discussion
Summary
The discussion around "real vs. AI-generated images" highlights the growing difficulty of distinguishing photographs taken with a camera from images created by generative AI. The issue is fueled by visual inconsistencies in AI images, the lack of universal watermarking standards, and the potential for misuse in areas like news media and elections.
- Study visual inconsistencies: Pay attention to details like shadows, reflections, and proportions, as these can often reveal whether an image is AI-generated or real.
- Advocate for watermarks: Support efforts for mandatory digital watermarks on AI-generated images to promote transparency and prevent misinformation.
- Stay cautious online: Question the authenticity of viral images and rely on trusted sources, especially during critical events or trending news.
-
I did a mini LinkedIn experiment (https://lnkd.in/gc8gTUmr) and asked people to guess which design I created versus which designs were AI-generated on Ideogram.

📌 TL;DR: 88 people responded and 49% guessed right. Several people (26%) believed that I created option 6. I hypothesize that this is because the scripted typeface feels more human and organic.

Many participants described AI-generated images as having a blurry look, wonkiness, and visual inconsistencies. I agree! AI-generated images have a distinct style (I haven't yet landed on a term that captures the tech-faux vibe). I think this will change over time, and what we can create will become more diverse.

Those who picked an AI-generated design as best cited:
✤ Sophisticated
✤ Metaphorical (referencing Option 3 specifically)
✤ Fancy (drop shadow, 3D, etc.)
✤ Unique

Those who picked my design as best cited:
✤ Elegant/tasteful
✤ Follows typography rules
✤ Conceptual/intentional
✤ Clean

Roughly a third of participants voted for the same option as both human-made and best, regardless of which option they chose. This was interesting to me! When it comes to AI-generated images or art, is something perceived to be human-made also perceived to be better (at least for now)?

Thank you to all who participated! 💫

#midjourney #ideogram #ai #artificialintelligence #generativeai #generativeart
-
Katy Perry's mom just gave us a window into the future, and it's pretty scary.

During Monday's Met Gala, a photo of Perry in an elaborate dress drew over 300,000 likes on social media and prompted kudos from her mom in a text message the singer shared online. Except Perry wasn't even at the event: the photos were AI-generated.

It felt like worlds colliding watching this unfold, since our fashion ERP software is used by many of the designers behind the dresses at the Met Gala. And it was a chance for the world at large to see what nerds like me have been diving into for the last year.

1. AI isn't going to be really good someday; it already is. The photo of Perry has none of the cursed AI trademarks you might have seen in the past, like hands with six fingers, or the obvious editing signs in the infamous photo the Royal Family sent out recently. The fact that so many people shared it and Perry's own mother fell for it is proof of how real it looks. Experts who ran the photo through the most popular AI-detector tools found it came back as "likely human." I work with advanced AI tools every day and my antenna is way up for fake images, and what I saw didn't strike me as fake.

2. We really do not have any system in place to prevent people from getting tricked by it. Social media sites flag AI-generated media one by one, and only after the horse is out of the barn and the photo has gone viral. AI photos of Rihanna and Lady Gaga (who also weren't at the Met Gala) made the rounds on the night of the event as well, and were only belatedly flagged, after racking up a combined 100,000 likes on Twitter. In theory, images generated with a major AI company's tools would carry their own unique watermark, and social media sites could use code to automatically flag anything fake. But there are no laws requiring this. And because so many AI tools are open source, bad actors could find ways to circumvent these watermarks by using their own tools. This will inevitably lead to a cat-and-mouse game between companies and bad actors, a process that sometimes ends (as with jailbreaking iPhones) and sometimes goes on forever (as with companies and hackers).

And while fake dresses aren't the end of the world, the technology is only going to be used more as it continues to improve at a rapid pace. My biggest fear is how it will be used around crucial events like the upcoming election. It's a mistake to think we are in the early stages of AI trickery. We're already there, and the biggest tests are yet to come.
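To make the watermark point concrete, here is a toy, numpy-only sketch of the kind of invisible "spread-spectrum" mark such a cooperative scheme could use; this is not how any particular AI company actually watermarks its images, and the function names and parameters are illustrative. A generator adds a faint key-derived noise pattern, and a platform holding the same key correlates against that pattern to flag the image.

```python
import numpy as np

def embed_mark(pixels, key, strength=2.0):
    """Add a faint pseudorandom pattern derived from `key` (invisible at low strength)."""
    pattern = np.random.default_rng(key).standard_normal(pixels.shape)
    return pixels + strength * pattern

def mark_score(pixels, key):
    """Correlate the image against the key's pattern: a score near `strength`
    suggests the mark is present, a score near zero suggests it is not."""
    pattern = np.random.default_rng(key).standard_normal(pixels.shape)
    return float(np.mean((pixels - pixels.mean()) * pattern))

rng = np.random.default_rng(7)
clean = rng.standard_normal((256, 256)) * 30 + 128   # stand-in for an unmarked image
marked = embed_mark(clean, key=1234)
print(mark_score(clean, 1234), mark_score(marked, 1234))   # ~0 vs ~2
```

The weakness the post describes falls straight out of the sketch: an open-source generator that never calls embed_mark produces images that score near zero, and heavy re-compression or cropping erodes the correlation, so detection only works as long as everyone cooperates.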
-
Exclusive photos from Elon's visit to Uzbekistan 📸

Or are they? 🤔 Did he go to Uzbekistan? 🤔 Are these even real? 🤔

No. These aren't exclusive, Elon Musk didn't visit Uzbekistan recently, and these photos certainly aren't real. They were made with generative AI: Midjourney V6! 😱

With the right prompting and some patience, there is now only a marginal difference between a real photo and an AI-generated one. Check out the motion blur around Elon's hands, for example. That is exactly what you would expect in a real photo taken in low light!

This has serious implications for how we view photos.
▶ Previously, "fake" images of this quality would have required Photoshop and years of experience.
▶ Now, anyone can access this technology for $30 a month.
▶ Pretty soon, we're not going to know what to believe anymore.
▶ Photos as a reliable form of evidence? Good luck.

Welcome to the post-AI era, everyone.

Credit goes to KhikmatPulatov on Reddit for creating these!
-----
🔔 Follow for more AI content that spans best practices to business value.
#midjourneyai #aiart #elonmusk
-
AI-generated content and images can sometimes be challenging to discern from real content and images. Last fall, President Joe Biden issued an executive order requiring a new set of government-led standards for watermarking AI-generated content. Like watermarks on photographs or paper money, digital watermarks help users distinguish a real object from a fake one and determine who owns it.

Just this week, OpenAI announced that it will automatically add watermarks to images generated by its DALL-E 3 image generator, supporting standards from the Coalition for Content Provenance and Authenticity (C2PA). C2PA, which includes companies like Adobe and Microsoft, has been advocating for a "Content Credentials" symbol to be added to AI-generated images to show that they were artificially created rather than directly created by humans. OpenAI will be adding the "Content Credentials" symbol to its DALL-E 3-generated images.

But watermarks are not infallible. They can be removed either intentionally or accidentally, especially when an image is uploaded to social media (many social media companies strip image watermarks on upload to their platforms).

As many fellow Marketing leaders and their teams start to use AI-generated images in their marketing materials, this presents a quandary: How do we note which images are AI-generated and which are not? Does a watermark help or not? What are your thoughts?

(By the way, the chili pepper image accompanying this post is a purchased Stockphotos stock image that was AI-generated but not watermarked.)
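As a deliberately rough illustration of the quandary, here is a sketch of checking whether a JPEG carries Content Credentials at all, based on the C2PA convention of storing manifests as JUMBF boxes inside APP11 segments. It only tests for the presence of that metadata; it does not validate the cryptographic signature (a real check would use the official C2PA / Content Credentials tooling), and it shows why a simple re-encode, like the ones social platforms perform on upload, makes the credential vanish.

```python
import struct

def has_content_credentials(path):
    """Rough presence check for a C2PA manifest in a JPEG: walk the marker
    segments and look for APP11 (0xFFEB) payloads containing JUMBF/c2pa bytes.
    Presence only; this does not verify the manifest's signature."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":                     # not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:                          # start of scan: metadata segments are over
            break
        seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + seg_len]
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True
        i += 2 + seg_len
    return False

# Re-saving the image with an ordinary editor, or uploading it to a platform
# that strips metadata, typically drops these segments, so the same picture
# returns False afterwards even though it is still AI-generated.
```

A production workflow would read and cryptographically verify the manifest with the C2PA SDK rather than merely detect it, but even then the credential only travels as far as the file's metadata does.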