This content originally appeared on DEV Community and was authored by Mike Young
It’s hard for historians to pinpoint exactly when big stuff happens. People argue over when World War II really started, or if it was even separate from World War I or just a long continuation.
But this time, we can save historians a lot of spilled ink. It’s easy to see that something’s different because of what happened this week: Flux was released.
Flux is a big deal because it can generate images from text (like a ton of other models) but, importantly, it can also get the text in those images to look right on the first try. It also gets fingers and hands correct. We’re basically looking at a “solved problem” for generating realistic AI images. There are no longer any clear giveaways to a human observer scrolling on a feed.
Nothing in this picture is real, but you could easily slip it in as a Reddit verification post and skate past a mod.
The text part is a bigger deal than you might think. Text was a dead giveaway for fakes before, and in real life, text is often used to validate the authenticity of images. OSINT people use it to find real-world locations, Reddit mods use it for user verification, etc. Plus it occurs naturally in backgrounds and on people’s clothes.
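To make that concrete, here’s roughly what prompting Flux for an image with legible text looks like through Hugging Face’s diffusers library. This is a minimal sketch; the model ID and parameters follow the publicly released FLUX.1-schnell example and should be treated as assumptions rather than a tested recipe.

```python
# Minimal sketch: generating an image that contains legible text with Flux
# via diffusers. Assumes the openly released FLUX.1-schnell checkpoint.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # helps fit the model on consumer GPUs

prompt = (
    'A photo of a person holding a handwritten cardboard sign that reads '
    '"HELLO REDDIT", natural lighting, slightly crumpled cardboard'
)
image = pipe(
    prompt,
    guidance_scale=0.0,        # the distilled schnell variant runs without guidance
    num_inference_steps=4,     # schnell is built for very few denoising steps
    max_sequence_length=256,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux_sign.png")
```

That’s the whole pipeline: a consumer GPU, a handful of denoising steps, and a sign with readable lettering instead of the melted scribbles older models produced.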
If there’s a spectrum of believability, and an image of a smooth-faced model with no visible hands falls closer to “hm, that might be fake” than one with realistic text, backgrounds, and so on, then we’ve made a big jump towards the real side. I don’t think these images are undetectable as fakes; a computer can probably still spot the telltale signs of an AI-generated pic. What matters is that we’ve tipped past the point on the spectrum where a reasonable person could optically verify whether an image is real without investing a ton of time into where’s-waldo-ing for flaws.
Is it worth your time to figure out if this pic is fake or real? It’s fake, btw.
Couple this with some of the image-to-video, lip-syncing of audio, etc. capabilities, and deepfakes are now no longer state of the art but instead the new industry standard.
Projected cost curves mean this stuff is getting exponentially cheaper and easier. Text from bots on Twitter was one thing, but video and images were often used to verify authenticity, and that won’t really be possible anymore.
We’ll have to use new methods to determine what’s real and what isn’t.
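What might those new methods look like? One direction is provenance: signing media at capture time and checking the signature downstream, which is the idea behind efforts like C2PA content credentials. The sketch below is purely illustrative; the file name, key handling, and workflow are assumptions, not how any particular camera or platform actually does it.

```python
# Illustrative sketch of provenance-style verification: a trusted device signs
# the image bytes when the photo is taken, and anyone can later verify that
# the file hasn't been altered since. Names and workflow are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In practice the private key would live inside the camera or capture app.
signing_key = ed25519.Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

with open("photo.jpg", "rb") as f:   # hypothetical file
    image_bytes = f.read()

signature = signing_key.sign(image_bytes)  # shipped alongside the image

# Downstream, a platform or viewer holding the public key checks integrity.
try:
    verify_key.verify(signature, image_bytes)
    print("Signature valid: bytes match what the trusted device signed.")
except InvalidSignature:
    print("Signature invalid: the image was modified or never signed.")
```

A signature only proves the bytes haven’t changed since a key holder signed them, not that the scene itself was real, so provenance is a complement to detection rather than a silver bullet.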
From Deedy
The volume of content that can be realistically faked on a limited budget is now unlimited. Think about it — before, you needed a studio, actors, and a decent budget to create convincing fake content. Now? A laptop and some prompts will do the trick. This democratization of fake content creation is going to flood the internet with indistinguishable un-reality that can convincingly claim to be real.
A hot take: I think you’ll be falling behind as a brand if you DON’T do this stuff because your competitors will be able to. The collective action problem will FORCE people to produce fake content in bulk. It’s not just about staying competitive. It’s about survival in the digital space. Imagine your competitor pumping out hyper-realistic product demos, customer testimonials, and ads at a fraction of the cost and time it takes you to produce real ones. You either jump on the AI bandwagon or get left in the dust.
This will seriously screw up social media platforms. They’ll have to transition; we won’t be able to go back to the way they are today. I’ll even go further and posit that the social media you’re familiar with is now “legacy,” and we might actually look back on it nostalgically in a few years. Think about how platforms like Instagram and TikTok rely on “authentic” content. What happens when every post could be AI-generated? The whole concept of influencer marketing could implode (maybe that’s not a bad thing?). Verification systems will need a complete overhaul. The fabric of social interaction online is going to change; in fact, it’s changing under our feet right now!
It’s sort of like when Facebook went from walls you could write on to an algorithm-driven global feed of junk from people you don’t follow, except this shift is going to be way more drastic. We’re not just changing how we consume content; we’re changing what content even means. The line between reality and fiction online is about to become so blurred it might as well not exist.
I want to emphasize: this is happening now. I think it started a few days ago and will take a few weeks to pick up steam, but it’s rolling. We’re already down the rabbit hole, and there’s no climbing back out. The internet as we know it just changed forever, and we’re all along for the ride whether we like it or not.
The normies don’t know about this yet, but you’ve been warned.
Tell your friends :)
By the way… if you want to find big AI developments like this from the thousands of models and papers published each day, sign up now. Or follow me on Twitter.