ReedyBear's Blog

Parallels between AI video and AI text - it's all fake

I was trying to explain to a friend that 'AI Overview' and other AI text responses are not trustworthy, that the AI hallucinates. They said something like "I'm sure it's not just pulling it out of its ass." It was hard to explain.

Then last night, they sent me a video of a bear jumping on a trampoline - except the whole video is AI generated. We discussed how it was unsettling, and kind of obvious on a closer look.

And it clicked. AI videos are generated the same way that AI text responses are generated. The bear video is not pulled from some archive of real videos the AI has. It is a generated approximation of what such a video could look like.

Similarly, when AI gives you text - it's not remembering or pulling from a knowledge-base. It is not understanding the topic and language. It's not regurgitating something it knows. It is generating text based on your prompt and the vast amounts of data it trained on.
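To make that "generating, not remembering" point concrete, here's a deliberately tiny sketch in Python. Real LLMs use neural networks over tokens, not word-pair counts, so this is only a toy analogy - but the principle is the same: the program continues statistical patterns from its training text. It stores no facts and looks nothing up.

```python
from collections import defaultdict, Counter

# Toy "language model": count which word follows which in the
# training text, then generate by continuing those patterns.
# It understands nothing - it only extends what it has seen.

def train(corpus):
    """Count word-to-next-word transitions in the training text."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, length=8):
    """Generate text by repeatedly picking the most frequent next word."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # no known continuation - the model has nothing to say
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the bear jumps on the trampoline and the bear jumps again"
model = train(corpus)
print(generate(model, "the"))  # plausible-looking, but purely pattern continuation
```

Run it and you get fluent-seeming output like "the bear jumps on the bear jumps on the" - grammatical-ish, confident, and meaningless. Scale that idea up by many orders of magnitude and you get text that is far more convincing, but produced by the same kind of process.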

While the generated images can be EXTREMELY convincing, sometimes perfect, and so can the generated videos, there's an inherent understanding that they're still fake: the videos and images are not real, are not of the world, but are fiction created by a computer algorithm.

But when the generated content is text, when it is language, stated confidently, it gives the illusion of being factual, of being remembered, of being reliable.

Oftentimes the text responses ARE accurate, ARE perfect. But other times, they are not. And in both cases, you're dealing with generation of text, not understanding of a topic, not human intelligence, not knowledge.

And AI can carry on convincing dialogue that makes some wonder, "Is it conscious?" But this, too, is an illusion. It is generating dialogue based on inputs, based on prompts, based on mathematical algorithms. But it looks so much like the real thing, is so convincing, that it can trick us.

This fake dialogue convinces us the same way an AI-generated image convinces us. But with images and videos, we can often find the little things (the ball not moving right, the extra fingers, etc.) that clue us in to how fake it is.

The text is much more convincing, but it's no more real, even when it's accurate.


P.S. AI overviews can undoubtedly be useful. Especially for topics that quite frankly don't matter, or that are easily validated - like if you want help troubleshooting a video game that's crashing. But this doesn't mean they should be trusted, especially for anything important, like legal briefs or fact-checking politicians.

P.P.S. The hallucinations and fake-ness are extremely apparent to me when I've used AIs to generate code. Because they DO PRODUCE CODE, but often it doesn't work. But the AI always thinks its generated code is right. Because it doesn't actually understand code, it's just generating text.

P.P.P.S. While I would not trust AI for fact checking or research, I do think AI can be useful as a starting point, especially for a topic you're brand-new to. It can say some shit, then you can go do some actual research about the things it says.

#blog #featured #tech