this post was submitted on 11 Nov 2023
232 points (94.6% liked)

Asklemmy


I just listened to this AI generated audiobook and if it didn't say it was AI, I'd have thought it was human-made. It has different voices, dramatization, sound effects... The last I'd heard about this tech was a post saying Stephen Fry's voice was stolen and replicated by AI. But since then, nothing, even though it's clearly advanced incredibly fast. You'd expect more buzz for something that went from detectable as AI to indistinguishable from humans so quickly. How is it that no one is talking about AI generated audiobooks and their rapid improvement? This seems like a huge deal to me.

[โ€“] simple@lemm.ee 118 points 1 year ago (1 children)

A lot of people just aren't aware of how fast AI is moving. AI voices were pretty meh earlier this year. People working in the audiobook/voice-acting scene have been talking about this, though.

[โ€“] driving_crooner@lemmy.eco.br 40 points 1 year ago (2 children)

I recommend everyone check out the YouTube channel "Two Minute Papers", which has been making videos about AI papers for the last 10 years or so, to see the accelerated progress AI has made. Like 5 years ago those image-generating AIs looked like LSD-infused dreams, and now they look almost perfect.

[โ€“] Magrath@lemmy.ca 6 points 1 year ago (1 children)

I wish I could watch his videos but the way he talks is awful. It's like some exaggerated evolution of YouTube talk.

It's great to be alive!

[โ€“] mindbleach@sh.itjust.works 3 points 1 year ago (2 children)

I'm only shocked that video isn't better. Diffusion models work like denoising - so you'd figure all the wiggly nonsense between frames would be the first thing to filter out.

[โ€“] driving_crooner@lemmy.eco.br 3 points 1 year ago (1 children)

I give it a year, maybe two, before we get fully synthetic video that can't be easily distinguished from reality. There are already some very good AIs that complete or replace backgrounds in videos and work really well, and fully synthetic videos that look like nightmares, for now.

I expected it to be here six months ago, but its continued absence hasn't changed my estimate from "any day now, and suddenly." All of this is so weirdly democratized (and pornography-motivated) that we're seeing the cool stuff before all the scary disinformation concerns.

And the underlying mechanisms are straight-up "the missile knows where it is, because it knows where it is not." Stable Diffusion compares the noise estimate with and without a particular term, takes the difference, and then leaps outward along that vector.
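That "difference, then leap outward" step is classifier-free guidance. A minimal sketch of just that step, with random arrays standing in for real U-Net noise estimates (the function name and scale value here are illustrative, not Stable Diffusion's actual code):

```python
import numpy as np

def cfg_step(noise_uncond, noise_cond, guidance_scale=7.5):
    """Classifier-free guidance: take the difference between the
    conditional and unconditional noise estimates, then push the
    prediction outward along that vector by guidance_scale."""
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)

# Toy example: random "noise estimates" in place of a real U-Net.
rng = np.random.default_rng(0)
uncond = rng.standard_normal((4, 64, 64))
cond = rng.standard_normal((4, 64, 64))
guided = cfg_step(uncond, cond)
```

Note that a scale of 1.0 just gives back the conditional estimate; values above 1.0 are the "leap outward" that exaggerates the prompt's influence.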

[โ€“] Turun@feddit.de 3 points 1 year ago (2 children)

I expect the data size to be a problem. Stable Diffusion defaults to 512x512 px because generating an image simply requires a lot of resources, and training requires even more. Now multiply that by 30 to generate even one second of video. I think we need something that scales better.
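The rough arithmetic behind that concern (counting raw per-frame values; real latent-diffusion models work on much smaller latents, so treat this as an upper-bound illustration):

```python
# One 512x512 RGB frame versus one second of video at 30 fps,
# counted in raw pixel values.
H, W, C = 512, 512, 3
frame_values = H * W * C            # values per frame
fps = 30
second_values = frame_values * fps  # values per second of video

print(frame_values)   # 786,432 values per frame
print(second_values)  # ~23.6 million values per second
```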

I fully expect this to work decently in a few years though; no matter how hard the challenge is, AI is moving really fast.

[โ€“] mindbleach@sh.itjust.works 1 points 1 year ago* (last edited 1 year ago)

"Fisheye" generation seems obvious. Give the network a distorted view of an arbitrarily large image, where distant stuff scrunches inward toward a full-resolution point of focus. Predict only a small area, or even a single pixel. This would massively decrease the necessary network size, allowing faster training (or, more likely, deeper networks). It'd also Hamburger Helper any size dataset by training on arbitrarily many spots within each image instead of swallowing the whole elephant.
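One way to build that distorted view is a radial power-law warp: sample densely near the focus point and increasingly sparsely farther out, so a fixed-size network input covers an arbitrarily large image. A hypothetical sketch (the function and its parameters are made up to illustrate the idea, not any existing model's sampler):

```python
import numpy as np

def fisheye_grid(size, focus, max_radius, gamma=2.0):
    """Foveated sampling grid: a size x size set of image coordinates,
    dense near `focus` and sparse toward `max_radius`.
    gamma > 1 controls how hard distant stuff scrunches inward."""
    lin = np.linspace(-1.0, 1.0, size)
    gy, gx = np.meshgrid(lin, lin, indexing="ij")
    r = np.clip(np.sqrt(gx**2 + gy**2), 1e-8, None)
    # Power-law warp: small steps near the edge of the grid cover
    # large distances in the source image.
    scale = r ** (gamma - 1) * max_radius
    ys = focus[0] + gy * scale
    xs = focus[1] + gx * scale
    return ys, xs

# 64x64 network input covering a 512-pixel radius around (256, 256).
ys, xs = fisheye_grid(64, focus=(256, 256), max_radius=512)
```

Sampling the source image at `(ys, xs)` (e.g. with bilinear interpolation) would give the network its full-resolution center and low-resolution periphery.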

Even without that, video only needs a few frames at a time. You want to predict a future frame from several past frames. You want to tween a frame in the middle of past and future frames. That's... pretty much it. Time-lapse "past frames" by sampling one per second, and you can predict the next second instead of the next frame. Then the stuff between can be tweened.
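A toy sketch of that keyframe-then-tween scheme, with trivial numeric stand-ins for the learned prediction and tweening models (in reality both would be networks operating on frames):

```python
def generate_video(seconds, fps, predict_next, tween):
    """Two-pass generation: predict one keyframe per second,
    then fill the gaps by tweening between neighbouring keyframes."""
    keyframes = [predict_next([])]
    for _ in range(seconds - 1):
        keyframes.append(predict_next(keyframes))
    frames = []
    for a, b in zip(keyframes, keyframes[1:]):
        for i in range(fps):
            frames.append(tween(a, b, i / fps))
    frames.append(keyframes[-1])  # final keyframe closes the clip
    return frames

# Toy stand-ins: "frames" are just numbers, tweening is linear.
predict_next = lambda ks: (ks[-1] + 1) if ks else 0
tween = lambda a, b, t: a + (b - a) * t
frames = generate_video(3, 30, predict_next, tween)
```

The same `tween` step could then be applied recursively between the generated frames for higher frame rates.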

[โ€“] Hexarei@programming.dev 1 points 1 year ago (1 children)

Stable Diffusion can do arbitrary sizes now, as long as you have the VRAM for it, iirc.

[โ€“] Turun@feddit.de 1 points 1 year ago

Of course, but that is precisely the problem. It gets expensive really really fast.