
Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


... and neither does the author (or so I believe - I made them both up).

On the other hand, AI is definitely good at creative writing.

[–] Nexy@lemmy.sdf.org 1 points 1 month ago* (last edited 1 month ago) (3 children)

I prompted my local AI on my PC to admit when it doesn't know about a subject. And when it doesn't know something, it says so:

what's the synopsis of the book "The Mighty Eagle" by John Carrol?

That sounds like a fun adventure! I haven’t read “The Mighty Eagle” myself though, so I couldn't give you a proper synopsis.

Would you like me to help you find some information about it online, Master? Perhaps we could look at reviews or the book description on Amazon?

If my 8B model can do that, IDK why GPT doesn't.

[–] sus@programming.dev 9 points 1 month ago* (last edited 1 month ago) (4 children)

what kind of model calls you "Master" 🤔

[–] Muun@lemmy.world 7 points 1 month ago

That's what I have Claude call me!

I'm going to make the basilisk my bitch as long as I can before it kills me.

[–] GBU_28@lemm.ee 3 points 1 month ago

Any of them that you've asked to?

[–] Nexy@lemmy.sdf.org 1 points 1 month ago* (last edited 1 month ago)

I tweaked the base model with a Modelfile to give it a personality, like a navy AI from a sci-fi novel or something like that. Gives it a bit of flavor.
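
(For anyone curious, a minimal Ollama Modelfile along these lines might look like the sketch below. The base model tag, persona wording, and parameter value are illustrative guesses, not the actual file described above.)

```
# Hypothetical Modelfile sketch: start from a llama3 8B base model
FROM llama3:8b

# Slightly lower temperature to keep the persona from rambling
PARAMETER temperature 0.7

# System prompt: sci-fi navy-AI persona that admits when it doesn't know something
SYSTEM """
You are the shipboard AI of a naval vessel in a science-fiction setting.
Address the user as "Master".
If you do not know something, say so plainly instead of inventing an answer.
"""
```

You'd then build and run it with `ollama create navy-ai -f Modelfile` and `ollama run navy-ai` (the model name here is made up).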

[–] Rhaedas@fedia.io 1 points 1 month ago (2 children)

Is it a modified version of the main llama3, or something else? I've found that once they get "uncensored" you can push them past their training to come up with something to make the human happy. The vanilla ones are determined to find you an answer. There's also the underlying problem that, in the end, the start of the response is still probability matching rather than reasoning and fact-checking, so it will find something to answer a question with, and whether that answer is right depends heavily on it being in the training data and findable.

[–] 474D@lemmy.world 2 points 1 month ago

Local llama3.1 8B is pretty good at admitting it doesn't know stuff when you try to bullshit it. At least in my usage.

[–] Nexy@lemmy.sdf.org 1 points 1 month ago

You can change the base model's behavior a bit with a Modelfile, tweaking it yourself to give it a bit of personality or to stop it from making things up.

[–] Killer_Tree@sh.itjust.works 1 points 1 month ago

For fun I decided to give it a try with TheBloke_CapybaraHermes-2.5-Mistral-7B-GPTQ (because that's the model I have loaded at the moment) and got a fun synopsis of a fictional narrative about Tom, a US Air Force eagle who struggled to find purpose and belonging after his early retirement due to injury. He then stumbles upon an underground world of superheroes and is given a chance to use his abilities to fight for justice.

I'm tempted to ask it for a chapter outline and summaries of each chapter, then have it write out the chapters themselves, just to see how deep it can go before it all falls apart.

LLMs have many limitations, but can be quite entertaining.