this post was submitted on 17 Nov 2024
78 points (100.0% liked)

Technology


A tech news sub for communists

founded 2 years ago

This is legit.

This bubble can't pop soon enough.

[–] amemorablename@lemmygrad.ml 1 points 12 hours ago

If there has to be AI, it has to be open source AI!

Not to sound like an ad, but this is where I appreciate NovelAI as a service. Even though it's not open source and is a paid service, they have a good track record for letting adults use a model like an adult. They don't have investors breathing down their necks and they made it encrypted from the start, so you can do whatever you want with text gen and not worry about it being read by some programmer who's using it to train a model or whatever.

As you can imagine, this puts them behind the big corps taking ungodly amounts of investor funding, but their latest model is pretty good. Not as "smart" as the best frontier models and mainly storytelling-focused, but pretty good.

So in other words, given the capitalist model of things and how expensive AI is to host and train, they're one of the closest things I've seen to the spirit of what open source AI could do for people, without going quite that far.

Saying "thanks, now let's look at…" etc. helps too. In one study they apparently found that telling it to take a deep breath before answering produced slightly more accurate answers.

Reminds me of how, with one model, saying "please" as part of a request would give slightly better results.
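Tweaks like these ("take a deep breath", "please") boil down to prepending a framing phrase to the prompt before it goes to the model. A minimal sketch of comparing such variants (the helper function and the phrases chosen are just illustrative, not any real service's API):

```python
# Sketch: build prompt variants that differ only in a framing preamble,
# so the same question can be sent to the same model and compared.
# The helper is hypothetical, not from any real library.

def with_preamble(question: str, preamble: str = "") -> str:
    """Prepend an optional framing phrase to a question."""
    if preamble:
        return f"{preamble} {question}"
    return question

variants = [
    with_preamble("What is 17 * 24?"),                         # bare prompt
    with_preamble("What is 17 * 24?", "Take a deep breath."),  # framing tweak
    with_preamble("What is 17 * 24?", "Please answer:"),       # courtesy tweak
]
# Each variant would be sent to the same model and the answers compared;
# the studies mentioned above did this at scale across many questions.
```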

I use it for bug-solving and coding because it relies on an existing corpus of documentation, so it's generally reliable and pretty good at that, but I'm starting to hate having to write at length to describe exactly what I want it to do. It should be able to infer my intent; I think that's something an LLM could do innately.

I won't ramble on too much on this topic, but I'm sure I could go on at length about this point alone. It's a fascinating thing to me, finding that sweet spot where an AI is designed like an extra limb for a person (metaphorically speaking, not actual cybernetics). That's where I think it's most powerful, as opposed to implementations where we're trusting that what it says and does is solid on its own. Telling the model in natural language what you want and having it try to give it to you is only one way to interface, and there are probably better ones. With storytelling-focused AI, for example, you might use outlining and similar tools to indirectly tell the AI what you want.

I did get some interesting answers when I primed it with "you are a marxist who has read the entirety of the Marxists Internet Archive". You then have to exercise some human discretion when reading the output, but it has allowed me to consider topics differently at times. Of course, there's also always the hallucination phenomenon, where you second-guess everything it tells you anyway because there's no way to check whether it's actually true.
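Priming like this is usually done as a system message ahead of the actual question, using the role/content message shape common to chat-style model APIs. A sketch, assuming that common shape (the function name and the example question are illustrative; the actual client call varies by service):

```python
# Sketch: prime a chat model with a persona before asking a question.
# The {"role": ..., "content": ...} dicts follow the widely used
# chat-message format; the helper itself is hypothetical.

def primed_messages(persona: str, question: str) -> list:
    """Build a message list that sets a persona via a system message,
    then asks the real question as the user turn."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ]

msgs = primed_messages(
    "You are a marxist who has read the entirety of "
    "the Marxists Internet Archive.",
    "How would you analyze gig-economy labor?",  # illustrative question
)
# msgs would be passed to whatever chat-completion endpoint you use;
# the output still needs human discretion, since hallucination remains.
```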

That's interesting. I experimented with a NovelAI model, trying to set up its role as a sort of marxist therapist to avoid a more individualist-feeling back and forth. I'm not sure how much difference it actually made, but I think it was similar to what you describe, in that it has helped me consider things in ways I hadn't thought of. And yeah, hallucination is a very real part of it. Occasionally an LLM tells me something that I look up and it turns out to be real and I just hadn't heard of it, but there are also times where I take what it says with a grain of salt, as something to consider rather than something grounded.