this post was submitted on 03 Nov 2023
170 points (91.7% liked)

Technology

[–] pixxelkick@lemmy.world 20 points 11 months ago (4 children)

I'm curious to see what sort of recommended minimum specs there will be for these features. It's my understanding that these sorts of models require a non-negligible amount of horsepower to run in a timely manner.

At the moment I'm running Nextcloud on some Raspberry Pis, and my gut tells me I might need a bit more oomph than that to handle this sort of real-time AI prompting >_>;

[–] Dave@lemmy.nz 13 points 11 months ago (1 children)

The blog post states:

We build the AI Assistant using a flexible, solution-independent approach which gives you a choice between multiple large language models (LLM) and services. It can be fully hosted within your instance, processing all requests in-house, or powered by an external service.

So it sounds like you pick what works for you. I'd guess on a raspberry pi, on board processing would be both slow and poor quality, but I'll probably give it a go anyway.

[–] pixxelkick@lemmy.world 2 points 11 months ago (2 children)

Yeah, sorry, I was specifically referring to the on-prem LLM if that wasn't clear, and how much juice running that thing takes.

[–] Dave@lemmy.nz 4 points 11 months ago

Some of the other Nextcloud features (like the chat stuff) aren't suitable for a Raspberry Pi, and I expect this will be the same. It's released though, right? Might have to have a play.

[–] EatYouWell@lemmy.world 2 points 11 months ago

You'd be surprised at how little computing power it can take, depending on the LLM.

[–] BrownianMotion@lemmy.world 9 points 11 months ago (1 children)

The AI that Nextcloud is offering uses OpenAI: sign up, get an API key, and add it. Your AI requests go to the cloud. (And I couldn't get it to work: constant "too many requests" errors or a straight "failed".)

The other option is the "local llm" addon: you download a cut-down LLM like Llama 2 or Falcon and it runs locally. I did get those all installed, but it didn't work for general prompts.
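As an aside, a locally hosted model like the ones mentioned is typically reached over an OpenAI-compatible HTTP API (llama.cpp's server and LocalAI both expose one). A minimal sketch of what such a request looks like, assuming a hypothetical local endpoint and model name (neither is anything Nextcloud ships):

```python
import json
from urllib import request

# Assumed local endpoint; llama.cpp's server and LocalAI both expose an
# OpenAI-compatible /v1/chat/completions route, but host/port will vary.
ENDPOINT = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt, model="llama-2-7b-chat"):
    """Build the JSON body for an OpenAI-compatible chat completion call."""
    return {
        "model": model,  # hypothetical model name for illustration
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local_llm(prompt):
    """POST the prompt to the local server and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = request.Request(
        ENDPOINT, data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Every prompt becomes a round trip like this, which is why the model's inference speed on the host dominates how usable the feature feels.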

Nextcloud will probably fix things over time, and the developer who made the local llm plugin will too, but right now this isn't very useful to self-hosters.

[–] TheDarkKnight@lemmy.world 2 points 11 months ago (1 children)

Llama's getting pretty damn good; check out phind.com if you haven't yet... it's programming better than GPT-4 supposedly!

[–] BrownianMotion@lemmy.world 2 points 11 months ago

I just asked it to write an assembly program for the Intel 8008 microprocessor, and it just knocked it out! That's not bad for a chip that was released in 1972!

[–] PeachMan@lemmy.world 8 points 11 months ago (2 children)

Well, Nextcloud runs like shit on a Pi WITHOUT having to do AI stuff, so.....

[–] praise_idleness@sh.itjust.works 11 points 11 months ago* (last edited 11 months ago) (1 children)

People should stop suggesting Nextcloud for the Raspberry Pi. I really love Nextcloud, but it sometimes struggles even on a somewhat decent machine.

[–] neshura@bookwormstory.social 0 points 11 months ago (1 children)

I love Nextcloud but it's just oh so painfully slow at times

[–] AtmaJnana@lemmy.world 0 points 11 months ago (1 children)

That's likely a problem with your configuration. Mine was slow too until I set up Redis.

[–] neshura@bookwormstory.social 0 points 11 months ago

Great that you don't have any problems with it; I do at times. It's not a problem with the config either (at least not the Nextcloud config), because I set it up using the Nextcloud VM script, not manually. It's not slow all the time, but when it is, it feels all the slower for it.

[–] redcalcium@lemmy.institute 2 points 11 months ago (1 children)

Nextcloud struggles on devices with low CPU performance and slow storage, and a Pi checks both of those boxes. You might improve performance a bit by running Nextcloud from an external SSD, but there's no fixing the Pi's low CPU performance.

[–] PeachMan@lemmy.world 1 points 11 months ago (1 children)

I've tried running Nextcloud from a system with a SATA SSD and a Core i7 using WSL... and it still ran like shit.

[–] AtmaJnana@lemmy.world 1 points 11 months ago

Not using Redis? Mine ran like shit, and I almost gave up until I set up file locking and caching.
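For reference, the caching and file-locking setup being described lives in Nextcloud's `config/config.php`. A sketch assuming a default local Redis instance (use a socket path or remote host if yours differs):

```php
// config/config.php — local cache via APCu, Redis for the
// distributed cache and transactional file locking.
'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
  'host' => 'localhost',  // assumed default; adjust for your setup
  'port' => 6379,
],
```

Without a memcache configured, Nextcloud hits the database for state it could otherwise cache, which is a common source of the sluggishness described above.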

[–] lupec@lemm.ee 1 points 11 months ago* (last edited 11 months ago)

Yeah, I'm wondering the same, and I also figure the requirements will be pretty significant. Still, I'm pretty happy to see things like this and Home Assistant's recent work on local voice assistants.