submitted 1 year ago* (last edited 1 year ago) by iuseit@iusearchlinux.fyi to c/privacy@lemmy.ml

As this technology becomes more accessible, you can only imagine the possibilities. I don't have a machine fast enough to make this worth it, but I might fuck around with it.

[-] webghost0101@lemmy.fmhy.ml 9 points 1 year ago

Seems the model used is GPT4All, but I have yet to see a good explanation of what GPT4All does that makes it trend with consumers, which leads me to believe people really are just confusing its name as being actually comparable to GPT-4.

If you check the leaderboard on Hugging Face, there are a whopping 37 open-source large language models with better-quality outputs than GPT4All.

Any good LLM interface that lets you run models locally should let you run any of them, and you should play around with multiple, because they may perform at very different speeds depending on your system. Personally I have used up to Wizard-Vicuna-13B, which is listed more than 10 spots above GPT4All and provides me a decent but dumb conversation at a reasonably slow speed.
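
For example, here's a minimal sketch of what swapping models looks like with llama-cpp-python (just an illustration; the model filename is a placeholder for whichever GGML model you've downloaded):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path -- point it at whatever GGML model you downloaded
# (Wizard-Vicuna-13B, GPT4All, etc.); the interface stays the same,
# only the file changes.
llm = Llama(model_path="./models/wizard-vicuna-13b.ggmlv3.q4_0.bin")

out = llm("Q: Name one benefit of running an LLM locally. A:",
          max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```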

The biggest (65B) models will probably be too slow for 95% of consumers and will get you good GPT-3-like performance at best.

Unless someone can tell me otherwise, I don't believe running these for actual productive goals (more than just playing around) is something I can advise. And putting the focus on just the interface, and not the model that does the work, seems a bit like a sales pitch.

[-] Catsrules@lemmy.ml 7 points 1 year ago

Oh this looks really neat. Thanks for posting.

I don't see any kind of hardware requirements? Do you happen to know what is recommended? Lots of RAM? GPU? Beefy CPU?

[-] nottheengineer@feddit.de 6 points 1 year ago

You'll want a GPU with a lot of VRAM (16GB or more) and system RAM that's about twice as big.

CPU doesn't matter since everything runs on the GPU.

It also works without a GPU (by running on the CPU), but that's very slow. In my opinion you shouldn't bother if you don't have a GPU that's up to the task.
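
If you do have the VRAM, something like this rough sketch (again llama-cpp-python; the path and layer count are placeholders you'd tune to your setup) splits the work between GPU and CPU:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/wizard-vicuna-13b.ggmlv3.q4_0.bin",  # placeholder path
    n_gpu_layers=40,  # offload as many layers as fit in VRAM; 0 = CPU only
    n_ctx=2048,       # context window; larger contexts need more memory
)
```

With n_gpu_layers=0 it still runs, just entirely on the CPU, which is the slow path I mentioned above.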

[-] fulano@lemmy.eco.br 3 points 1 year ago

Ok, I can't run it :(

[-] Catsrules@lemmy.ml 1 points 1 year ago

Hmm, maybe those Intel GPUs are worth a look.

[-] nottheengineer@feddit.de 1 points 1 year ago

Intel isn't there yet; the major frameworks only support AMD and Nvidia well enough for stuff like this.
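
If you want to check what your setup can actually see, a quick PyTorch probe (assuming you have torch installed) shows it:

```python
import torch

# True only for Nvidia CUDA (AMD ROCm builds report through the same
# torch.cuda API); Intel GPUs need a separate extension and won't show here.
print(torch.cuda.is_available(), torch.cuda.device_count())
```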

[-] CptNoobCanoe@feddit.dk 1 points 1 year ago

Cool. I’ve been wanting to play around with something like this. Thanks for the share.
