[-] PerogiBoi@lemmy.ca 91 points 5 months ago

Also check out LM Studio and GPT4All. Both of these let you run private ChatGPT alternatives from Hugging Face off your RAM and CPU (and can also offload to the GPU).
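
To give a feel for what this looks like, here's a minimal sketch using GPT4All's Python bindings (the model filename is an example from its catalog and may differ for you):

```python
from gpt4all import GPT4All

# Downloads the model on first run; inference happens on the CPU by default.
# Pass device="gpu" to offload to a supported GPU.
model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

with model.chat_session():
    reply = model.generate("Explain in one sentence what a local LLM is.", max_tokens=128)
    print(reply)
```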

[-] Just_Pizza_Crust@lemmy.world 26 points 5 months ago

I'd also recommend Oobabooga if you're already familiar with Automatic1111 for Stable Diffusion. I've found that being able to write the first part of the bot's response gets much better results, and the model seems to make up false info much less often.
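
The "write the first part of the response" trick isn't Oobabooga-specific; any completion-style backend can do it. A rough sketch with llama-cpp-python (the model path and the [INST] template are placeholder assumptions for a Mistral-style model):

```python
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

# Seed the start of the bot's reply so generation continues from it.
# This anchors the answer's format and tends to cut down on fabrication.
prompt = (
    "[INST] List three checks to run before buying a used GPU. [/INST]"
    " Here are three practical checks:\n1."
)
out = llm(prompt, max_tokens=256, stop=["[INST]"])
print(prompt + out["choices"][0]["text"])
```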

[-] FaceDeer@kbin.social 10 points 5 months ago

There's also koboldcpp, which is fairly newbie-friendly.

[-] Turun@feddit.de 3 points 5 months ago

And llamafile, which is a chatbot in a single executable file.

[-] EarMaster@lemmy.world 8 points 5 months ago

I feel like you're all making these names up... but they were probably all suggested by an LLM anyway...

[-] tubbadu@lemmy.kde.social 10 points 5 months ago

Are they as good as ChatGPT?

[-] PerogiBoi@lemmy.ca 40 points 5 months ago* (last edited 5 months ago)

Mistral is thought to be almost as good. I've used the latest version of Mistral and found the quality of its output more or less identical.

It's not as fast, though, as I'm running it off 16 GB of RAM and an old GTX 1060 card.

If you use LM Studio I'd say it's actually better, because you can give it a pre-prompt so that all of its answers stay within predefined guardrails (e.g. you are Glorb the cheese pirate and you have a passion for mink fur coats).

There's also the benefit of being able to load uncensored models if you'd like questionable content created (erotica, sketchy instructions on how to synthesize crystal meth, etc.).

[-] tsonfeir@lemm.ee 8 points 5 months ago

I’m sure that meth is for personal use, right? Right?

[-] PerogiBoi@lemmy.ca 5 points 5 months ago

Absolutely. Synthesizing hard drugs is time-consuming and a lot of hard work. Only I get to enjoy it.

[-] tsonfeir@lemm.ee 4 points 5 months ago

No one gets my mushrooms either ;)

[-] tsonfeir@lemm.ee 3 points 5 months ago

I just buy my substrate online. I’m far less experimental than most. I just want it to work in a consistent way that yields an amount I can predict.

What I really want to grow is Peyote or San Pedro, but the slow growth and lack of sun in my location would make that difficult.

[-] webghost0101@sopuli.xyz 7 points 5 months ago

Something I'm really missing is a breakdown of how good these models actually are compared to each other.

A demo on Hugging Face couldn't tell me the boiling point of water, while the author's own example prompt asked for the boiling point of some chemical.

[-] M500@lemmy.ml 5 points 5 months ago

I can't find a way to run any of these on my home server and access it over HTTP. It looks like it's possible, but you need a GUI to install it in the first place.
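
One GUI-free route, for anyone in the same spot: llama-cpp-python ships an OpenAI-compatible HTTP server that can be started over SSH. A sketch, assuming a GGUF model file is already on the server and the hostname is hypothetical:

```python
# On the home server (no GUI needed):
#   pip install 'llama-cpp-python[server]'
#   python -m llama_cpp.server --model ./model.gguf --host 0.0.0.0 --port 8000
# Then from any machine on the LAN:
import requests

resp = requests.post(
    "http://homeserver:8000/v1/chat/completions",  # hypothetical hostname
    json={"messages": [{"role": "user", "content": "Hello over HTTP!"}]},
)
print(resp.json()["choices"][0]["message"]["content"])
```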

[-] stevedidWHAT@lemmy.world 79 points 5 months ago

Open source good, together monkey strong 💪🏻

Build cool village with other frens, make new things, celebrate as village

[-] Tja@programming.dev 14 points 5 months ago

Apes together *

[-] Zeon@lemmy.world 4 points 5 months ago* (last edited 5 months ago)

It's free/libre software, which is even better, because it gives you more freedom than just 'open-source' software. Make sure to check the licenses of software that you use. The GPL, MIT, and Apache 2.0 licenses are all free-software licenses. Anyways, together monkey strong 💪

[-] TootSweet@lemmy.world 63 points 5 months ago

It seems like usually when an LLM is called "Open Source", it's not. It's refreshing to see that Jan actually is, at least.

[-] long_chicken_boat@sh.itjust.works 14 points 5 months ago* (last edited 5 months ago)

Jan is just a frontend. It supports various models under multiple licenses, including some proprietary ones.

[-] blazeknave@lemmy.world 7 points 5 months ago

Marsha Marsha Marsha!

[-] wetferret@lemmy.world 10 points 5 months ago

I would also recommend faraday.dev as a way to try out different models locally, using either CPU or GPU. I believe they have a build for every desktop OS.

[-] randon31415@lemmy.world 8 points 5 months ago

I have recently been playing with llamafiles, particularly LLaVA, which, as far as I know, is the first multimodal open-source LLM (others might exist; this is just the first one I have seen). I was having it look at pictures of prospective houses I want to buy and asking it if it sees anything wrong with the house.

The only problem I ran into is that the Windows 10 cmd prompt doesn't like the sed command, and I don't know of an alternative.
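
If Python is installed, it can stand in for a simple sed substitution (a generic sketch, not specific to the llamafile instructions):

```python
# Rough equivalent of: sed -i "s/old/new/g" somefile
from pathlib import Path

path = Path("somefile")
path.write_text(path.read_text(encoding="utf-8").replace("old", "new"), encoding="utf-8")
```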

[-] ramjambamalam@lemmy.ca 8 points 5 months ago

Would it help to run it under WSL?

[-] halva@discuss.tchncs.de 3 points 5 months ago

Might be a good idea to use Windows Terminal or Cmder and WSL instead of the native Windows shells.

[-] ripcord@lemmy.world 2 points 5 months ago

Install Cygwin and put it in your PATH.

You can use grep, awk, sed, etc. from either bash or the Windows command prompt.

[-] ElPussyKangaroo@lemmy.world 5 points 5 months ago

Any recommendations from the community for models? I use ChatGPT for light work like touching up a draft I wrote, etc. I also use it for data-related tasks like reorganization, identification, etc.

Which model would be appropriate?

[-] Falcon@lemmy.world 8 points 5 months ago

Mistral-7B is a good compromise between speed and intelligence. Grab it as a 4-bit GPTQ quant.
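
A rough loading sketch for a GPTQ build via transformers (assumes the auto-gptq and optimum extras are installed; the repo name is an example of a community 4-bit quant):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"  # example 4-bit GPTQ quant
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tok("[INST] Tidy up this sentence: teh quick brown fox. [/INST]",
             return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=120)
print(tok.decode(out[0], skip_special_tokens=True))
```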

[-] Infiltrated_ad8271@kbin.social 12 points 5 months ago

The question is quickly answered: none is currently that good, open or not.

Anyway, it seems that Jan is just a manager/frontend. I see some models available that I've heard good things about, like Mistral.

[-] Bipta@kbin.social 9 points 5 months ago

Local LLMs can beat GPT-3.5 now.

[-] Speculater@lemmy.world 5 points 5 months ago

I think a good 13B model running on 12 GB of VRAM can do pretty well. But I'd be hard-pressed to believe anything under 33B would beat 3.5.
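
Back-of-the-envelope for why a 13B model fits in 12 GB (a sketch; real usage varies with context length and backend overhead):

```python
params = 13e9
bytes_per_param = 0.5  # 4-bit quantization
weights_gib = params * bytes_per_param / 1024**3
print(f"~{weights_gib:.1f} GiB for weights alone")  # ~6.1 GiB, leaving headroom on a 12 GB card
```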

[-] miss_brainfart@lemmy.ml 4 points 5 months ago* (last edited 5 months ago)

Asking as someone who doesn't know anything about any of this:

Does more B mean better?

[-] alphafalcon@feddit.de 5 points 5 months ago

B stands for billion (parameters), IIRC. Roughly speaking, more parameters means a more capable model, but also more RAM/VRAM needed and slower inference.

[-] Falcon@lemmy.world 5 points 5 months ago* (last edited 5 months ago)

Many are close!

In terms of usability, though, they're better.

For example, ask GPT-4 for an example of cross-site scripting in Flask and you'll get an ethics discussion. Grab an uncensored model off Hugging Face and you're off to the races.
