this post was submitted on 27 Nov 2024

Firefox

A place to discuss the news and latest developments on the open-source browser Firefox
They support Claude, ChatGPT, Gemini, HuggingChat, and Mistral.

[–] Lojcs@lemm.ee 3 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

These are the answers they gave the first time.

Qwencoder is persistent even after 6 rerolls.

Anyways, how do I make these use my GPU? The ollama logs say the model will fit into VRAM and that it's offloading all layers, but GPU usage doesn't change and the CPU gets the load. And regardless of the model size, VRAM usage never changes and RAM only goes up by a couple hundred megabytes. Any advice? (Linux / Nvidia) Edit: it didn't have CUDA enabled apparently, fixed now
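For anyone hitting the same thing: one way to check whether your ollama build actually has CUDA enabled is to grep the server output for GPU/CUDA mentions (e.g. via `journalctl -u ollama` on a systemd install), plus watching `nvidia-smi` while a model loads. The log lines below are hypothetical stand-ins, not ollama's exact log format:

```shell
# Sketch, assuming a Linux/Nvidia setup. The two sample files below are
# hypothetical log excerpts; in practice you'd grep the real server output.
printf 'offloading 33/33 layers to GPU\nlibrary=cuda\n' > /tmp/ollama_gpu.log
printf 'library=cpu\n' > /tmp/ollama_cpu.log

# A CUDA-enabled build should mention cuda/offloading somewhere in its logs:
grep -qi 'cuda' /tmp/ollama_gpu.log && echo 'cuda mentioned: GPU path likely active'
# A CPU-only build will not, which matches the symptoms above (RAM grows, VRAM flat):
grep -qi 'cuda' /tmp/ollama_cpu.log || echo 'no cuda mentioned: CPU-only build'
```

Watching VRAM directly also works: `nvidia-smi --query-gpu=memory.used --format=csv -l 1` should jump by roughly the model size when layers are actually offloaded.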

Nice.

Yea, I don't trust any AI models for facts, period. They all just lie, confidently. The smol model there at least tried and got it right at first... before confusing the sentence context.

Qwen is a good model too. But if you want something to run home automation or do text summaries, smol is solid enough. I'm running on CPU, so it's good enough for me.