this post was submitted on 24 Jul 2024
53 points (90.8% liked)

Selfhosted


I'm using Ollama on my server with the WebUI. It has no GPU, so it's not quick to reply, but not too slow either.

I'm thinking about removing the VM as I just don't use it. Are there any good uses or integrations into other apps that might convince me to keep it?

[–] umami_wasbi@lemmy.ml 3 points 3 months ago (1 children)

IMO LLMs are OK for getting a head start on searching. Say you have a vague idea of something but don't know the exact keywords: an LLM can suggest them, and you can use its output in whatever search engine you like. This saves a lot of time tinkering with the right keywords.
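
For the curious, here's a rough sketch of that workflow against a local Ollama instance. It assumes the official `ollama` Python client (`pip install ollama`) and an already-pulled model; `llama3` is just an illustrative model name.

```python
# Rough sketch: ask a locally running Ollama model to turn a vague idea
# into search-engine keywords. Assumes `pip install ollama` and that a
# model has already been pulled; "llama3" is an illustrative model name.
import ollama

def suggest_keywords(vague_idea: str) -> str:
    response = ollama.chat(
        model="llama3",
        messages=[{
            "role": "user",
            "content": (
                "Suggest 5 concise search-engine keywords for this topic, "
                "one per line, no explanations:\n" + vague_idea
            ),
        }],
    )
    return response["message"]["content"]

# Paste the output into whatever search engine you like.
print(suggest_keywords("that thing where one disk dies but the data survives"))
```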

[–] dwindling7373@feddit.it 0 points 3 months ago (1 children)

Sure, or you could send an email to the leading international institution on the matter to get a very accurate answer!

Is it the most reasonable course of action? No. Is it more reasonable than wasting a gazillion watts so you can maybe get some better keywords to then paste into a search engine? Yes.

[–] kitnaht@lemmy.world 1 points 3 months ago (1 children)

Once the model is trained, the electricity that it uses is trivial. LLMs can run on a local GPU. So you're completely wrong.

[–] dwindling7373@feddit.it 1 points 3 months ago (1 children)
[–] kitnaht@lemmy.world 1 points 3 months ago* (last edited 3 months ago) (2 children)

Those were statements. Statements of fact.

Once the models are already trained, it takes almost no power to use them.

Yes, TRAINING the models uses an immense amount of power, but running the trained models locally consumes almost nothing. I can run the Llama 7B model on a 15 W Raspberry Pi, for example. Just leaving my PC on uses 400 W. This is all local: nothing entering or leaving the Pi, no communication with an external server, nothing being done on anybody else's server or any AWS instances, etc.
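
To illustrate the "all local" part, here's a minimal sketch using only the Python standard library, assuming Ollama is serving on its default port; `llama2:7b` is an illustrative model tag.

```python
# Minimal sketch: inference over the loopback interface only.
# 127.0.0.1:11434 is Ollama's default local endpoint; "llama2:7b"
# is an illustrative model tag.
import json
import urllib.request

payload = json.dumps({
    "model": "llama2:7b",
    "prompt": "Explain RAID 1 in one sentence.",
    "stream": False,  # ask for a single complete JSON reply
}).encode()

req = urllib.request.Request(
    "http://127.0.0.1:11434/api/generate",  # loopback: traffic never leaves the machine
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```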

[–] dwindling7373@feddit.it 1 points 3 months ago

Notwithstanding that running an LLM is still more expensive than a search query, any reasoning about running an LLM has to include the training cost and, above all, the incentive you give, as a consumer, for further training.

It's like arguing that cooking a steak has negligible environmental impact. The point is the whole industry that exists to put the steak in front of you in the first place.
