this post was submitted on 25 Dec 2023
1903 points (97.9% liked)

People Twitter

5234 readers
1112 users here now

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a tweet or similar.
  4. No bullying or international politics.
  5. Be excellent to each other.

founded 1 year ago
[–] candle_lighter@lemmy.ml 154 points 10 months ago (8 children)

I want said AI to be open source and run locally on my computer

[–] CeeBee@lemmy.world 38 points 10 months ago (1 children)

It's getting there. In the next few years, as hardware gets better and models get more efficient, we'll be able to run these systems entirely locally.

I'm already doing it, but I have some higher end hardware.

[–] Xanaus@lemmy.ml 4 points 10 months ago (2 children)

Could you please share your process for us mortals?

[–] CeeBee@lemmy.world 6 points 10 months ago

Stable Diffusion SDXL Turbo model running in Automatic1111 for image generation.

Ollama with Ollama-webui for an LLM. I like the Solar 10.7B model. It's lightweight, fast, and gives really good results.

I have some beefy hardware that I run it on, but it's not necessary to have.
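
For anyone who wants to poke at the LLM side, here's a minimal sketch of talking to a local Ollama instance from Python. It assumes Ollama is installed and serving on its default port, and that the model has already been pulled (e.g. `ollama pull solar`); the prompt is just a placeholder.

```python
# Minimal sketch: query a locally running Ollama server from Python.
# Assumes `ollama serve` is running on the default port (11434) and the
# model has been pulled beforehand, e.g. `ollama pull solar`.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "solar") -> str:
    """Send a prompt to the local model and return the generated text."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Give me three reasons to run an LLM locally."))
```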

[–] Ookami38@sh.itjust.works 2 points 10 months ago (1 children)

Depends on what AI you're looking for. I don't know of an LLM (a language model, think ChatGPT) that works decently on personal hardware, but I also haven't really looked. For art generation, though, look up the Automatic1111 installation instructions for Stable Diffusion. If you have a decent GPU (I was running it on a 1060, slowly, until I upgraded), it's a simple enough process to get started, there's tons of info online about it, and it all runs on local hardware.
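
Once it's installed, here's a minimal sketch of what generating an image through the web UI's local API can look like from Python. This assumes the UI was launched with the --api flag on its default address; the prompt and settings are only placeholders.

```python
# Minimal sketch: text-to-image via a locally running Automatic1111 web UI.
# Assumes the UI was started with the --api flag and is listening on the
# default address (http://127.0.0.1:7860).
import base64
import requests

A1111_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

payload = {
    "prompt": "a cozy cabin in a snowy forest, golden hour",
    "steps": 20,      # placeholder settings; tune for your GPU
    "width": 512,
    "height": 512,
}

resp = requests.post(A1111_URL, json=payload, timeout=600)
resp.raise_for_status()

# The API returns base64-encoded PNGs in the "images" list.
with open("output.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```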

[–] CeeBee@lemmy.world 2 points 10 months ago (1 children)

“I don't know of an LLM that works decently on personal hardware”

Ollama with ollama-webui. Models like solar-10.7b and mistral-7b work nicely on local hardware. Solar 10.7b should run well on a card with 8GB of VRAM.

[–] ParetoOptimalDev 1 points 10 months ago

If you have really low specs, use the recently open-sourced Microsoft Phi model.
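
A minimal sketch of what that can look like with Hugging Face transformers, assuming transformers, torch, and accelerate are installed and using the microsoft/phi-2 model id from the Hub (older transformers versions may need trust_remote_code=True):

```python
# Minimal sketch: run a small model like Phi-2 locally with Hugging Face
# transformers. Assumes `transformers`, `torch`, and `accelerate` are
# installed; device_map="auto" falls back to CPU if no GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # small (~2.7B parameter) model for modest hardware
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory use; use float32 on CPU-only setups
    device_map="auto",
)

inputs = tokenizer("Write a haiku about running AI locally.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```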

[–] TalesFromTheKitchen@lemmy.ml 20 points 10 months ago* (last edited 10 months ago) (1 children)

I can run a pretty alright text generation model and the Stable Diffusion models on my 2016 laptop with two GTX 1080M cards. You can try with these tools: Oobabooga text-generation-webui for text generation

Automatic1111 for image generation

They might not be the most performant applications, but they are very easy to use.

[–] sukhmel@programming.dev 5 points 10 months ago (3 children)

You seem to have missed the point a bit

[–] TalesFromTheKitchen@lemmy.ml 13 points 10 months ago (1 children)

Just read it again and you're right. But maybe someone else will find it useful.

[–] Tippon@lemmy.dbzer0.com 10 points 10 months ago

I do, so thank you :)

[–] intensely_human@lemm.ee 0 points 10 months ago (1 children)

“I wish I had X”

“Here’s X”

What point was missed here?

[–] sukhmel@programming.dev 0 points 10 months ago

The post: "I wish X instead of Y"
The comment: "And run it [X] locally"
The next comment: "You can run Y locally"

Also, the person I said this to literally admitted that I was right, and you're still arguing.

[–] tegs_terry@feddit.uk 8 points 10 months ago

I want mine in an emotive-looking airborne bot like Flubber

[–] art@lemmy.world 8 points 10 months ago (2 children)

This technology will be running on your phone within the next few years.

[–] icepick3455o65@lemmy.world 2 points 10 months ago (1 children)

Because, like every other app on smartphones, it'll require an external server to do all of the processing

[–] art@lemmy.world 3 points 10 months ago

I mean, that's already where we are. The future is going to be localized.

[–] pkill@programming.dev 1 points 10 months ago (1 children)

Yeah, if you're willing to carry a brick, or at least a power bank (also a brick), if you don't want it to constantly overheat or deal with 2-3 hours of battery life. There's only so much the copper can take, and there are limits to miniaturization.

[–] art@lemmy.world 7 points 10 months ago (1 children)

It's not like that, though. Newer phones are going to have dedicated hardware for running neural networks, LLMs, and other generative tools. That dedicated hardware will make these processes just barely sip battery life.

[–] MenacingPerson@lemm.ee 1 points 10 months ago

Wrong.

If that existed, all those AI server farms wouldn't be so necessary, would they?

Dedicated hardware for that already exists, but it definitely isn't going to be able to fit a sizeable model on a phone any time soon. The models themselves require multiple tens of gigabytes of storage space; you won't be able to fit more than a handful even on 512 GB of internal storage. Phones can't hit the RAM these models need at all, and the dedicated hardware still draws a lot more power than a tiny phone battery can supply.

[–] PsychedSy@sh.itjust.works 6 points 10 months ago

A lot of it can if you have a big enough computer.

[–] aubertlone@lemmy.world 3 points 10 months ago

Hey me too.

And I do have a couple of different LLMs installed on my rig. But running them locally is years and years away from being remotely performant.

On the bright side, there are many open-source LLMs, and it seems like there are more every day.

[–] Grappling7155@lemmy.ca 2 points 10 months ago

Check out /r/LocalLLaMA, Ollama, and Mistral.

This is all possible and has become a lot easier to do recently.