Comprehensive49@lemmygrad.ml 1 points 1 month ago

Why should I use this instead of Ollama? Ollama is considered the de facto standard for local AI and is supported by a ton of other open-source software. For example, you can connect Ollama to the Smart Connections plugin for Obsidian, which lets you chat with and analyze your Obsidian notes.
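
For reference, tools like Smart Connections talk to Ollama over its local REST API. Here's a minimal sketch of that same call in Python, assuming Ollama is running on its default port (11434) and a model such as `llama3` has already been pulled:

```python
import requests

# Ollama exposes a local REST API on port 11434 by default;
# plugins like Smart Connections talk to this same endpoint.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",   # assumes `ollama pull llama3` was run beforehand
        "prompt": "Summarize my note on dialectics in two sentences.",
        "stream": False,     # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```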

yogthos@lemmygrad.ml 1 points 1 month ago

This one is multimodal and can generate images.

Comprehensive49@lemmygrad.ml 1 points 1 month ago (edited)

You can run multimodal models like LLaVA and Llama 3.2 Vision on Ollama as well.
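
As a rough sketch of what multimodal inference through Ollama looks like, assuming `ollama pull llava` has been run and a `photo.jpg` exists locally (the `images` field takes base64-encoded data):

```python
import base64
import requests

# Read and base64-encode the image; Ollama's /api/generate accepts
# base64 image data in the "images" field for multimodal models.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llava",    # assumes `ollama pull llava` was run beforehand
        "prompt": "What is in this picture?",
        "images": [image_b64],
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```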

The models themselves are just weights distributed in largely platform-agnostic formats (e.g., GGUF or safetensors), so the specific runtime you load them with (Ollama, LocalAI, vLLM, llama.cpp, etc.) ends up being mostly irrelevant to what the model can do.
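
One way to see this interchangeability in practice: both Ollama and vLLM (among others) expose OpenAI-compatible endpoints, so the same client code works against either. A sketch, assuming default ports (11434 for Ollama, 8000 for vLLM) and that each server has the named model loaded:

```python
import requests

# Both servers speak the OpenAI chat-completions protocol, so only
# the base URL (and model name) changes between platforms.
OLLAMA = "http://localhost:11434/v1"   # Ollama's OpenAI-compatible endpoint
VLLM = "http://localhost:8000/v1"      # vLLM's default serving address

def chat(base_url: str, model: str, prompt: str) -> str:
    resp = requests.post(
        f"{base_url}/chat/completions",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Swapping platforms is a one-argument change:
print(chat(OLLAMA, "llama3", "Hello!"))
# print(chat(VLLM, "meta-llama/Meta-Llama-3-8B-Instruct", "Hello!"))
```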

Because of that, the only reasons to pick one platform over another are fit for your specific use case (it depends), quality of ecosystem support (Ollama, by far), or raw performance (vLLM seems to win right now).

yogthos@lemmygrad.ml 1 points 1 month ago

Ah gotcha, I thought Ollama was text-only.