this post was submitted on 24 Jan 2025
44 points (97.8% liked)
Technology
A tech news sub for communists
founded 2 years ago
People often underestimate the potential of new technology, but there are already plenty of legitimate use cases. For example, I practice speaking Mandarin with an LLM; it's great at holding a conversation and correcting me when I say something grammatically wrong. LLMs are also good for narrating audiobooks, generating subtitles, adding voices to games, and so on. I find they can be helpful when coding, too: it's often faster to have a model point you in the right direction than to search for something on the internet. For example, they're great at crafting SQL queries. I often know what I want from a query but not the exact syntax. I'm sure we'll find plenty of other use cases going forward, especially as reasoning models mature to the point where they can actually explain the steps they take to arrive at a solution and be corrected along the way.
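To illustrate the SQL point with a hypothetical schema (the table and column names here are my own invention, not anything from the thread): the queries I mean are the ones where the intent is easy to state in plain language, but the exact GROUP BY / HAVING syntax is the part an LLM can supply.

```python
import sqlite3

# Hypothetical example schema: customers and their orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ana'), (2, 'Bo');
    INSERT INTO orders VALUES (1, 1, 30.0), (2, 1, 45.0), (3, 2, 10.0);
""")

# Intent: "customers whose orders add up to more than 40" -- simple to say,
# but the aggregate-plus-filter syntax is what's easy to forget.
rows = conn.execute("""
    SELECT c.name, SUM(o.total) AS spent
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id
    HAVING spent > 40
""").fetchall()
print(rows)  # [('Ana', 75.0)]
```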
Power usage was basically the main legitimate argument against this tech, and we're now seeing that problem being addressed as well. I'm sure we'll continue to see more improvements down the road.
How do you have your AI set up, yogthos?
I got DeepSeek running on ollama https://dev.to/shayy/run-deepseek-locally-on-your-laptop-37hl
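For what it's worth, once the model from that guide is pulled, ollama also exposes a local HTTP API (port 11434 by default), so you can query it from a script rather than the CLI. A minimal sketch, assuming the `deepseek-r1:14b` tag and an ollama server already running locally:

```python
import json
import urllib.request

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request for ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Assumes a local ollama server with the model pulled, e.g.:
    #   ollama pull deepseek-r1:14b
    req = build_request("deepseek-r1:14b", "Explain GROUP BY in one sentence.")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```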
but for language practice I'm just using this app https://www.superchinese.com/
What kind of specs do you need to run DeepSeek locally? A few months back, I tried using ollama to run some small llama models on my home laptop (Ubuntu with 16GB RAM and a weak Nvidia integrated GPU w/ nouveau drivers) and it was borderline unusable.
I got the deepseek-r1:14b-qwen-distill-fp16 version running with 32 GB of RAM and a GPU here.
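That lines up with a back-of-the-envelope estimate: the weights alone need roughly (parameter count) × (bytes per parameter). A rough sketch (ignoring KV cache and runtime overhead, which add more on top):

```python
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Rough memory needed just for the model weights, in GB.

    params_billion * 1e9 params * bytes_per_param bytes / 1e9 bytes-per-GB
    simplifies to params_billion * bytes_per_param.
    """
    return params_billion * bytes_per_param

# deepseek-r1:14b at fp16 (2 bytes/param): ~28 GB of weights,
# which is why a 32 GB machine can just fit it.
print(weight_memory_gb(14, 2))    # 28
# A 4-bit quantized build (~0.5 bytes/param) needs only ~7 GB.
print(weight_memory_gb(14, 0.5))  # 7.0
```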