cross-posted from: https://lemmy.ml/post/24102825

DeepSeek V3 is a big deal for a number of reasons.

At only $5.5 million to train, it's a fraction of the cost of models from OpenAI, Google, or Anthropic, whose training runs often reach hundreds of millions of dollars.

It breaks the whole AI-as-a-service business model that OpenAI and Google have been pursuing, making state-of-the-art language models accessible to smaller companies, research institutions, and even individuals.

The code is publicly available, allowing anyone to use, study, modify, and build upon it. Companies can integrate it into their products without paying for usage, making it financially attractive. The open-source nature fosters collaboration and rapid innovation.

The model goes head-to-head with, and often outperforms, models like GPT-4o and Claude-3.5-Sonnet across various benchmarks. It excels in areas that are traditionally challenging for AI, like advanced mathematics and code generation. Its 128K-token context window means it can process and understand very long documents. Meanwhile, it processes text at 60 tokens per second, twice as fast as GPT-4o.

The Mixture-of-Experts (MoE) approach used by the model is key to its performance. While the model has a massive 671 billion parameters, it activates only 37 billion per token, making it incredibly efficient. Compared to Meta's Llama 3.1 (405 billion parameters, all active for every token), DeepSeek V3 uses over 10 times fewer active parameters yet performs better.
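To make the MoE idea concrete, here's a minimal top-k routing layer in PyTorch. This is a generic sketch of the technique, not DeepSeek's actual architecture, and the sizes (`d_model`, `num_experts`, `top_k`) are made up for illustration: a router scores every expert for each token, only the top few experts actually run, and the rest of the layer's parameters sit idle for that token.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Toy top-k Mixture-of-Experts layer: each token is routed to only a few
    experts, so most parameters stay idle on any given forward pass."""

    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
             for _ in range(num_experts)]
        )
        self.router = nn.Linear(d_model, num_experts)  # scores every expert for each token
        self.top_k = top_k

    def forward(self, x):                        # x: (num_tokens, d_model)
        scores = self.router(x)                  # (num_tokens, num_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # renormalize over the chosen experts only
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e      # tokens whose slot-th pick is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(4, 512)       # hidden states for 4 tokens
layer = MoELayer()
print(layer(x).shape)         # torch.Size([4, 512]); only 2 of the 8 experts ran per token
```

The point of the design is that total parameter count grows with the number of experts, but per-token compute only grows with `top_k`, which is how a 671B-parameter model can run with 37B active parameters.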

DeepSeek V3 can be seen as a significant technological achievement by China in the face of US attempts to limit its AI progress. China once again demonstrates that resourcefulness can overcome limitations.

[–] yogthos@lemmy.ml 4 points 7 months ago (1 children)

I've been playing with it a bit too, and it's pretty impressive. Incidentally, I saw a couple of promising approaches to help with the reasoning aspect of LLMs.

The first method, called the consensus game, addresses the issue of models giving different answers to the same question depending on how it's phrased. The trick is to align the generator, which answers open-ended questions, with the discriminator, which evaluates multiple-choice answers. By incentivizing the two to agree on answers through a scoring system, the game improves the model's consistency and accuracy without requiring retraining. https://www.wired.com/story/game-theory-can-make-ai-more-correct-and-efficient/
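As a very loose illustration of the agreement idea (not the paper's actual equilibrium-ranking algorithm), here's a toy sketch where a generator and a discriminator each start with their own scores over candidate answers and gradually pull toward each other; the candidate they jointly favor wins. The scores and the update rule are purely hypothetical.

```python
import numpy as np

def normalize(p):
    p = np.asarray(p, dtype=float)
    return p / p.sum()

def consensus(gen_scores, disc_scores, rounds=50, stickiness=0.9):
    """Nudge the two players' distributions over candidate answers toward each other."""
    g, d = normalize(gen_scores), normalize(disc_scores)
    for _ in range(rounds):
        # each player mostly keeps its own view but partially adopts the other's
        g_new = normalize(g ** stickiness * d ** (1 - stickiness))
        d_new = normalize(d ** stickiness * g ** (1 - stickiness))
        g, d = g_new, d_new
    return g * d  # joint agreement score for each candidate answer

# hypothetical example: three candidate answers the two players initially rank differently
gen = [0.5, 0.3, 0.2]    # generator's preference (from free-form generation)
disc = [0.2, 0.6, 0.2]   # discriminator's preference (from multiple-choice scoring)
print(consensus(gen, disc).argmax())  # index of the answer both players can live with
```

The key property is that neither model's weights change; only the way answers are scored at inference time does, which is why no retraining is needed.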

The second method is to use neurosymbolic systems, which combine deep learning for spotting patterns in data with symbolic logic for reasoning over explicit knowledge. The approach has the potential to outperform systems that rely solely on neural networks or solely on symbolic logic, while providing clear explanations for its decisions. It involves encoding symbolic knowledge into a format compatible with neural networks, and mapping neural patterns back into symbolic representations.

https://arxiv.org/abs/2305.00813
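Here's a toy sketch of the general neurosymbolic pattern: a neural model emits soft predicates from raw data, and a small hand-written rule base reasons over them. This is not the system from the paper; the classifier, the predicates, and the rules are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

# stand-in "perception" network that turns raw features into predicate scores
perception = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
PREDICATES = ["is_bird", "is_penguin", "has_wings"]

def neural_to_symbolic(x, threshold=0.5):
    """Map neural activations to a set of discrete symbolic facts."""
    probs = torch.sigmoid(perception(x))
    return {p for p, prob in zip(PREDICATES, probs.tolist()) if prob > threshold}

def symbolic_reasoner(facts):
    """Apply hand-written logic rules on top of the neurally extracted facts."""
    inferred = set(facts)
    if "is_bird" in inferred and "is_penguin" not in inferred:
        inferred.add("can_fly")        # default rule: birds fly...
    if "is_penguin" in inferred:
        inferred.discard("can_fly")    # ...unless a more specific rule overrides it
    return inferred

facts = neural_to_symbolic(torch.randn(16))
print(symbolic_reasoner(facts))  # which rules fired is fully inspectable
```

The explainability comes from the second stage: you can trace exactly which facts and rules produced the conclusion, instead of pointing at an opaque activation pattern.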

The neurosymbolic approach in particular looks like a very promising way to get actual reasoning to start happening in these systems. It's gonna be interesting to see where this all goes in a few years.