this post was submitted on 01 Dec 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

I've got a MacBook Pro M1 with 16 GB of RAM. In order to run deepseek-coder with 6.7B parameters, I need to reduce the context size, as the machine hasn't got much RAM. How will this affect the model's performance? How far can I go in reducing the context?

EDIT: I may have used the wrong word. Instead of performance, I meant accuracy. Sorry for my bad English.
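For framing: the context window is chosen when the model is loaded, and the KV cache that backs it grows linearly with the number of context tokens, so a smaller context mainly limits how much text the model can attend to at once rather than degrading per-token accuracy. Below is a minimal sketch, assuming llama.cpp via the llama-cpp-python bindings; the GGUF file name and the architecture numbers (32 layers, 4096 hidden size, fp16 cache) are assumptions based on deepseek-coder-6.7b's Llama-style config, not taken from the post:

```python
from llama_cpp import Llama

def kv_cache_mb(n_ctx: int, n_layers: int = 32, hidden: int = 4096,
                bytes_per_val: int = 2) -> float:
    """Approximate KV-cache size in MB: 2 (K and V) * layers * hidden
    * bytes per value * number of context tokens.
    Layer/hidden defaults are assumed for deepseek-coder-6.7b."""
    return 2 * n_layers * hidden * bytes_per_val * n_ctx / 1024**2

# Show how much RAM each candidate context size would cost.
for ctx in (1024, 2048, 4096):
    print(f"n_ctx={ctx}: ~{kv_cache_mb(ctx):.0f} MB of KV cache")

# Load the model with a reduced context window; n_ctx is the knob that
# trades RAM for how much text the model can see at once.
# The model file name here is hypothetical.
llm = Llama(model_path="deepseek-coder-6.7b-instruct.Q4_K_M.gguf", n_ctx=2048)
```

Under those assumptions the cache costs roughly 0.5 MB per token, so dropping from a 4096-token to a 2048-token context frees about 1 GB, on top of whatever the chosen quantization saves on the weights themselves.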

no comments (yet)