TheKostins

I've got a MacBook Pro M1 with 16GB of RAM. To run the 6.7b deepseek-coder model, I need to reduce the context size, since the machine doesn't have much memory to spare. How will that affect the model's performance? And how far can I go in reducing the context?

EDIT: I may have used the wrong word. Instead of performance, I meant accuracy. Sorry for my bad English.
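
For reference, here's a minimal sketch of what I mean by reducing context, assuming a GGUF quant run through llama-cpp-python (the model filename and the n_ctx value are just placeholders, not my exact setup):

```python
# Sketch: load a quantized deepseek-coder 6.7b with a reduced context window.
# Assumes llama-cpp-python and a local Q4_K_M GGUF file; both are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-coder-6.7b-instruct.Q4_K_M.gguf",  # hypothetical path
    n_ctx=2048,        # reduced from the model's native 16K context to save RAM
    n_gpu_layers=-1,   # offload all layers to the M1 GPU via Metal
)

out = llm(
    "### Instruction:\nWrite a Python function that reverses a string.\n### Response:\n",
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```

The question is basically how small I can make that n_ctx value before the answers start getting noticeably worse.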