Hexbear Code-Op (hexbear.net)
submitted 3 months ago* (last edited 3 months ago) by RedWizard@hexbear.net to c/technology@hexbear.net
 
 

Where to find the Code-Op

Wow, thanks for the stickies! Love all the activity in this thread. I love our coding comrades!


Hey fellow Hexbearions! I have no idea what I'm doing! However, out of the conversations in the comments of this little thing I posted the other day, I have created an org on GitHub that I think we can use to share, highlight, and collaborate on code and projects from comrades here and abroad.

  • I know we have several bots that float around this instance, and I've always wondered who maintains them and where their code is hosted. It would be cool to keep a fork of those bots in this org, for example.
  • I've already added a fork of @WhyEssEff@hexbear.net's Emoji repo as another example.
  • The projects don't need to be Hexbear or Lemmy related, either. I've moved my aPC-Json repo into the org just as an example, and intend to use the code written by @invalidusernamelol@hexbear.net to play around with adding ICS files to the repo.
  • We have numerous comrades looking at mainlining some flavor of Linux and bailing on Windows; maybe we could create some collaborative documentation that helps onboard the Linux-curious.
  • I've been thinking a lot recently about leftist communication online and building community spaces, which will ultimately intersect with self-hosting. Documenting various tools and providing Docker Compose files to easily get people off and running could be useful.

I don't know a lot about GitHub Orgs, so I should get on that, I guess. That said, I'm open to all suggestions and input on how best to use this space I've created.

Also, I made (what I think is) a neat emblem for the whole thing: [emblem image]

Todos

  • Mirror repos to both GitHub and Codeberg (rough sketch of the process below)
  • Create a process for adding new repos to the mirror
  • Create a more detailed profile README on GitHub
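
For the mirroring todo, here's a rough sketch of what that step could look like. This is Python shelling out to the git CLI; the org and repo names are placeholders, not the Code-Op's actual setup.

```python
# Hypothetical mirroring helper: clone a repo as a bare mirror and push
# every ref to a Codeberg twin. Org/repo names below are placeholders.
import subprocess
import tempfile

def mirror_repo(repo: str,
                github_org: str = "hexbear-code-op",    # placeholder
                codeberg_org: str = "hexbear-code-op"):  # placeholder
    src = f"https://github.com/{github_org}/{repo}.git"
    dst = f"https://codeberg.org/{codeberg_org}/{repo}.git"
    with tempfile.TemporaryDirectory() as tmp:
        # --mirror copies all refs (branches, tags), not a working tree
        subprocess.run(["git", "clone", "--mirror", src, tmp], check=True)
        # push (and prune) every ref to the Codeberg mirror
        subprocess.run(["git", "push", "--mirror", dst], cwd=tmp, check=True)

if __name__ == "__main__":
    mirror_repo("emoji")  # hypothetical repo name
```

Running that on a schedule (cron, GitHub Actions, whatever) would cover the second todo too: a new repo just becomes one more mirror_repo call.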

Done


  • ~~Recover from whatever this sickness is the dang kids gave me from daycare.~~
matrix is cooked (blog.cyrneko.eu)
submitted 18 hours ago* (last edited 18 hours ago) by cerealkiller@hexbear.net to c/technology@hexbear.net

This paper introduces DiffuCoder, a 7B-scale open-source masked diffusion large language model (dLLM) specifically designed for code generation.

The research provides insights into how dLLMs generate content, distinguishing their decoding behavior from that of autoregressive (AR) models. Unlike AR models, dLLMs can intrinsically adjust how causal their generation is, and increasing the sampling temperature diversifies not only token choices but also the order in which tokens are generated, creating a rich search space for reinforcement learning (RL).

This flexibility lets dLLMs behave less autoregressively, generating tokens in a less sequential, more "human-like" code-writing manner.
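
To make that concrete, here is an illustrative toy decoder (my own sketch, not code from the paper): at every step it scores all still-masked positions, and the temperature is applied both when picking which position to fill and which token to put there, so a higher temperature randomizes the generation order itself, not just the tokens.

```python
import numpy as np

rng = np.random.default_rng(0)
MASK, VOCAB, LENGTH = -1, 50, 12

def model_logits(seq):
    """Stand-in for the dLLM: per-position token logits. A real model
    would condition on the whole partially filled sequence."""
    return rng.normal(size=(len(seq), VOCAB))

def sample(logits, temperature):
    z = logits / temperature
    p = np.exp(z - z.max())
    return rng.choice(len(p), p=p / p.sum())

def decode(length=LENGTH, temperature=1.0):
    seq, order = [MASK] * length, []
    while MASK in seq:
        logits = model_logits(seq)
        masked = [i for i, t in enumerate(seq) if t == MASK]
        # Position choice: each masked slot's confidence (max logit),
        # softmax-sampled at the same temperature. Low temperature is
        # near-greedy "most confident first"; high temperature shuffles
        # the order in which positions get filled.
        conf = np.array([logits[i].max() for i in masked])
        pos = masked[sample(conf, temperature)]
        seq[pos] = sample(logits[pos], temperature)  # token choice
        order.append(pos)
    return seq, order

_, order = decode(temperature=2.0)
print("fill order:", order)  # rarely left-to-right at high temperature
```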

To leverage this diversity and improve performance, the paper proposes coupled-GRPO, an RL algorithm built on a coupled-sampling scheme: complementary mask noise is constructed during training to reduce the variance of token log-likelihood estimates while maintaining training efficiency.
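
My reading of the coupled-sampling idea, as a toy sketch (not the authors' code): draw one random mask together with its complement, so each token is masked in exactly one of the two forward passes and every position gets a log-likelihood estimate from a single antithetic pair rather than from independent, high-variance draws. The logits here are random stand-ins for the dLLM's actual forward pass on the masked sequence.

```python
import torch

def coupled_masks(length, mask_rate=0.5):
    """A random mask plus its complement: together they cover every
    position exactly once."""
    m = torch.rand(length) < mask_rate
    return m, ~m

def masked_logprobs(logits, tokens, mask):
    """Log-likelihood of the true tokens, kept only where masked
    (i.e. where the model actually had to predict them)."""
    logp = torch.log_softmax(logits, dim=-1)
    picked = logp[torch.arange(len(tokens)), tokens]
    return torch.where(mask, picked, torch.zeros_like(picked))

tokens = torch.randint(0, 100, (8,))  # toy "completion" to score
m1, m2 = coupled_masks(len(tokens))
# One pass per mask; torch.randn stands in for the model's logits.
est = masked_logprobs(torch.randn(8, 100), tokens, m1) \
    + masked_logprobs(torch.randn(8, 100), tokens, m2)
# `est` holds exactly one estimate per token from the coupled pair;
# these per-token log-likelihoods then feed the GRPO policy gradient.
```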

Experimentally, coupled-GRPO significantly boosts DiffuCoder's performance on code-generation benchmarks, improving EvalPlus scores by 4.4% with training on only 21K samples. The research also shows that coupled-GRPO-trained models suffer a smaller performance drop when decoding steps are halved (a 2x speedup), indicating increased parallelism and reduced reliance on AR bias during decoding.

The model is available at https://huggingface.co/apple/DiffuCoder-7B-cpGRPO


In modern LLM applications like RAG and Agents, the model is constantly fed new context. For example, in RAG, we retrieve relevant documents and stuff them into the prompt.

The issue is that this dynamically retrieved context doesn't always appear at the beginning of the input sequence. Traditional KV caching only reuses a "common prefix," so if the new information isn't at the very start, the cache hit rate plummets, and your GPU ends up recomputing the same things over and over.
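
To see why, consider a toy prefix matcher (illustrative, not any engine's actual code): the reusable cache length is just the longest shared prefix, so a cached chunk that shows up at a different offset reuses nothing.

```python
def reusable_prefix(cached_tokens, new_tokens):
    """Prefix-only KV reuse: count leading tokens that match exactly."""
    n = 0
    for a, b in zip(cached_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

system = ["sys"] * 10
doc_a, doc_b = ["a"] * 100, ["b"] * 100

cached = system + doc_a + doc_b   # query 1: docs retrieved in order A, B
query2 = system + doc_b + doc_a   # query 2: same docs, order B, A
print(reusable_prefix(cached, query2))  # 10 -- only the system prompt hits;
# all 200 doc tokens get recomputed even though their KV entries exist.
```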

CacheBlend changes the game by allowing for the reuse of pre-computed KV caches regardless of their position in the input sequence.

This makes it possible to achieve a 100% KV cache hit rate in applications like RAG. The performance gains are significant:

  • Faster Time-To-First-Token (TTFT): Get your initial response much quicker.
  • More Throughput: Serve significantly more users with the same hardware.
  • Almost lossless Output Quality: All of this is achieved with little degradation in the model's generation quality.

CacheBlend works by intelligently handling the two main challenges of reusing non-prefix caches:

  • Positional Encoding Update: It efficiently updates positional encodings so the model always knows the correct position of each token, even when cached and new data are stitched together (a toy sketch of this follows the list).
  • Selective Attention Recalculation: Instead of recomputing everything, it strategically recalculates only the minimal cross-attention needed between the new and cached chunks to maintain perfect generation quality.
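
Here's a toy sketch of the first mechanism, under the common assumption of rotary position embeddings (RoPE); this is illustrative, not LMCache's actual implementation. Because RoPE rotates each key by an angle proportional to its position, a chunk cached at one offset can be "slid" to a new offset by rotating it through the position delta. The second mechanism, selectively recomputing cross-attention between chunks, isn't shown.

```python
import numpy as np

def rope_angles(dim, positions, base=10000.0):
    inv_freq = 1.0 / base ** (np.arange(0, dim, 2) / dim)
    return np.outer(positions, inv_freq)          # [seq, dim/2]

def rotate(x, angles):
    """Standard RoPE-style 2D rotation applied to pairs of dims of x."""
    x1, x2 = x[:, 0::2], x[:, 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

dim, chunk_len, offset = 8, 4, 100
keys = np.random.randn(chunk_len, dim)

# Keys cached when the chunk sat at positions 0..3:
cached = rotate(keys, rope_angles(dim, np.arange(chunk_len)))
# "Slide" them to positions 100..103 by rotating through the delta:
slid = rotate(cached, rope_angles(dim, np.full(chunk_len, offset)))
# Same result as computing them fresh at the new positions:
fresh = rotate(keys, rope_angles(dim, np.arange(chunk_len) + offset))
print(np.allclose(slid, fresh))  # True: rotations compose additively
```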

An interactive CacheBlend demo is available at: https://github.com/LMCache/LMCache-Examples/tree/main/demo-rag-blending
