So to my understanding, this work reproduces DeepSeek R1's reinforcement learning mechanism in a very small language model.
The AI gets "rewards" (like points) for doing two things correctly:
Accuracy: Getting the right answer. For example, math answers must be in a specific format (e.g., inside a box) so a computer can easily check them. For coding problems, test cases verify if the code works.
Format: Using the <think> and <answer> tags properly. This forces the AI to organize its responses clearly.
So in this case, the training program can extract the model's answer by parsing the <answer> tag. We can then evaluate whether the answer is correct: if it is, give a reward; otherwise, no reward.
Sample N such answers from a single question and you get an array of N rewards. That is enough signal for the RL algorithm to guide the model to get smarter.
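A rough sketch of what such a rule-based reward function could look like (the tag checks, the exact-match comparison, and the 0.1/1.0 weights are my own assumptions for illustration, not the actual training code):

```go
// Hypothetical reward function in the spirit of what's described above.
// The tag checks, exact string match, and the 0.1/1.0 weights are assumptions.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var answerRe = regexp.MustCompile(`(?s)<answer>(.*?)</answer>`)

// reward combines a format reward (are the <think>/<answer> tags present?)
// with an accuracy reward (does the extracted answer match the reference?).
func reward(completion, reference string) float64 {
	r := 0.0
	if strings.Contains(completion, "<think>") && answerRe.MatchString(completion) {
		r += 0.1 // format reward (assumed weight)
	}
	if m := answerRe.FindStringSubmatch(completion); m != nil {
		if strings.TrimSpace(m[1]) == strings.TrimSpace(reference) {
			r += 1.0 // accuracy reward for a matching final answer
		}
	}
	return r
}

func main() {
	sample := "<think>2 + 2 = 4</think><answer>4</answer>"
	fmt.Println(reward(sample, "4")) // 1.1
}
```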
I've been trying to follow the literature on PPO/GRPO as applied to LLMs. From what I understand, since the reward is only given once the entire CoT sequence is sampled, traditional RL techniques would require some form of credit assignment to distribute that reward amongst individual tokens, which is where the critic/value network comes in, right?
Instead, DeepSeek (with GRPO) seems to just omit that value function entirely and use only sparse rewards. How does this end up being more efficient, since I thought the sparse nature of rewards makes it harder to converge to the optimal policy?
I don't think it's only using sparse rewards, because of the format rewards. The training recipe is pretty comprehensive and involves multiple stages.[1] The paper mentions that when only using the RL technique, the output is often not suitable for reading (language mixing, etc.). That feels like an AlphaZero moment for LLMs?
The R1 paper says that they didn't use "process reward modeling". And the paper that introduced GRPO says that it can be used either with "outcome supervision" or "process supervision", with outcome supervision "only provid[ing] a reward at the end of each output". Put together, doesn't that imply R1 uses sparse rewards provided only at the end of the CoT sequence?
Ah sorry, you might be right. I meant "sparse reward" as a reward system that is mostly 0 but occasionally 1. Your "sparse reward" means only providing reward at the end of each output.
I think the reward is relative to other sampled answers for the same question. This way the signal is strongest at the very margin of what is possible with a given model, and there is less noise from questions that are impossible or too easy.
There is some confusion here: they do compute that simple reward, but then they convert it to a relative value and call it the advantage. And I think they use that advantage to update the model, not the raw reward.
Yes, you're right. In their paper I think they say the process of sampling multiple traces and then taking relative rewards is supposed to Monte Carlo-approximate the value network? I don't really have the intuition for that, but it does make sense that rather than simply nudging probabilities in the direction of the trace with the highest absolute reward, you want to favor the trace which had the best reward relative to the current state. For quick intuition: if absolute rewards for traces were {0, 0, 0, 0.01}, then using absolute rewards would only give a weak signal (nudging weights proportional to 0.01 * logprob) for the last trace, but using relative rewards (based on the z-score) gives a much stronger signal of about 1.5 * logprob.
Not only that: if you have {0, 0, 0, 0.01}, then the probability that you would get any reward in one shot would be very low. I also have the intuition that giving rewards to traces at the edge is more efficient, because the model needs only a small perturbation to get them right. If you gave negative rewards to traces that are very far from being right, then the model might be steered in a wrong direction.
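A minimal illustration of that group-relative advantage computation (my own sketch, not the repo's code; I'm assuming the sample standard deviation with N-1, which reproduces the 1.5 figure above, plus a small epsilon to avoid dividing by zero when all rewards in a group are identical):

```go
// Illustration of normalizing a group of rewards for the same question by
// the group's mean and standard deviation (sample std, N-1, is an assumption).
package main

import (
	"fmt"
	"math"
)

func advantages(rewards []float64) []float64 {
	n := float64(len(rewards))
	mean := 0.0
	for _, r := range rewards {
		mean += r
	}
	mean /= n

	variance := 0.0
	for _, r := range rewards {
		variance += (r - mean) * (r - mean)
	}
	std := math.Sqrt(variance / (n - 1))

	out := make([]float64, len(rewards))
	for i, r := range rewards {
		out[i] = (r - mean) / (std + 1e-8) // epsilon avoids division by zero
	}
	return out
}

func main() {
	fmt.Println(advantages([]float64{0, 0, 0, 0.01})) // ≈ [-0.5 -0.5 -0.5 1.5]
}
```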
The part I found strange: these RL formulations give no reward for incorrect solutions, so unless there are training examples that are easy enough for the base model to solve, the RL process won’t do anything.
So is the actual magic that the base models are good enough to sometimes generate successful CoT output in their unmodified state? Or did I miss something in the R1 paper and the code here?
I think this is where the relative rewards come into play: they sample many thinking traces and reward those that are correct. This works at the current 'cutting edge' for the model, exactly where it could be improved.
I was wondering the same thing. I feel there is too large a gap between a raw base model and a model that produces fully correct answers and follows a specific format. My guess is their rule-based reward system is more nuanced than just correctness and format.
Yeah, I find this part not clearly expressed as well. My best guess is that it's not simply binary "correct/incorrect"; rather, the reward is made up of multiple parts (e.g. format + correctness) and structured in a way such that "close enough" answers still get some reward. From there I would expect that a base model might at least be able to "autocomplete" the format/style, at which point the RL machinery would kick in to tune it to properly obey the format, and once that's mastered, eventually correctness.
They did mention something about tuning on an un-SFT'd base model being much slower than first 'warming it up' with some existing reasoning traces.
We're still in the process of thinking through and fleshing out full details for remote MCP connections. This is definitely a good idea to include in the mix!
PSA: You can also use singleflight[1] to solve this; it prevents the thundering herd problem. Pocache is an interesting alternative way to solve the thundering herd indeed!
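For reference, a minimal use of singleflight to collapse concurrent lookups for the same key into a single backing call (loadFromDB and the key are placeholders standing in for whatever expensive fetch you're protecting):

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

func loadFromDB(key string) (string, error) {
	fmt.Println("hitting the database for", key) // typically printed once
	time.Sleep(50 * time.Millisecond)            // simulate a slow query
	return "value-for-" + key, nil
}

// get routes all concurrent requests for the same key through one call.
func get(key string) (string, error) {
	v, err, _ := group.Do(key, func() (interface{}, error) {
		return loadFromDB(key)
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v, _ := get("user:42")
			_ = v
		}()
	}
	wg.Wait()
}
```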
I'm confused by the decision in DoChan to return a channel (instead of accepting one supplied by the caller) and then, given that, also not to close that channel (is something else going to be sent to the channel in the future?). Both seem like strange/unnecessary design decisions.
Returning a channel avoids questions of what happens if sending to a caller-supplied channel blocks. DoChan returns a channel with a single-element buffer, so a single send to the channel will always succeed without blocking, even if the caller has lost interest in the result and discarded the channel.
DoChan doesn't close the channel because there isn't any reason to do so.
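To make the buffered-channel point concrete, here's roughly how a caller can abandon a DoChan result on a timeout; because the returned channel has a one-element buffer, singleflight's eventual send still succeeds even though nobody is reading anymore (the slow function is just a stand-in):

```go
package main

import (
	"fmt"
	"time"

	"golang.org/x/sync/singleflight"
)

func main() {
	var g singleflight.Group

	ch := g.DoChan("slow-key", func() (interface{}, error) {
		time.Sleep(2 * time.Second) // stand-in for a slow backing call
		return "late result", nil
	})

	select {
	case res := <-ch:
		fmt.Println("got:", res.Val, res.Err)
	case <-time.After(100 * time.Millisecond):
		// We stop waiting; the buffered channel absorbs the later send.
		fmt.Println("timed out, abandoning the result")
	}
}
```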
A non-blocking send would work just as well for that issue, is a standard part of the language, and would support user-supplied channels, but it would still be at risk of panicking when sending to a closed channel. I think there ought to be a safe way to send to a closed channel, but the language authors disagree, so that's not really on the library authors (though they could still recover from the panic).
However, not closing the channel you specifically chose to control all sending to is just lazy/rude. Even though the caller should receive from the channel once and then forget about it, closing the channel after sending would prevent incorrect subsequent receives from hanging forever.
All this having been said, contributing to these libraries seems better than complaining about them, but I don't know how the golang.org/x stuff is maintained; looks like this one is here: https://github.com/golang/sync
Closing the channel is pointless. I don't understand why people get obsessive about closing channels.
It's not needed by the garbage collector, and it's not a matter of good practice. It's explicitly called out in the official Go guide as unnecessary most of the time. [0]
If you have a channel that is only used a single time and then discarded, closing it is literally just wasting CPU cycles.
And definitely not "lazy/rude".
I illustrated why closing the channel is beneficial: the consumer of the channel may not be using it properly. Reading the unclosed channel more than once will hang. A stuck goroutine is rarely desirable. The cost of closing a channel is similar to the cost of bounds checking; it may not be free, but it's usually worth it. Agreed that this has no benefit to the garbage collector. I also think this is a pretty clear example of when you should close a channel, as pointed out by the Tour: to inform the consumer that no more values will ever be forthcoming.
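Toy demo of that last point, independent of singleflight: with close(ch), a second (incorrect) receive returns immediately with the zero value and ok == false; without it, that second receive would block forever.

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 1)
	ch <- 42
	close(ch) // signal that no more values will ever arrive

	v, ok := <-ch
	fmt.Println(v, ok) // 42 true

	v, ok = <-ch
	fmt.Println(v, ok) // 0 false, instead of hanging
}
```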
A non-blocking send doesn't work in this case. Consider: User provides DoChan an unbuffered channel, and then reads a value from it. If the send is nonblocking and occurs before the user reads from the channel, the value is lost.
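A tiny demonstration of that failure mode with a generic non-blocking send (not singleflight code): with an unbuffered caller-supplied channel and no receiver already waiting, the value is silently dropped, while a one-element buffer absorbs it.

```go
package main

import "fmt"

// nonBlockingSend tries to deliver v on ch and reports whether it succeeded.
func nonBlockingSend(ch chan string, v string) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func main() {
	unbuffered := make(chan string)
	fmt.Println(nonBlockingSend(unbuffered, "result")) // false: value lost

	buffered := make(chan string, 1)
	fmt.Println(nonBlockingSend(buffered, "result")) // true: buffer holds it
}
```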
Thank you for the recommendation, it was a good read as well. I could even use it to replace how I'm handling the call suppression/debounce mechanism. Though I think Pocache does one extra thing, which is to keep the cache updated before it expires, i.e. for keys which are frequently fetched it would always serve up-to-date data from the cache. If we only relied on call suppression, then the concurrent requests would just have to wait during the update stage, or the read-through mechanism would keep hitting the main database.
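For anyone curious, the "refresh before expiry" idea looks roughly like this (a generic sketch of the technique, not Pocache's actual API; the type names and TTL/window numbers are made up):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type entry struct {
	value     string
	expiresAt time.Time
}

type preemptiveCache struct {
	mu     sync.Mutex
	data   map[string]entry
	ttl    time.Duration
	window time.Duration // how long before expiry we start refreshing
	load   func(string) string
}

// Get serves from cache; if the entry is inside the refresh window it kicks
// off a background reload, so hot keys are updated before they ever expire.
func (c *preemptiveCache) Get(key string) string {
	c.mu.Lock()
	e, ok := c.data[key]
	c.mu.Unlock()

	if !ok || time.Now().After(e.expiresAt) {
		return c.refresh(key) // cache miss or expired: load synchronously
	}
	if time.Until(e.expiresAt) < c.window {
		go c.refresh(key) // still fresh: serve it, refresh in the background
	}
	return e.value
}

func (c *preemptiveCache) refresh(key string) string {
	v := c.load(key) // in practice this would be wrapped in singleflight
	c.mu.Lock()
	c.data[key] = entry{value: v, expiresAt: time.Now().Add(c.ttl)}
	c.mu.Unlock()
	return v
}

func main() {
	c := &preemptiveCache{
		data:   map[string]entry{},
		ttl:    time.Minute,
		window: 10 * time.Second,
		load:   func(k string) string { return "fresh:" + k },
	}
	fmt.Println(c.Get("user:42"))
}
```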
Actually, llama.cpp running on Apple silicon uses the GPU (Metal compute shaders) to run LLM inference. Token generation is also very memory-bandwidth bottlenecked. On high-end Apple silicon the bandwidth is about 400 GB/s to 800 GB/s, comparable to an NVIDIA RTX 4090, which has a memory bandwidth of about 1000 GB/s. Not to mention that Apple silicon has a unified memory architecture and is available in high-memory configurations (128 GB, up to 192 GB), which is necessary to run large LLMs like Llama 3 70B, which takes roughly 40~75 GB of RAM to work reasonably.
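Back-of-envelope for why it's bandwidth-bound: each generated token has to stream roughly the whole set of weights through memory once, so tokens/sec is capped near bandwidth divided by model size (illustrative numbers, not benchmarks):

```go
package main

import "fmt"

func main() {
	const bandwidthGBs = 800.0 // e.g. peak bandwidth of a high-end Apple SoC
	const weightsGB = 40.0     // e.g. Llama 3 70B at ~4-bit quantization

	// Rough upper bound: one full pass over the weights per generated token.
	fmt.Printf("~%.0f tokens/s upper bound\n", bandwidthGBs/weightsGB) // ~20
}
```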
The number of people running Llama 3 70B on NVIDIA gaming GPUs is absolutely tiny. You're going to need at least two of the highest-end 24 GB VRAM GPUs, and even then you are still reliant on 4-bit quantization with almost nothing left for your context window.
To a first approximation, Kompute[1] is that. It doesn't seem to be catching on, though; I'm seeing more buzz around WebGPU solutions, including wonnx[2] and more hand-rolled approaches, and IREE[3], the latter of which has a Vulkan back-end.