by rognjen on 2/9/25, 9:23 PM with 14 comments
by iknownothow on 2/10/25, 9:19 AM
I follow the R&D and progress in this space, and I haven't heard anyone make a fuss about it. They are all LLMs or transformers or neural nets, but they can be trained or optimized to do different things. Sure, there are terms like Reasoning models or Chat models or Instruct models, and yes, they're all LLMs.
But you can now start combining them to get hybrid models too. Are Omni models that handle audio and visual data still "language" models? This question is interesting in its own right for many reasons, but not to justify or bemoan the use of the term LLM.
LLM is a good term, it's a cultural term too. If you start getting pedantic, you'll miss the bigger picture and possibly even the singularity ;)
by janalsncm on 2/10/25, 11:38 PM
Every time you see “X is just Y” you should think of emergent behaviors. Complexity is difficult to predict.
> R1 Zero has similar reasoning capabilities of R1 without requiring any SFT
In fact, R1 Zero was slightly better. This is an argument that RL and thinking tokens were a genuinely useful technique, which I see as counter to the author's thesis.
I also think a lot of what the author is referring to was, more generously, arguing against next-token prediction (exact match of each token) rather than against the sequence-level rewards used in R1.
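To make the distinction above concrete, here is a minimal sketch of the two training signals being contrasted. This is not DeepSeek's actual code; the function names, the logprob representation, and the exact-match reward rule are illustrative assumptions.

```python
def next_token_loss(logprobs, target_tokens):
    """Supervised next-token prediction: every token position contributes
    to the loss, so the model is graded on matching each target token."""
    total = -sum(logprobs[t][tok] for t, tok in enumerate(target_tokens))
    return total / len(target_tokens)

def sequence_reward(generated_answer, reference_answer):
    """Sequence-level reward (the R1-style verifiable-reward idea): the whole
    sampled completion receives one scalar, e.g. 1.0 only if the final
    answer checks out, regardless of how the tokens got there."""
    return 1.0 if generated_answer.strip() == reference_answer.strip() else 0.0
```

The key difference is where the gradient signal comes from: the first grades every intermediate token against a reference sequence, while the second lets the model produce any chain of thought and only scores the outcome.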
by jmor23 on 2/10/25, 11:16 PM
“The architecture of the DeepSeek SYSTEM includes a model, and RL architecture that leverages symbolic rule.”
Marcus has long been a critic of deep learning and LLMs, saying they would “hit a wall”.
by throwaway314155 on 2/10/25, 6:51 PM
Would be nice if the author could cite even one example of this as it doesn't match my experience whatsoever.
by aaroninsf on 2/10/25, 6:28 PM
* applies only to meatreaders