from Hacker News

A Revolution in How Robots Learn

by jsomers on 11/26/24, 12:08 PM with 21 comments

  • by m_ke on 11/26/24, 8:50 PM

    I did a review of the state of the art in robotics recently, in prep for some job interviews, and the stack is the same as for every other ML problem these days: take a large pretrained multimodal model and do supervised fine-tuning on your domain data.

    In this case it's "VLA", as in Vision-Language-Action models, where a multimodal decoder predicts action tokens. "Behavior cloning" is a fancy made-up term for supervised learning, coined because the RL people can't bring themselves to admit that supervised learning works far better than reinforcement learning in the real world.
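
    To make "behavior cloning is just supervised learning" concrete, here is a minimal PyTorch-style training loop; the VLA loader and the teleoperation dataset are hypothetical stand-ins, not any particular lab's API:

        import torch
        import torch.nn.functional as F

        # pretrained multimodal decoder whose output vocabulary includes
        # discretized action tokens alongside ordinary text tokens
        model = load_pretrained_vla()  # hypothetical loader
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

        # teleop_demos yields (camera images, text instruction, action token ids)
        for images, instruction, action_tokens in teleop_demos:
            logits = model(images, instruction)         # (num_steps, vocab_size)
            # plain supervised loss: predict the demonstrated action tokens
            loss = F.cross_entropy(logits, action_tokens)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()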

    Proper imitation learning, where a robot learns from a third-person view of humans doing things, does not work yet, but some people in the field like to pretend that teleoperation plus "behavior cloning" is a form of imitation learning.

  • by drcwpl on 11/27/24, 2:31 PM

    One particularly fascinating aspect of this essay is the comparison between human motor learning and robotic dexterity development, notably the concept of “motor babbling.” The author highlights how babies use seemingly random movements to calibrate their brains with their bodies, drawing a parallel to how robots are being trained to achieve precise physical tasks. This framing makes the complexity of robotic learning, such as a robot tying shoelaces or threading a needle, more relatable and underscores the immense challenge of replicating human physical intelligence in machines. For me it is also a vivid reminder of how much we take our own physical adaptability for granted.
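
    The motor-babbling idea is easy to sketch: issue random motor commands, record what the body's sensors report, and fit a forward model from commands to outcomes. A toy Python sketch (the robot interface is a hypothetical placeholder):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # babble: random joint commands paired with observed sensory outcomes
        actions, outcomes = [], []
        for _ in range(1000):
            a = np.random.uniform(-1.0, 1.0, size=7)   # random 7-DoF command
            actions.append(a)
            outcomes.append(robot.execute(a))          # hypothetical robot API

        # calibrate: learn to predict the consequence of a motor command
        forward_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
        forward_model.fit(np.array(actions), np.array(outcomes))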

  • by ratedgene on 11/26/24, 10:25 PM

    Hey, I wonder if we can use LLMs to learn learning patterns. I'd guess the bottleneck would be the curse of dimensionality when it comes to real-world problems, but maybe (correct me if I'm wrong) geographic/domain-specific attention networks could be used.

    Maybe it's like:

    1. Intention, context
    2. Attention scanning for components
    3. Attention network discovery
    4. Rescan for missing components
    5. If no relevant context exists or found
    6. Learned parameters are initially greedy
    7. Storage of parameters gets reduced over time by other contributors

    I guess this relies on the tough parts: induction, deduction, and abductive reasoning.

    Can we fake reasoning to test hypotheses that alter the weights of whatever model we use for reasoning?

  • by Animats on 11/26/24, 7:15 PM

    A research result that's been reported before, but, as usual, the New Yorker has better writers.

    Is there something that shows what the tokens they use look like?

  • by josefritzishere on 11/26/24, 10:02 PM

    There's a big asterisk on the word "learn" in that headline.

  • by codr7 on 11/26/24, 8:21 PM

    Oh my, that has to be one of the worst jobs ever invented.

  • by nobodywillobsrv on 11/27/24, 8:16 AM

    Anyone find it suspicious that all these paywalled legacy-media tech fluff articles keep ending up on HN? Feels like an op. Who in tech actually reads the NYT, for example?