by reqo on 2/7/25, 2:44 AM with 62 comments
by markisus on 2/7/25, 5:44 PM
- All simulated agents use the same neural net with the same weights, albeit with randomized rewards and a conditioning vector that let them behave as different types of vehicles with different levels of aggressiveness. It's like driving in a world where everyone is a different copy of you, but some of your copies are in a rush while others are patient. This allows backprop to optimize for a sort of global utility across the entire population (a rough sketch of this setup follows the list).
- There is no modeling of occlusion effects. Instead, agents are given the state of nearby agents, but corrupted by random noise. In the real world, occluded nearby agents can be extremely close (think about a child running out from behind a parked car). The paper comments on this.
> Both Waymax and nuPlan construct observations, maps, and other actors with auto-labeling tools from real-world perception data. This brings occlusion, incorrect or missing traffic-light states, and obstacles revealed at the last moment. Despite the minimalistic noise modeling in GIGAFLOW, the GIGAFLOW policy generalizes zero-shot to these conditions.
- The resulting policy simulates agents that are human-like, even though the system has never seen humans drive. This is a great result when you consider that other reinforcement learning projects have produced extremely high-performing agents whose behavior humans would consider abusive or pathological.
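To make the shared-weights-plus-conditioning setup concrete, here is a minimal sketch in Python/numpy. Everything in it is an illustrative assumption: the dimensions, the noise scale, the tanh MLP, and names like "policy" and "conds" are mine, not the paper's, which trains a far larger network with reinforcement learning.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes; the real GIGAFLOW network is much larger.
    OBS_DIM, COND_DIM, ACT_DIM, HIDDEN = 32, 8, 2, 64

    # One shared set of weights used by every agent in the simulation.
    W1 = rng.normal(0, 0.1, (OBS_DIM + COND_DIM, HIDDEN))
    W2 = rng.normal(0, 0.1, (HIDDEN, ACT_DIM))

    def policy(obs, cond):
        # Shared policy: behavior differs only through the conditioning vector.
        h = np.tanh(np.concatenate([obs, cond]) @ W1)
        return np.tanh(h @ W2)  # e.g. [steering, acceleration]

    n_agents = 4
    # Each agent draws its own conditioning vector (standing in for the
    # randomized reward weights / aggressiveness described above).
    conds = rng.normal(size=(n_agents, COND_DIM))

    # In place of occlusion modeling, nearby-agent state is corrupted by noise.
    true_obs = rng.normal(size=(n_agents, OBS_DIM))
    noisy_obs = true_obs + rng.normal(0, 0.1, size=true_obs.shape)

    actions = np.stack([policy(o, c) for o, c in zip(noisy_obs, conds)])
    print(actions.shape)  # (4, 2): one action per agent, same weights for all

The point is that behavioral diversity comes entirely from the per-agent conditioning vector; W1 and W2 are identical for every agent, so a single gradient update improves the whole simulated population at once.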
by mitthrowaway2 on 2/7/25, 4:55 AM
This isn't fake surprise. Sometimes I'll wake up and think, "who on earth were those guys and what were they trying to do? And yet their actions make sense..." or, "who came up with that punchline? It's legitimately funny and I never saw it coming, so it can't have been me..."
And yet I know it's all being generated by my own brain somehow, through some kind of privileged access level.
And then I think about the bicameral brain structure. Does our brain have two halves so that it can function in a self-play training mode during sleep? Is each half of my brain experiencing the same dream from opposite points of view?
Apologies for the tangent; this is almost totally unrelated to the article and probably something well known to neuroscience for decades. But still, it fascinates me, and the more we learn about the effectiveness of self-play in AI, the more I wonder.