by gklitt on 8/29/24, 5:32 PM with 22 comments
by shazami on 8/29/24, 7:53 PM
by dinobones on 8/29/24, 7:54 PM
A 100M-token context window means it can probably store everything you've ever told it for years.
Couple this with multimodal capabilities, like a robot encoding vision and audio into tokens, and you can get autonomous assistants that learn your house/habits/chores really quickly. Some rough arithmetic on that token budget is sketched below.
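For scale, here is a back-of-envelope token-budget sketch. Every rate in it is an illustrative assumption of mine, not a figure from the announcement: at text-chat rates a 100M-token window really can hold years of history, while a dense vision+audio stream fills it in days.

    # Back-of-envelope budgets for a 100M-token context window.
    # All rates below are illustrative assumptions, not figures from the post.

    CONTEXT_TOKENS = 100_000_000

    # Text-only chat: assume ~10k tokens of conversation per day.
    CHAT_TOKENS_PER_DAY = 10_000
    years = CONTEXT_TOKENS / CHAT_TOKENS_PER_DAY / 365
    print(f"chat history: ~{years:.0f} years")  # ~27 years

    # Continuous multimodal stream, with assumed coarse tokenizers.
    AUDIO_TOKENS_PER_SEC = 10      # hypothetical speech/audio tokenizer rate
    VISION_TOKENS_PER_FRAME = 256  # hypothetical image patch tokens
    FRAMES_PER_SEC = 1             # sparse visual sampling
    stream_rate = AUDIO_TOKENS_PER_SEC + VISION_TOKENS_PER_FRAME * FRAMES_PER_SEC
    hours = CONTEXT_TOKENS / stream_rate / 3600
    print(f"audio+vision stream: ~{hours:.0f} hours")  # ~104 hours

So the "for years" claim holds for text interaction, while a robot streaming vision and audio would need aggressive compression or summarization to stretch the same window.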
by smusamashah on 8/29/24, 6:42 PM
1: https://github.com/hsiehjackson/RULER (RULER: What’s the Real Context Size of Your Long-Context Language Models)
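RULER's point is that a model's advertised context length often exceeds the length at which retrieval actually stays reliable. Below is a minimal sketch of the needle-in-a-haystack style probe that RULER generalizes; it assumes a hypothetical query_model(prompt) call and is not RULER's actual harness.

    import random

    def make_haystack(n_filler: int, needle: str, seed: int = 0) -> str:
        """Bury one fact (the 'needle') at a random depth in filler text."""
        random.seed(seed)
        filler = ["The sky was clear and the grass was green."] * n_filler
        filler.insert(random.randrange(n_filler), needle)
        return " ".join(filler)

    def probe(query_model, lengths=(1_000, 10_000, 100_000)):
        """Check whether the model still retrieves the needle as context grows."""
        needle = "The magic number is 7481."
        for n in lengths:
            prompt = make_haystack(n, needle) + "\n\nWhat is the magic number?"
            answer = query_model(prompt)  # hypothetical model call
            print(n, "filler sentences:", "7481" in answer)

Plotting retrieval accuracy against haystack length gives an effective context size, which is what RULER reports across a wider set of tasks.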
by fsndz on 8/29/24, 8:01 PM
by Sakos on 8/29/24, 8:00 PM
> We’ve raised a total of $465M, including a recent investment of $320 million from new investors Eric Schmidt, Jane Street, Sequoia, Atlassian, among others, and existing investors Nat Friedman & Daniel Gross, Elad Gil, and CapitalG.
Yeah, I guess that'd do it. Who are these people and how'd they convince them to invest that much?
by anonzzzies on 8/30/24, 2:17 PM
by samber on 8/29/24, 6:48 PM
by htrp on 8/29/24, 9:45 PM