by Thomashuet on 12/6/23, 12:35 PM with 7 comments
by IshanMi on 12/6/23, 9:34 PM
The SAM paper from this past April (which enabled zero-shot segmentation on any image, seemingly better even than OpenAI's CLIP) used a ~600M-parameter ViT model to generate image embeddings. And to make generating those same embeddings less computationally expensive, they replaced that model with a smaller ViT encoder that was pre-trained with a masked-autoencoder objective?
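For intuition, here is a minimal PyTorch sketch of the kind of MAE-style pretraining the comment describes: mask most patch tokens, encode only the visible ones, and train a lightweight decoder to reconstruct the masked content. Everything here (class names, dimensions, the pixel-reconstruction target) is an illustrative assumption, not the actual SAM or EfficientSAM code; the real method also uses positional embeddings, computes the loss only on masked patches, and reportedly reconstructs the large SAM encoder's features rather than raw pixels.

    # Hypothetical MAE-style pretraining sketch; names and sizes are illustrative.
    import torch
    import torch.nn as nn

    class TinyMAE(nn.Module):
        def __init__(self, num_patches=196, dim=192, mask_ratio=0.75):
            super().__init__()
            self.mask_ratio = mask_ratio
            self.patch_embed = nn.Linear(16 * 16 * 3, dim)   # flattened 16x16 RGB patches
            self.encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=4)
            self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
            self.decoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
            self.head = nn.Linear(dim, 16 * 16 * 3)          # reconstruct raw pixels

        def forward(self, patches):                          # patches: (B, N, 768)
            B, N, _ = patches.shape
            tokens = self.patch_embed(patches)
            keep = int(N * (1 - self.mask_ratio))
            perm = torch.rand(B, N, device=patches.device).argsort(dim=1)
            visible_idx = perm[:, :keep]                     # random subset stays visible
            visible = torch.gather(
                tokens, 1, visible_idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
            encoded = self.encoder(visible)                  # encode only visible patches
            # Scatter encoded tokens back; fill masked positions with the mask token.
            full = self.mask_token.expand(B, N, -1).clone()
            full.scatter_(
                1, visible_idx.unsqueeze(-1).expand(-1, -1, full.size(-1)), encoded)
            recon = self.head(self.decoder(full))
            # Simplified: real MAE computes the loss on masked patches only.
            return nn.functional.mse_loss(recon, patches)

    model = TinyMAE()
    dummy = torch.randn(2, 196, 16 * 16 * 3)                 # 2 images as flattened patches
    loss = model(dummy)
    loss.backward()

The efficiency win comes from the encoder only attending over the ~25% visible tokens during pretraining, and from the resulting small encoder replacing the ~600M-parameter one at inference time.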