from Hacker News

Paper claims perfect accuracy on image recognition datasets

by bicsi on 4/20/22, 11:53 AM with 4 comments

  • by version_five on 4/20/22, 12:02 PM

    At face value I'm suspicious. I thought there were genuine ambiguities or label errors in some of those datasets, which makes it very surprising that you could even really define 100% accuracy.

    Also, reading the paper a bit, it's either badly written and I just don't understand what they're saying at all, or it's BS. It doesn't really explain anything about their implementation; it just says they did it and got 100% accuracy, and throws in a bunch of jargon. Maybe I'm just not familiar enough with this area, but the way it's laid out raises even more red flags.

  • by bicsi on 4/20/22, 12:09 PM

    Also, can anybody share a more informal, high-level intuition about what this ‘Learning with signatures’ approach is about? It seems to be a rather recent topic in learning (the paper cites 2019+ publications).

  • by hprotagonist on 4/20/22, 12:07 PM

    ok but AFHQ, Four Shapes, MNIST and CIFAR10 are baby datasets; do this on COCO or Pascal VOC or ImageNet...