from Hacker News

VQA and Image Chunking for MLLMs (GPT-4V and Gemini)

by joanfihu on 2/20/24, 11:28 PM with 1 comments

  • by joanfihu on 2/20/24, 11:30 PM

    Handling large images with resolutions higher than 2048x2048 remains a challenge for MLLMs.

    Image chunking splits large images into multiple smaller images. Then the model processes each image and uses information from all of them to answer a question.
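    The chunking step can be sketched as computing overlapping crop boxes over the full image; the tile size and overlap below are illustrative assumptions, not values from the post:

    ```python
    def tile_boxes(width, height, tile=1024, overlap=64):
        """Compute (left, top, right, bottom) crop boxes covering an image.

        Adjacent tiles overlap by `overlap` px so content that falls on a
        chunk boundary appears whole in at least one tile. `tile` and
        `overlap` are hypothetical defaults chosen for illustration.
        """
        step = tile - overlap
        boxes = []
        for top in range(0, max(height - overlap, 1), step):
            for left in range(0, max(width - overlap, 1), step):
                boxes.append((left, top,
                              min(left + tile, width),   # clamp at right edge
                              min(top + tile, height)))  # clamp at bottom edge
        return boxes

    # Boxes for the screenshot size mentioned in the post (3840 x 20294 px).
    # Each box could then be cropped (e.g. PIL's Image.crop) and sent to the
    # model alongside the question.
    boxes = tile_boxes(3840, 20294)
    ```

    Overlap is a design choice: without it, text or figure rows cut exactly on a tile edge would be split across two chunks and might be misread by the model.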

    I managed to get it to work with a 3840 x 20294 px web page screenshot. It also works with documents that contain figures (tables, charts, illustrations, etc.).