
Ask HN: Best way to keep the raw HTML of scraped pages?

by vitorbaptistaa on 11/11/22, 4:41 PM with 14 comments

I'm scraping information regarding civil servants' calendars. This is all public, text-only information. I'd like to keep a copy of the raw HTML files I'm scraping for historical purposes, and also in case there's a bug and I need to re-run the scrapers.

This sounds like a great use case for a forward proxy like Squid or Apache Traffic Server. However, I couldn't find a way in their docs to do both of the following:

* Keep a permanent history of the cached pages

* Access old versions of the cached pages (think Wayback Machine)

Does anyone know if this is possible? I could potentially mirror the pages using wget or httrack, but a forward cache is a better solution as the caching process is driven by the scraper itself.

Thanks!

  • by mdaniel on 11/11/22, 5:53 PM

    If you weren't already aware, Scrapy has strong support for this via its HTTPCache middleware; you can choose whether to have it actually behave like a cache, returning already-scraped content on a match, or merely act as a pass-through cache: https://docs.scrapy.org/en/2.7/topics/downloader-middleware....

    Their out-of-the-box storage does what the sibling comment describes: it SHA-1s the request and then shards the output path by the first two characters of the hash: https://github.com/scrapy/scrapy/blob/2.7.1/scrapy/extension...
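
    A rough sketch of the relevant settings.py entries (values are illustrative, not a drop-in config):

        # Enable the HTTP cache middleware and keep entries forever.
        HTTPCACHE_ENABLED = True
        HTTPCACHE_EXPIRATION_SECS = 0   # 0 = cached pages never expire
        HTTPCACHE_DIR = "httpcache"     # stored under the project's .scrapy directory
        HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"
        # DummyPolicy caches every response; RFC2616Policy would honor HTTP cache headers instead.
        HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"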

  • by PaulHoule on 11/11/22, 5:05 PM

    Content addressable storage. Generate names with SHA-3, split off bits of the names into directories like

       name[0:2]/name[0:4]/name[0:6]/name
    
    to keep any single directory from getting too big (even if the filesystem can handle huge directories, various tools you use with it might not). Keep the source URL of each file and other metadata in a database so you can find things again.
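
    A minimal Python sketch of that layout (sha3_256 and the sqlite table here are one possible choice, not a prescription):

        import hashlib, pathlib, sqlite3

        def store(content: bytes, url: str, root="archive", db="index.sqlite"):
            # Content-addressed name: hash of the bytes themselves.
            name = hashlib.sha3_256(content).hexdigest()
            # Shard into nested directories so no single directory grows too large.
            path = pathlib.Path(root, name[0:2], name[0:4], name[0:6], name)
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_bytes(content)
            # Record provenance and metadata so the blob can be found later.
            con = sqlite3.connect(db)
            con.execute("CREATE TABLE IF NOT EXISTS pages "
                        "(hash TEXT, url TEXT, fetched_at TEXT DEFAULT CURRENT_TIMESTAMP)")
            con.execute("INSERT INTO pages (hash, url) VALUES (?, ?)", (name, url))
            con.commit()
            con.close()
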
  • by placidpanda on 11/11/22, 6:00 PM

    When doing this in the past, I settled on a SQLite database with one table that stores the compressed HTML (gzip or lzma) along with other columns (id/date/url/domain/status/etc.)

    This also made it easy to alert when something broke (query the table for count(*) where status = 'error') and to rerun the parser for the failures.
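
    A sketch of that schema (column names here are just an example):

        import gzip, sqlite3

        con = sqlite3.connect("scrape.sqlite")
        con.execute("""
            CREATE TABLE IF NOT EXISTS pages (
                id      INTEGER PRIMARY KEY,
                date    TEXT DEFAULT CURRENT_TIMESTAMP,
                url     TEXT,
                domain  TEXT,
                status  TEXT,   -- e.g. 'ok' or 'error'
                html    BLOB    -- gzip-compressed response body
            )""")
        body = b"<html>...</html>"
        con.execute("INSERT INTO pages (url, domain, status, html) VALUES (?, ?, ?, ?)",
                    ("https://example.org/calendar", "example.org", "ok", gzip.compress(body)))
        con.commit()
        # Alerting / re-run query: how many fetches failed?
        failures = con.execute("SELECT count(*) FROM pages WHERE status = 'error'").fetchone()[0]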

  • by compressedgas on 11/11/22, 4:57 PM

    WARC, the Web ARChive format the Wayback Machine itself uses.
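
    If you go that route, one way to write WARC files from Python is the warcio library (a sketch, assuming requests is used for the fetch):

        # pip install warcio requests
        from warcio.capture_http import capture_http
        import requests  # per warcio's docs, import requests after capture_http

        # Every request/response made inside this block is appended to the WARC file.
        with capture_http("calendars.warc.gz"):
            requests.get("https://example.org/calendar")
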
  • by sbricks on 11/11/22, 4:50 PM

    I'd just apply an intelligent file-naming strategy based on timestamps and URLs. Keep in mind that a folder shouldn't contain more than about 1,000 files or subfolders, otherwise it gets slow to list.
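
    One possible naming scheme as a sketch (the layout is purely illustrative):

        import datetime, pathlib, re, urllib.parse

        def page_path(url: str, root="pages") -> pathlib.Path:
            parts = urllib.parse.urlsplit(url)
            # Turn the path and query into a filesystem-safe slug.
            slug = re.sub(r"[^A-Za-z0-9._-]+", "_", parts.path + "_" + parts.query).strip("_") or "index"
            ts = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
            # domain/date/slug-timestamp.html keeps per-folder file counts manageable.
            return pathlib.Path(root, parts.netloc, ts[:8], f"{slug}-{ts}.html")
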
  • by nf-x on 11/11/22, 4:55 PM

    Did you try using cheap cloud storage, like AWS S3?
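
    If S3 fits, a minimal boto3 sketch (the bucket name and key scheme here are hypothetical):

        import datetime, hashlib, boto3

        s3 = boto3.client("s3")

        def archive(url: str, html: bytes, bucket="my-scrape-archive"):
            # Key by URL hash plus timestamp so every fetch of a page is kept.
            ts = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
            key = f"raw/{hashlib.sha256(url.encode()).hexdigest()}/{ts}.html"
            s3.put_object(Bucket=bucket, Key=key, Body=html,
                          ContentType="text/html",
                          Metadata={"source-url": url})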