by kellegous on 6/28/13, 2:35 PM with 26 comments
by sp332 on 6/28/13, 2:42 PM
by nod on 6/28/13, 5:32 PM
Worked successfully in Windows CMD for me, without using the \bin shell script:
    cd C:\mihaip-readerisdead
    set PYTHONPATH=C:\mihaip-readerisdead
    C:\path-to-py27\python.exe reader_archive\reader_archive.py --output-directory C:\mystuff
Locked up at 251K out of 253K items for me, though. Restarting... success! Looks like it might have locked up trying to start the "Fetching comments" section on my first try.
by ccera on 6/29/13, 2:31 AM
I didn't read the instructions too well, so the half hour I spent carefully deleting gigantic/uninteresting feeds out of my subscriptions.xml file was all for naught. Because I didn't know I needed to specify the opml_file on the command line, the script just logged into my Reader account (i.e., it walked me through the browser-based authorization process) and downloaded my subscriptions from there -- including all the gigantic/uninteresting subscriptions that I did NOT care to download.
So now I've gone and downloaded 2,592,159 items, consuming 13 GB of space.
I'm NOT complaining -- I actually think it's AWESOME that this is possible -- but if you don't want to download millions of items, be sure to read the instructions and use the opml_file directive.
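For reference, the invocation that skips the account-wide fetch looks something like this — a sketch assuming the flag is spelled --opml_file as described above (check reader_archive.py --help for the exact name):

    # feed the tool your pruned subscriptions.xml instead of letting it
    # pull the full subscription list from the Reader account
    python reader_archive/reader_archive.py --opml_file subscriptions.xml --output-directory ~/reader-archive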
by Udo on 6/28/13, 3:33 PM
My only gripe would be the tool's inability to continue after a partial run, but since I won't be using this more than once that's probably OK.
All web services should have a handy CLI extraction tool, preferably one that can be run from a cron job. On that note, I'm very happy with gm_vault, as well.
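For example, a minimal crontab sketch — the schedule and address are placeholders, and it assumes gmvault's quick incremental sync mode:

    # hypothetical schedule: quick incremental Gmail backup every night at 3am
    0 3 * * * gmvault sync -t quick user@example.com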
Edit: getting a lot of XML parse errors, by the way.
by DecoPerson on 6/28/13, 7:13 PM
Should we be concerned with errors like this?
    [W 130629 03:11:54 api:254] Requested item id tag:google.com,2005:reader/item/afe90dad8acde78b (-5771066408489326709), but it was not found in the result
I'm getting ~1-2 per "Fetch N/M item bodies" line.
by pixsmith on 7/2/13, 6:27 AM
Is there some way to avoid all the years of explore and suggested items with reader_archive? I tried limiting the maximum number of items to 10,000, but it was still running and growing after 12 hours. It's interesting, though, what it was able to accomplish in that time.
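For concreteness, the shape of the run I mean — the flag name here is my guess, not necessarily the tool's real spelling, so check reader_archive.py --help:

    # hypothetical flag name; consult --help for the actual option
    python reader_archive/reader_archive.py --max_items_per_stream 10000 --output-directory ~/reader-archive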
by skilesare on 6/28/13, 3:33 PM
Thank you. mihaip, if you are ever in Houston I will buy you a beer and/or a steak dinner.
by dmtelf on 6/30/13, 8:35 AM
echo %PYTHONPATH% gives c:\readerisdead
I copied 'base' from the readerisdead zip file to c:\python27\lib and also copied the base folder into the same folder as reader_archive.py. Running

    C:\readerisdead\reader_archive\reader_archive.py --output-directory C:\googlereader

gives "ImportError: No module named site".
What am I doing wrong? How can I get this to work?
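For completeness, this is the invocation I'd expect to work, mirroring nod's recipe above — assuming Python 2.7 is installed at C:\python27 (adjust the path to your install):

    rem point PYTHONPATH at the repo root so the 'base' package imports
    set PYTHONPATH=C:\readerisdead
    rem invoke python.exe explicitly instead of relying on the .py file association
    C:\python27\python.exe C:\readerisdead\reader_archive\reader_archive.py --output-directory C:\googlereader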
by drivebyacct2 on 6/28/13, 6:27 PM