Well, crap. Was just going back through my old, old, old posts (back beyond the 10,000 limit) and trying to save the pages manually. Then, about 20 pages into the process, I realized that there are still a lot of comments (behind the "[X] more comments" links) that I'm not capturing. Oh well; still better than nothing.
Amir, LB's FF Hubby, Big Joe Silenced, Melly #FForever, Jennifer Dittrich, John (bird whisperer), and Hookuh Tinypants liked this
If you have a *nix install and can pull down Ruby, Victor Ganata's script works a treat. No API limit either. Got all of Melly's posts back to 2006, and Laura's to 2004.
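If you need Ruby first, the setup is roughly this (the script filename and argument below are made-up placeholders; check Victor's thread for the real name and usage):

    # install Ruby (Debian/Ubuntu shown; use your distro's package manager)
    sudo apt-get install ruby
    # hypothetical invocation -- the actual script name/arguments may differ
    ruby ff_archive.rb your_username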
- Benny Bucko (Josh)
Micah's script captures the comments too, so it has that benefit, but it's limited by the API. Victor's isn't API-limited because it takes a different approach (it saves the pages themselves), though on posts with a lot of comments you only get the collapsed view.
- Benny Bucko (Josh)
Thanks, Benny. I might see if I can get a Ruby instance running on a *nix box.
- COMPLICATED MR. NOODLE
Looks like it's running on my MacBook. I'll leave it for a while and see how it goes. Thanks for the rec.
- COMPLICATED MR. NOODLE
No worries, mate. If it looks like it's just looping through the same thing, occasionally saving an index*.html file, then you're on your way.
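If you want to confirm it's actually making progress, you can count the saved pages from the output directory (watch is a Linux utility; on a Mac just re-run the ls by hand):

    # recount the saved pages every 10 seconds
    watch -n 10 'ls index*.html | wc -l'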
- Benny Bucko (Josh)
Only thing to note is that (at least on Linux) the older/newer post links you usually find at the top/bottom of each FriendFeed page don't link correctly in the output index*.html files, because there's one "../" too many in the path. Easy fix though using sed in a terminal, sketched below. The output files render perfectly otherwise, sans comment expansion.
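For instance, if the broken links start with "../../" where they should start with "../" (an untested guess on my part; check what your files actually contain before running anything with -i):

    # collapse the doubled "../" at the start of each href (GNU sed, Linux)
    sed -i 's|href="\.\./\.\./|href="../|g' index*.html
    # BSD sed on a Mac needs an explicit empty backup suffix for -i
    sed -i '' 's|href="\.\./\.\./|href="../|g' index*.html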
- Benny Bucko (Josh)
Watch here: http://fatoprofugus.net/testing...
- Benny Bucko (Josh)
Victor said on his thread that he's going to write a correction script to fix the path issue. That's where he'll put the script once it's done, I gather.
- Benny Bucko (Josh)