
I was being cut off, so I managed it with chunking techniques. Unfortunately they took down the file, so now I have no source to pull from.

I was, and that is why it was taking so long for me to download: I use my custom downloader, which uses various techniques to chunk the download. Unfortunately it seems they've now removed the file completely, so my downloader has no source to pull from and is stopped at 36 GB.

Some bad news: it looks like the dataset 9 zip file link doesn't work anymore. They appear to have removed the file, so my download stopped at 36 GB. I'm not familiar with their site, so is it normal for them to remove files and maybe put them back at the same link once they've reorganized them? Or will we have to scrape each PDF individually like another user has been doing?

Yeah, still chugging away slowly. It may actually take me a few days; it's quite slow, but so far it appears to be getting it.

I have various chunking techniques that I use. I adaptively modify the request size of the chunks, as I've noticed the CDN will sometimes serve large amounts and then micro amounts. I haven't figured out the exact backoff rate, but I have retry mechanisms in place. The CDN is very annoying, but so far my methods are working, just slowly.
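
Roughly, the idea looks like this (a minimal sketch, assuming the CDN honors HTTP Range requests; the URL, chunk sizes, and rate thresholds here are placeholders, not my actual values):

```python
import time
import requests

URL = "https://example.com/VOL00009.zip"  # placeholder, not the real link
MIN_CHUNK = 256 * 1024                    # 256 KB floor
MAX_CHUNK = 16 * 1024 * 1024              # 16 MB ceiling

def download(url, out_path):
    chunk = 4 * 1024 * 1024               # start with 4 MB requests
    retries = 0
    with open(out_path, "ab") as f:
        offset = f.tell()                 # resume from the partial file
        while True:
            headers = {"Range": f"bytes={offset}-{offset + chunk - 1}"}
            try:
                start = time.monotonic()
                r = requests.get(url, headers=headers, timeout=30)
                if r.status_code == 416:  # range past end of file: done
                    break
                r.raise_for_status()
                f.write(r.content)
                offset += len(r.content)
                retries = 0
                # Adapt: if the CDN trickled micro amounts, shrink the
                # next request; if throughput was healthy, grow it.
                rate = len(r.content) / max(time.monotonic() - start, 1e-6)
                if rate < 100 * 1024:     # under ~100 KB/s
                    chunk = max(MIN_CHUNK, chunk // 2)
                else:
                    chunk = min(MAX_CHUNK, chunk * 2)
            except requests.RequestException:
                retries += 1
                time.sleep(min(2 ** retries, 60))  # capped exponential backoff

download(URL, "VOL00009.zip.part")
```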

OK, great. As for comparing files, I would likely do a hash check; that should make it straightforward to identify truly unique files. It'll take a few days for a decent computer to generate all the hashes, but it should be pretty automated. I'll reach out once I have it completed.
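
Something along these lines (a minimal sketch; the directory layout is just an assumption):

```python
import hashlib
from pathlib import Path
from collections import defaultdict

def sha256_of(path, bufsize=1 << 20):
    """Hash one file, streaming in 1 MB blocks so large PDFs don't eat RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(bufsize):
            h.update(block)
    return h.hexdigest()

def group_by_hash(root):
    """Group file paths by digest; groups of size one are truly unique files."""
    groups = defaultdict(list)
    for p in Path(root).rglob("*.pdf"):
        groups[sha256_of(p)].append(p)
    return groups

if __name__ == "__main__":
    groups = group_by_hash("VOL00009/IMAGES")  # hypothetical layout
    unique = [paths[0] for paths in groups.values() if len(paths) == 1]
    total = sum(len(paths) for paths in groups.values())
    print(f"{len(unique)} unique files out of {total}")
```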

I am downloading dataset 9 and should have the full 180 GB zip done in a day. To confirm: has the DOJ link to the dataset 9 zip now been updated to be clean of CSAM, or not? As much as I wish to help the cause, I do not want any of that type of material on my server unless permission has been given to host it for credible researchers only who need access to all files for their investigation. I have no way of knowing what is within my legal rights when it comes to redistributing the files to legitimate investigators, so my plans to help create a torrent may be squashed. Please let me know.
I analyzed with AI the ~36 GB that I was able to download before they erased the zip file from the server.
Complete Volume Analysis

Based on the OPT metadata file, here's what VOL00009 was supposed to contain:

Full volume specifications:
- Total Bates-numbered pages: 1,223,757
- Total unique PDF files: 531,307 individual PDFs
- Bates number range: EFTA00039025 to EFTA01262781
- Subdirectory structure: IMAGES\0001\ through IMAGES\0532\ (532 folders)
- Expected size: ~180 GB (based on the download info)

What I actually got:
- PDF files received: 90,982
- Subdirectories: 91 folders (0001 through ~0091)
- Current size: 37 GB
- Percentage received: ~17% of the files (91 of 532 folders)

The math:
- Expected: 531,307 PDF files / 180 GB / 532 folders
- Received: 90,982 PDF files / 37 GB / 91 folders
- Missing: 440,325 PDF files / 143 GB / 441 folders

In short, I got approximately the first 17% of the volume before the server deleted it. The good news is that the DAT/OPT index files are complete, so I have a full manifest of what should be there, which means I know exactly which documents are missing (folders 0092-0532).

I haven't yet looked into the partials on archive.org to see whether I have any useful dataset 9 files that archive.org doesn't have.
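
For anyone who wants to run the same comparison, here's a rough sketch of turning the OPT index into a missing-file list. It assumes the common Opticon column layout (BatesNumber,Volume,ImagePath,DocBreak,Folder,Box,PageCount per line); the actual columns may differ, and the file names below are placeholders:

```python
import csv
from pathlib import Path, PureWindowsPath

def expected_paths(opt_file):
    """Read the OPT manifest and collect every image path it lists."""
    paths = set()
    with open(opt_file, newline="") as f:
        for row in csv.reader(f):
            if len(row) > 2 and row[2]:
                # Paths in the index are Windows-style (IMAGES\0001\...),
                # so normalize them to forward slashes for comparison.
                paths.add(PureWindowsPath(row[2]).as_posix().lstrip("/"))
    return paths

def missing_files(opt_file, root):
    """Diff the manifest against what's actually on disk."""
    expected = expected_paths(opt_file)
    have = {p.relative_to(root).as_posix() for p in Path(root).rglob("*.pdf")}
    return sorted(expected - have)

if __name__ == "__main__":
    missing = missing_files("VOL00009.opt", "VOL00009")
    print(f"{len(missing)} files missing")  # should be ~440,325 here
    folders = {m.split("/")[1] for m in missing if m.count("/") >= 2}
    print(f"across {len(folders)} folders")  # e.g. 0092 through 0532
```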