Epstein Files Jan 30, 2026

Data hoarders on Reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work, with download links.

Please seed all torrent files to distribute and preserve this data.

Ref: https://old.reddit.com/r/DataHoarder/comments/1qrk3qk/epstein_files_datasets_9_10_11_300_gb_lets_keep/

Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK

Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK


Epstein Files Data Set 9 (incomplete: contains only 49 GB of 180 GB; multiple reports of the DOJ server cutting off downloads at offset 48995762176).

ORIGINAL JUSTICE DEPARTMENT LINK

  • TORRENT MAGNET LINK (removed due to reports of CSAM)

/u/susadmin’s More Complete Data Set 9 (96.25 GB)
De-duplicated merger of (45.63 GB + 86.74 GB) versions

  • TORRENT MAGNET LINK (removed due to reports of CSAM)

Epstein Files Data Set 10 (78.64GB)

ORIGINAL JUSTICE DEPARTMENT LINK

  • TORRENT MAGNET LINK (removed due to reports of CSAM)
  • INTERNET ARCHIVE FOLDER (removed due to reports of CSAM)
  • INTERNET ARCHIVE DIRECT LINK (removed due to reports of CSAM)

Epstein Files Data Set 11 (25.55GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 574950c0f86765e897268834ac6ef38b370cad2a


Epstein Files Data Set 12 (114.1 MB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2


This list will be edited as more data becomes available, particularly with regard to Data Set 9 (EDIT: NOT ANYMORE)


EDIT [2026-02-02]: After being made aware of potential CSAM in the original Data Set 9 releases and seeing confirmation in the New York Times, I will no longer support any effort to maintain links to archives of it. There is suspicion of CSAM in Data Set 10 as well. I am removing links to both archives.

Some in this thread may be upset by this action. It is right to be distrustful of a government that has not shown signs of integrity. However, I do trust journalists who hold the government accountable.

I am abandoning this project and removing any links to content that commenters here and on reddit have suggested may contain CSAM.

Ref 1: https://www.nytimes.com/2026/02/01/us/nude-photos-epstein-files.html
Ref 2: https://www.404media.co/doj-released-unredacted-nude-images-in-epstein-files

  • acelee1012@lemmy.world · 4 hours ago

    Has anyone made a Dataset 9 and 10 torrent file without the files in it that the NYT reported as potentially CSAM?

  • activeinvestigator@lemmy.world · 4 hours ago

    Do people here have the partial dataset 9? Or are you all missing the entire set? There is a magnet link floating around for ~100 GB of it, the one removed in the OP.

    I am trying to figure out exactly how many files dataset 9 is supposed to have in it. Before the zip file went dark, I was able to download about 2 GB of it (this was today, so maybe not the original zip file from Jan 30th). At the head of the zip file is an index file, VOL00009.OPT; you don’t need the full download in order to read this index. The index says there are 531,307 PDFs; the 100 GB torrent has 531,256, so it’s missing 51 PDFs. I checked the 51 file names and they no longer exist as individual files on the DOJ website either. I’m assuming these are the CSAM.

    Note that the 3M number of released documents != 3M PDFs; each PDF page is counted as a “document”. Dataset 9 contains 1,223,757 documents, and according to the index we are missing only 51 documents (they are not multipage). In total, I have 2,731,789 documents from datasets 1-12, short of the 3M number. The index I got also was not missing any document ranges.

    It’s curious that the zip file had an extra 80 GB when only 51 documents are missing. I’m currently scraping links from the DOJ webpage to double-check the filenames.
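    For anyone repeating this check: .OPT files are standard Opticon load files, i.e. comma-separated rows of Bates number, volume, image path, a "Y" flag on the first page of each document, then blanks and a page count. A minimal sketch (assuming the standard Opticon column order, which I have not confirmed against this specific release) that reproduces the page/unique-PDF/document tallies described above:

```python
import csv

def summarize_opt(path):
    """Tally an Opticon (.OPT) load file: total pages, unique image/PDF files,
    and document count (rows flagged 'Y' mark the first page of a document)."""
    pages, docs = 0, 0
    files = set()
    with open(path, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.reader(f):
            if not row:
                continue
            pages += 1
            files.add(row[2])                       # column 3: image/PDF path
            if len(row) > 3 and row[3].strip().upper() == "Y":
                docs += 1                           # first page of a new document
    return {"pages": pages, "unique_files": len(files), "documents": docs}
```

    Comparing `unique_files` against the number of PDFs actually present in the torrent would give the "missing 51" figure directly.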

    • Arthas@lemmy.world · 45 minutes ago (edited)

      I analyzed with AI the ~36 GB that I was able to download before they erased the zip file from the server.

      Complete Volume Analysis
      
        Based on the OPT metadata file, here's what VOL00009 was supposed to contain:
      
        Full Volume Specifications
      
        - Total Bates-numbered pages: 1,223,757 pages
        - Total unique PDF files: 531,307 individual PDFs
        - Bates number range: EFTA00039025 to EFTA01262781
        - Subdirectory structure: IMAGES\0001\ through IMAGES\0532\ (532 folders)
        - Expected size: ~180 GB (based on your download info)
      
        What You Actually Got
      
        - PDF files received: 90,982 files
        - Subdirectories: 91 folders (0001 through ~0091)
        - Current size: 37 GB
        - Percentage received: ~17% of the files (91 out of 532 folders)
      
        The Math
      
        Expected:  531,307 PDF files / 180 GB / 532 folders
        Received:   90,982 PDF files /  37 GB /  91 folders
        Missing:   440,325 PDF files / 143 GB / 441 folders
      
         Insight ─────────────────────────────────────
        You got approximately the first 17% of the volume before the server deleted it. The good news is that the DAT/OPT index files are complete, so you have a full manifest of what should be there. This means:
        - You know exactly which documents are missing (folders 0092-0532)
      

      I haven’t looked into downloading the partials from archive.org yet to see if I have any useful files that archive.org doesn’t have yet from dataset 9.
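      The folder/percentage math in that AI output can be reproduced without AI. A small sketch (assuming the IMAGES\00NN subfolder layout described above, with the manifest totals as defaults; `coverage_report` is my own helper name):

```python
from pathlib import Path

def coverage_report(root, expected_files=531_307, expected_folders=532):
    """Compare a partial VOL00009 download against the manifest's expected totals."""
    root = Path(root)
    folders = [p for p in (root / "IMAGES").glob("*") if p.is_dir()]
    pdfs = sum(1 for _ in root.rglob("*.pdf"))
    return {
        "folders_present": len(folders),
        "folders_missing": expected_folders - len(folders),
        "pdfs_present": pdfs,
        "pdfs_missing": expected_files - pdfs,
        "pct_files": round(100.0 * pdfs / expected_files, 1),
    }
```

      Running it over a partial download gives the same "received / missing" breakdown as the table above, and makes it easy to re-check after merging in files from archive.org.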

    • Wild_Cow_5769@lemmy.world · 4 hours ago

      That’s pretty cool…

      Can you send me a DM of the 51? If I come across one and it isn’t some sketchy porn I’ll let you know.

  • TavernerAqua@lemmy.world · 6 hours ago (edited)

    In regard to Dataset 9, it’s currently being shared on Dread (forum).

    I have no idea if it’s legit or not, and I don’t care to find out after reading about what’s in it from the NYT.

  • Wild_Cow_5769@lemmy.world · 8 hours ago

    If someone has a group working on finding the dataset, message me at @wild_cow_5769:matrix.org.

    There are billions of people on earth. Someone downloaded dataset 9 before the link was taken down. We just have to find them :)

  • DigitalForensick@lemmy.world · 13 hours ago

    While I feel hopeful that we will be able to reconstruct the archive and create some sort of baseline that can be put back out there, I also can’t stop thinking about the “and then what” aspect here. We’ve seen our elected officials do nothing with this info over and over again, and I’m worried this is going to repeat itself.

    I’m fully open to input on this, but I think having a group path forward is useful here. These are the things I believe we can do to move the needle.

    Right Now:

    1. Create a clean Data Archive for each of the known datasets (01-12). Something that is actually organized and accessible.
    2. Create a working Archive Directory containing an “itemized” reference list (SQL DB?) of the full Data Archive, with each document listed as a row with certain metadata. Imagining a GitHub repo that we can all contribute to as we work. Columns: file number; directory location; file type (image, legal record, flight log, email, video, etc.); file status (Redacted bool, Missing bool, Flagged bool).
    3. Infill any MISSING records where possible.
    4. Extract images away from .pdf format, Breakout the “Multi-File” pdfs, renaming images/docs by file number. (I made a quick script that does this reliably well.)
    5. Determine which files were left as CSAM and “redact” them ourselves, removing any liability on our part.

    What’s Next: Once we have the Archive and Archive Directory, we can begin safely and confidently walking through the Directory as a group effort and fill in as many files/blanks as possible.

    1. Identify and de-redact all documents with garbage redactions (remember the copy/paste DOJ blunders from December) & identify poorly positioned redaction bars to uncover obfuscated names
    2. LABELING! If we could start adding labels to each document in the form of tags that contain individuals, emails, locations, businesses - This would make it MUCH easier for people to “connect the dots”
    3. Event Timeline… This will be hard, but if we can apply a timeline ID to each document, we can put the archive in order of events
    4. Create some method for visualizing the timeline, searching, or making connection with labels.
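    To make the row schema from item 2 above and the labeling idea concrete, here is a hypothetical sketch (field and function names are my own, not an agreed format) of one document record plus the tag co-occurrence counting that would let people “connect the dots”:

```python
from collections import Counter
from dataclasses import dataclass, field
from itertools import combinations

@dataclass
class DocRecord:
    """One row of the proposed Archive Directory (hypothetical schema)."""
    file_number: str
    location: str                                # directory location
    file_type: str                               # image, legal record, flight log, email, video, ...
    redacted: bool = False
    missing: bool = False
    flagged: bool = False
    tags: list = field(default_factory=list)     # individuals, emails, locations, businesses

def cooccurrence(records):
    """Count how often each pair of tags appears together in the same document."""
    pairs = Counter()
    for r in records:
        for a, b in combinations(sorted(set(r.tags)), 2):
            pairs[(a, b)] += 1
    return pairs
```

    A co-occurrence table like this is exactly the input an affiliation-graph tool (nodes = tags, edge weights = pair counts) would need.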

    We may not be detectives, legislators, or lawmen, but we are sleuth nerds, and the best thing we can do is get this data into a place that allows others to push for justice and put an end to this crap once and for all. It’s lofty, I know, but enough is enough. …Thoughts?

    • PeoplesElbow@lemmy.world · 5 hours ago

      We definitely need a crowdsourced method for going through all the files. I am currently building a solo Cytoscape tool to try out making an affiliation graph. Expanding this into a tool for a community, with authorization so only whitelisted individuals can work on it, is beyond my scope, and I can’t volunteer to make such an important tool alone, but I am happy to offer my help building it. I can convert my existing tool into a prototype if anyone wants to collaborate with me on it. I am an amateur, but I will spend all the Cursor credits on this.

    • Wild_Cow_5769@lemmy.world · 12 hours ago (edited)

      GFD….

      My 2 cents. As a father of only daughters…

      If we don’t weed out this sick behavior as a society we never will.

      My thoughts are enough is enough.

      Once the files are gone there is little to 0 chance they are ever public again….

      You expect me to believe that an “oh shit, we messed up” was an accident?

      It’s the perfect excuse… so no one looks at the files.

      That’s my 2 cents.

  • Arthas@lemmy.world · 11 hours ago

    Some bad news: it looks like the Data Set 9 zip file link doesn’t work anymore. They appear to have removed the file, so my download stopped at 36 GB. I’m not familiar with their site, so is it normal for them to remove files and maybe put them back at the same link location once they’ve reorganized them? Or do we have to scrape each PDF like another user has been doing?

  • Wild_Cow_5769@lemmy.world · 12 hours ago

    All the zip download links are gone on the DOJ website.

    It’s only a matter of time before all the files just go poof.

  • kongstrong@lemmy.world · 15 hours ago (edited)

    PSA: the paging bug has been fixed on the DOJ’s website. The site caps out at around page 9600, for ~197k files, way less than the 520k in the less-complete dataset 9 torrent. Scraping the website now to find out which files they took offline.

    Correction: 9600 pages × 50 files per page is in the 470k ballpark. Much more than 197k, but still a lot less than the torrent’s 530k, let alone the expected 600k+ files that were supposed to be in there.
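    Once the scrape finishes, finding “which files they took offline” is just a set difference between the scraped DOJ file list and the torrent’s file list. A sketch (`diff_file_lists` is a hypothetical helper, not an existing tool):

```python
def diff_file_lists(doj_names, torrent_names):
    """Files present in the torrent but gone from the DOJ site, and vice versa."""
    doj, torrent = set(doj_names), set(torrent_names)
    return {
        "pulled_from_doj": sorted(torrent - doj),   # candidates for 'what they removed'
        "doj_only": sorted(doj - torrent),          # files the torrent never captured
    }
```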

      • Wild_Cow_5769@lemmy.world · 14 hours ago (edited)

        This entire thing smells funny. Even OP went ghost at the threat of suspect images that no one has seen…

        Ask yourself: how did the Times, or whoever came up with this narrative, even find these “suspect” images in a few hours when it seems no one in the world can even download the zip…

        • kutt@lemmy.world · 6 hours ago (edited)

          A person made a website just to host links and thumbnails for a better interface to the videos on the DoJ website.

          They deleted everything including their account the same day.

          Their farewell message: “Everyone. I know the website is showing all blank. This is unfortunately the end of my little project. Due to certain circumstances, I had to take it down. Thank you everyone for supporting me and my effort.”

          Edit: Link

  • acelee1012@lemmy.world · 12 hours ago (edited)

    Is anyone else having issues getting dataset 11 to start downloading? It has been sitting at 0 percent for a day while everything else is done and seeding. It shows connections to peers, but rechecking does nothing, deleting and re-adding does nothing, and asking the tracker for more peers does nothing.

      • acelee1012@lemmy.world · 11 hours ago

        I am not seeing any errors; it has just been stuck in downloading status with nothing going through. I originally added everything around the same time and all the other ones went through fine. I figured it was bugged or something, so I removed and re-added it several times to no avail. I am not sure what else to try.

    • Nomad64@lemmy.world · 10 hours ago (edited)

      I have been seeding all of the datasets since Sunday. The copy of set 9 has been the busiest, with set 10 a distant second. I plan on seeding them for quite a while yet, and also picking up a consolidated torrent when that becomes available. Hopefully you are able to get connected via the Swarm.

      • acelee1012@lemmy.world · 5 hours ago

        Is there something I am missing about why it isn’t connecting, given how much time has passed and how many times I’ve redone it? Or is it just an “eventually” thing?

  • Wild_Cow_5769@lemmy.world · 19 hours ago

    Let me ask a question.

    For all the folks saying there are news reports of CSAM… does that mean the news outlets got the full zip? How did they get it? No one else seems to be able to get it. Were they given it first?

    If they don’t have the zip how did they even find it within hours of the files being released?

    Did they provide proof where they redacted the “danger” and said look… here is the proof?

    Seems rather suspect…

    Considering the massive effort of regular people to comb through the files, I would think the outcry would be gigantic….

    • DigitalForensick@lemmy.world · 14 hours ago

      I’m not sure of the exact files that were reported by the NYT, but there certainly were some concerning images in the initial Jan 30 release, and it was certainly more than the reported 40. I saw others as well, but I don’t remember what the file numbers were.

      spoiler

      [246249_247010]

      From my own observation timeline on the images in question:

      Jan 30: Images were accessible through the DOJ directly. The file numbers were skipped in the list, but the files were manually reachable through the URL. All these photos were fully unredacted (uncensored).

      Feb 1: Images were NOT accessible through the DOJ anymore; the pages return “Page not found”. However, the images were (and still are) snapshotted via web.archive.org.

      Feb 2: The 87 GB Set 9 download appeared to contain these images as well, meaning we likely all have them on our computers. Yikes.

      These files were scrubbed from the DOJ website, along with many others.

      I found many of the scrubbed files by parsing through the lists and finding large gaps in file numbers where the preceding file did not contain multiple images/documents in one PDF. There are also tons of internal memos in the dataset that precede file groups and describe the content ahead. These memos surrounded files that seemed like they were meant to be redacted, so it’s worth poking around. I didn’t go nuts, but things I found around these that were interesting and were also removed:

      • [EFTA00276493]: internal memo referring to Clinton photographed with “nude Gretchen”.
      • [EFTA00273790-EFTA00276487]: (removed) looks like aerial LiDAR scans of the full estate?
      • [EFTA00276220]: (removed) panoramic infrared/X-ray scan of a room
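      The “large gaps in file numbers” approach above is easy to automate. A sketch, assuming names that end in a numeric Bates suffix like EFTA00326497 (`find_gaps` is my own helper name):

```python
import re

def find_gaps(names, min_gap=2):
    """Return inclusive (start, end) ranges of missing numbers in a Bates sequence."""
    nums = sorted(int(re.search(r"(\d+)$", n).group(1)) for n in names)
    return [(a + 1, b - 1) for a, b in zip(nums, nums[1:]) if b - a >= min_gap]
```

      Feeding it the scraped file list flags every skipped range worth checking manually via URL or web.archive.org.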
      • BWint@lemmy.world · 13 hours ago

        One Redditor said that they reported more than 500 nude images to the DOJ, all from Dataset 9.

    • BWint@lemmy.world · 14 hours ago

      We didn’t have trouble getting Datasets 10, 11, or 12. I think Dataset 9 was probably delivered fine on Friday, so the NYTimes was able to grab a complete copy. Then, NYTimes started reporting the abusive material, which prompted the DOJ to yoink the ZIP, and it’s been screwy ever since.

      I saw a post from a random Redditor confirming that they found abusive material, if that’s the concern. I doubt that the reports are fabricated, but I also agree that the reports are a great excuse for the DOJ to remove legitimate files.

    • captainmycaptain@lemmy.world · 13 hours ago

      I’m still waiting for just the first zip file to uncompress and it’s been HOURS. The ONLY reasonable explanation to bolster the NYT claim is that they put “AI” on the datasets running on a supercomputer, and “caught” the DOJ distributing CP! Show us the proof NYT! (redact faces and genitalia and show the images!) Then: CONVICT THEM ALL! LIFE IN PRISON FOR THE ENTIRE DOJ!!! ;-P

    • kongstrong@lemmy.world · 17 hours ago

      if what you’re saying is that CSAM seems like a very good excuse to redact a lot more of those files than they previously intended, I agree yes.

  • PeoplesElbow@lemmy.world · 1 day ago

    Ok everyone, I have done a complete indexing of the first 13,000 pages of the DOJ Data Set 9.

    KEY FINDING: 3 files are listed but INACCESSIBLE

    These appear in DOJ pagination but return error pages - potential evidence of removal:

    EFTA00326497

    EFTA00326501

    EFTA00534391

    You can try them yourself (they all fail):

    https://www.justice.gov/epstein/files/DataSet 9/EFTA00326497.pdf

    The 86 GB torrent is 7x more complete than the DOJ website:

    DOJ website exposes: 77,766 files

    Torrent contains: 531,256 files

    Page Range    Min EFTA        Max EFTA        New Files
    0-499         EFTA00039025    EFTA00267311    21,842
    500-999       EFTA00267314    EFTA00337032    18,983
    1000-1499     EFTA00067524    EFTA00380774    14,396
    1500-1999     EFTA00092963    EFTA00413050    2,709
    2000-2499     EFTA00083599    EFTA00426736    4,432
    2500-2999     EFTA00218527    EFTA00423620    4,515
    3000-3499     EFTA00203975    EFTA00539216    2,692
    3500-3999     EFTA00137295    EFTA00313715    329
    4000-4499     EFTA00078217    EFTA00338754    706
    4500-4999     EFTA00338134    EFTA00384534    2,825
    5000-5499     EFTA00377742    EFTA00415182    1,353
    5500-5999     EFTA00416356    EFTA00432673    1,214
    6000-6499     EFTA00213187    EFTA00270156    501
    6500-6999     EFTA00068280    EFTA00281003    554
    7000-7499     EFTA00154989    EFTA00425720    106
    7500-7999     (no new files - all wraps/redundant)
    8000-8499     (no new files - all wraps/redundant)
    8500-8999     EFTA00168409    EFTA00169291    10
    9000-9499     EFTA00154873    EFTA00154974    35
    9500-9999     EFTA00139661    EFTA00377759    324
    10000-10499   EFTA00140897    EFTA01262781    240
    10500-12999   (no new files - all wraps/redundant)

    TOTAL UNIQUE FILES: 77,766

    Pagination limit discovered: page 184,467,440,737,095,516 (2^64/100)

    I searched random pages between 13k and this limit - NO new documents found. The pagination is an infinite loop. All work at: https://github.com/degenai/Dataset9
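    For anyone re-running this, the per-range “New Files” column is just deduplicated accumulation across page batches, and the pagination ceiling is simple arithmetic. A sketch (`new_files_per_batch` is a hypothetical helper, not from the linked repo):

```python
def new_files_per_batch(batches):
    """For each batch of scraped file names, count how many weren't seen before."""
    seen, counts = set(), []
    for batch in batches:
        fresh = set(batch) - seen
        counts.append(len(fresh))
        seen |= fresh
    return counts, len(seen)

# the reported pagination ceiling: 2**64 // 100
assert 2**64 // 100 == 184_467_440_737_095_516
```

    Summing the counts (or taking the final `seen` size) should reproduce the 77,766 unique-file total.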

    • PeoplesElbow@lemmy.world · 14 hours ago

      DOJ Epstein Files: I found what’s around those 3 missing files (Part 2)

      Follow-up to my Dataset 9 indexing post. I pulled the adjacent files from my local copy of the torrent. What I found is… notable.


      TLDR

      The 3 missing files aren’t random corruption. They all cluster around one event: Epstein’s girlfriend Karyna Shuliak leaving St. Thomas (the island) in April 2016. And one of the gaps sits directly next to an email where Epstein recommends her a novel about a sympathetic pedophile—two days before the book was publicly released.


      The Big Finding: Duplicate Processing Batches

      Two of the missing files (326497 and 534391) are the same document processed twice—once with redactions, once without—208,000 files apart in the index.

      Redacted Batch     Unredacted Batch   Content
      326494-326496      534388-534390      AmEx travel booking, staff emails
      326497 - MISSING   534391 - MISSING   ???
      326498-326500                         Email chain continues
      326501 - MISSING                      ???
      326502-326506                         Reply + Invoice
                         534392             Epstein personal email

      Random file corruption hitting the same logical document in two separate processing runs, 208,000 positions apart? That’s not how corruption works. That’s how removal works.
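      The “same document processed twice” claim is checkable arithmetically: if the two batches are copies of one document stream, the offset between corresponding Bates numbers should be constant, and the two MISSING entries should land at the same position. Using the numbers quoted above:

```python
# Corresponding positions in the redacted (326xxx) and unredacted (534xxx) batches,
# ending with the two MISSING entries (326497 / 534391).
redacted   = [326494, 326495, 326496, 326497]
unredacted = [534388, 534389, 534390, 534391]

offsets = [u - r for r, u in zip(redacted, unredacted)]
# A single constant offset means the batches track the same document stream.
assert len(set(offsets)) == 1
```

      (The constant offset works out to 207,894, i.e. the “~208,000 files apart” figure above.)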


      What’s Actually In These Files

      I pulled everything around the gaps. It’s all one email chain from April 10, 2016:

      The event: Karyna Shuliak (Epstein’s girlfriend) booked on Delta flight from Charlotte Amalie, St. Thomas → JFK on April 13, 2016.

      St. Thomas is where you fly in/out to reach Little St. James. She was leaving the island.

      The chain:

      • 11:31 AM — AmEx Centurion (black card) sends confirmation to lesley.jee@gmail.com
      • 11:33 AM — Lesley Groff (Epstein’s executive assistant) forwards to Shuliak, CC’s staff
      • 11:35 AM — Shuliak replies “Thanks so much”
      • 3:52 PM — Epstein personally emails Shuliak
      • Next day — AmEx sends invoice

      The unredacted batch (534xxx) reveals the email addresses that are blacked out in the redacted batch (326xxx).


      The Epstein Email (EFTA00534392)

      The document immediately after missing file 534391:

      From: "jeffrey E." <jeevacation@gmail.com>
      To: Karyna Shuliak
      Date: Sun, 10 Apr 2016 19:52:13 +0000
      
      order http://softskull.com/dd-product/undone/
      

      He’s telling her to buy a book. The same day she’s being booked to leave his island.


      The Book

      “Undone” by John Colapinto (Soft Skull Press)

      On-sale date: April 12, 2016
      Epstein’s email: April 10, 2016

      He recommended it two days before public release.

      Publisher’s description:

      “Dez is a former lawyer and teacher—an ephebophile with a proclivity for teenage girls, hiding out in a trailer park with his latest conquest, Chloe. Having been in and out of courtrooms (and therapists’ offices) for a number of years, Dez is at odds with a society that persecutes him over his desires.”

      The protagonist is a pedophile who resents society for judging him.

      The author (John Colapinto) is a New Yorker staff writer, former Vanity Fair and Rolling Stone contributor. Exactly the media circles Epstein cultivated.


      What’s Missing

      So now we know the context:

      • EFTA00326497 — Between AmEx confirmation and Groff’s forward. Probably the PDF ticket attachment referenced in the emails.

      • EFTA00326501 — Between the forward chain and Shuliak’s reply. Unknown.

      • EFTA00534391 — Immediately before Epstein’s personal email about the pedo book. Unknown, but its position is notable.


      Open Questions

      1. How did Epstein have this book before release? Advance copy? Knows the author?

      2. What is 534391? It sits between staff logistics emails and Epstein’s direct correspondence. Another Epstein email? An attachment?

      3. Are there other Shuliak travel records with similar gaps? Is April 2016 unique or part of a pattern?

      4. What else is in the corpus from jeevacation@gmail.com?


      Verify It Yourself

      Try the DOJ links (all return errors):

      Check the torrent: Pull the EFTA numbers I listed. Confirm the gaps. Confirm the adjacencies.

      Grep the corpus: Search for “QWURMO” (booking reference), “Shuliak”, “jeevacation”, “Colapinto”
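      A sketch of the “check the torrent” step, assuming the torrent is unpacked locally with one PDF per EFTA number (`verify_gaps` is my own helper name):

```python
from pathlib import Path

REPORTED_MISSING = ["EFTA00326497", "EFTA00326501", "EFTA00534391"]

def verify_gaps(torrent_root):
    """Confirm each reported file is absent while its numeric neighbors are present."""
    present = {p.stem for p in Path(torrent_root).rglob("EFTA*.pdf")}
    out = {}
    for name in REPORTED_MISSING:
        n = int(name[4:])                        # numeric Bates suffix
        out[name] = {
            "missing": name not in present,
            "prev_present": f"EFTA{n - 1:08d}" in present,
            "next_present": f"EFTA{n + 1:08d}" in present,
        }
    return out
```

      If `missing` is true while both neighbors are present, the gap and its adjacency are confirmed independently of my analysis.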


      Summary

      Three files missing from 531,256. All three cluster around one girlfriend’s April 2016 departure from St. Thomas. Same gaps appear in two processing batches 208,000 files apart. One gap sits adjacent to Epstein personally recommending a novel about a sympathetic pedophile, sent before the book was even publicly available.

      This isn’t random corruption.

      Full analysis + all code: https://github.com/degenai/Dataset9


      If anyone has the torrent and wants to grep for Colapinto connections or other Shuliak trips, please do. This is open source for a reason.

    • kongstrong@lemmy.world · 17 hours ago

      YSK the page limit has been fixed; it caps out around 9600 for a total of ~197k file entries, way less than the largest torrent’s 530k. Scraping now to get a list of the files they kept on the DOJ site so we can determine which files they don’t want out there. Would be a good lead to further investigate the torrent.

      • PeoplesElbow@lemmy.world · 14 hours ago

        Oh no… I didn’t know this. On one hand I now need to run another scan, but on the other it could reveal something: the torrent has 500k+ files, so there is still a gap. I will run the scraper again and do a new analysis in the next day or two.