I saw this post and I was curious what was out there.

https://neuromatch.social/@jonny/113444325077647843

I'd like to put my lab servers to work archiving US federal data that's likely to get pulled; climate and biomed data seem the most likely targets. The most obvious strategy to me is setting up mirror torrents on academictorrents. Is anyone compiling a list of at-risk data yet?
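One piece of the mirroring strategy above is letting independent mirrors verify that their copies match the original. As a minimal sketch (the function names and layout here are illustrative, not from any existing tool), this builds a SHA-256 manifest over a directory tree so any mirror can check its files against the source's digests:

```python
import hashlib
from pathlib import Path


def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large datasets never load fully into RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its digest, for mirror verification."""
    return {
        str(p.relative_to(root)): sha256_file(p)
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }
```

A torrent's piece hashes serve the same purpose automatically, but a standalone manifest like this also works for mirrors kept via rsync or plain HTTP.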

      • @[email protected]
        22 hours ago

        In that they’re a single organization, yes, but I’m a single person with far fewer resources. Non-availability is a much higher risk for things I host personally.

    • OtterOP
      25 points • 11 hours ago

      There was the attack on the Internet Archive recently. Are there any good options out there to help mirror some of the data or otherwise provide redundancy?

    • @[email protected]
      0 points • 12 hours ago

      Yes. This isn’t something you want your own machines to be doing if something else is already doing it.

      • @jcgA
        14 points • 11 hours ago

        But then who backs up the backups?

        • abff08f4813c
          2 points • 11 hours ago

          I guess they back each other up. For example, archive.is can take archives from archive.org: the saved page reflects the original URL and the original archiving time from the Wayback Machine (though it also notes the Wayback URL it pulled from, plus the time it retrieved the copy from Wayback).

      • Deebster
        7 points • 11 hours ago

        Your argument is that a single backup is sufficient? I disagree, and I think most people in the selfhosted and datahoarder communities would too.