In light of the recent CrowdStrike outage revealing how weak points in IT infrastructure can have wide-ranging effects, I figured this might be an interesting one.

The entirety of Wikipedia is periodically uploaded here, along with many other useful wikis and how-to websites (e.g. iFixit tutorials and WikiHow): https://download.kiwix.org/zim

You select the archive you want, then the language and archive version (for example, you can get an archive with no pictures to save on space). For the totality of the English Wikipedia, you’d select “wikipedia_en_all_maxi_2024-01.zim”.

The archives are packed as .zim files, which can be read with the Kiwix app completely offline.

I keep several USB drives with some of these archives along with the app installer. In the event of some major catastrophe, I’d at least be able to access some potentially useful information. I have no stake in Kiwix, and I don’t know whether there are alternative apps and schemes; I just thought it was neat.
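
If you’d rather script against the archives than use the app, the python-libzim bindings can read ZIM files directly. A minimal, untested sketch (assuming pip install libzim; the article path is a placeholder that varies by archive):

```python
# Sketch: reading a ZIM archive programmatically with python-libzim.
# The file name matches the example above; the entry path "A/Earth"
# is a placeholder, since real paths vary by archive.
from libzim.reader import Archive

zim = Archive("wikipedia_en_all_maxi_2024-01.zim")
print(f"{zim.entry_count} entries; main page: {zim.main_entry.title}")

if zim.has_entry_by_path("A/Earth"):  # placeholder path
    item = zim.get_entry_by_path("A/Earth").get_item()
    html = bytes(item.content).decode("utf-8")
    print(html[:200])  # first chunk of the article's HTML
```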

  • bionicjoey@lemmy.ca

    The text version of Wikipedia*

    The images and other media are a hell of a lot more.

      • souperk@reddthat.com

        Probably a lot less. Keep in mind that whenever it answers a question, the whole model is traversed multiple times; going through multiple GBs in the few seconds it takes to answer isn’t possible.

        • Max@lemmy.world

          I’d be surprised if it was significantly less. A comparable 70-billion-parameter model from Llama requires about 120GB to store. Supposedly the largest current ChatGPT model runs to about 175 billion parameters, which would take a couple hundred GB to store. There are ways to trade off some accuracy in order to save a bunch of space, but you’re not going to get it under tens of GB.

          These models really are going through that many GB of parameters once for every word in the output. GPUs and tensor processors are crazy fast. For comparison, think about how much data a GPU generates to display 4K60 video: it’s about 1GB per second. And the memory bandwidth recommended to generate those frames is more like 400GB per second. Crazy fast.
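
          Rough numbers, if you want to sanity-check that (just a sketch; the 20 tokens per second generation rate is an assumption, not a measured figure):

          ```python
          # Back-of-envelope: weight storage at different precisions, and the
          # memory bandwidth needed to stream every weight once per token.
          def weights_gb(params, bits_per_param):
              return params * bits_per_param / 8 / 1e9

          for bits, label in [(16, "fp16"), (8, "int8"), (4, "int4")]:
              print(f"70B model at {label}: {weights_gb(70e9, bits):.0f} GB")
          # fp16: 140 GB, int8: 70 GB, int4: 35 GB. (The ~120GB figure above
          # presumably reflects something below full fp16.) Quantization trades
          # accuracy for space, but a 70B model stays in the tens of GB.

          tokens_per_sec = 20  # assumed interactive generation rate
          print(f"Bandwidth at fp16: ~{weights_gb(70e9, 16) * tokens_per_sec:.0f} GB/s")
          # ~2800 GB/s, which is why this only works on HBM-class accelerators.
          ```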

    • mctoasterson@reddthat.com

      I mean, you can self-host your own local LLMs using something like Ollama. Performance will be bound by the disk space you have (which limits the size of model you can store) and by the CPU or GPU you run it on, but it does work just fine, and probably gives results as good as ChatGPT’s for most use cases.
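
      For example, once Ollama is running locally, its HTTP API on port 11434 can be queried with nothing but the standard library (a sketch; llama3 stands in for whatever model you’ve pulled):

      ```python
      # Sketch: one-shot query against a local Ollama server.
      # Assumes Ollama is running and a model (here llama3) has been pulled.
      import json
      import urllib.request

      payload = json.dumps({
          "model": "llama3",   # substitute whatever model you pulled
          "prompt": "In one sentence, what is a ZIM archive?",
          "stream": False,     # ask for a single JSON reply, not a stream
      }).encode("utf-8")

      req = urllib.request.Request(
          "http://localhost:11434/api/generate",
          data=payload,
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.load(resp)["response"])
      ```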

      • Nooodel@lemmy.world

        We do this at work (lots of sensitive data that we don’t want OpenAI to capitalize on) and it works pretty well. It’s hosted locally and was set up by an admin who takes data security and privacy seriously, with the settings specifically configured not to save any queries, even on the server. A bit slower than ChatGPT, but not by much.

  • Em Adespoton@lemmy.ca

    Aside from the text clarification, this is also only the English-language edition of Wikipedia.

    What worries me, though, is that most videos linked on Wikipedia are hosted on YouTube. That’s a pretty dangerous choke point.

  • Muffi@programming.dev

    This saved my ass at my engineering chemistry exam (still a requirement, even for software engineers) where only offline tools were allowed. Love Kiwix!

  • Aatube@kbin.melroy.org

    DYK that Kiwix actually came out of Wikipedia? Back in the late 2000s there was a gigantic effort to select and improve a ton of articles for an offline “Wikipedia 1.0” release. The only remains of that effort are Kiwix, the periodic backups, and an incredibly useful article-rating system.

      • Aatube@kbin.melroy.org

        1. There is a set of criteria for rating an article B, C, Start, or Stub. These are called classes. Similarly, articles can be rated at one of four importance levels for a particular WikiProject.
        2. There’s a banner on every article’s talk page. Any editor can boldly change an article’s rating among the above classes; if the change is reverted, they discuss it according to the criteria.
        3. Some WikiProjects have their own criteria for rating articles. Some of them even have a process for promoting an article to A-class.
        4. Before this system, Wikipedia already had processes for making an article a Good Article (GA) or a Featured Article (FA).
        • With GAs, a nominator puts a candidate onto a backlog. Later, a reviewer scrutinizes the article against the criteria. Often, the reviewer asks the nominator to fix quite a few issues. If those issues are fixed promptly, or the reviewer thinks only nitpicks remain, the article passes. If they aren’t fixed within a week, or the reviewer thinks there are major problems, the article fails.
          • As with other processes, the nominator and reviewer can be anyone, though reviewers are usually experienced.
        • With FAs, a nominator brings the candidate to a noticeboard. Editors there then come to a consensus about whether the article should pass.
        • Both processes display a badge directly on passed articles.
        • Both processes have an associated re-review process, where editors come to a consensus on whether the article would fail if it were nominated today.
        • There’s also an informal process called “peer review”, where someone simply posts an article to a noticeboard and anyone can comment on its quality.
        5. Articles are automatically sorted into categories by their rating and importance. Nowadays, editors usually look at these to decide which articles to focus on.
    • NewAgeOldPerson@lemmy.world

      I couldn’t afford to donate for a long time, but I used it near daily. So now I make a monthly, probably larger-than-average, contribution to make up for sibs from other cribs who can’t afford it. Pay it forward is indeed a golden rule.

  • Fenrisulfir@lemmy.ca

    Is there a git repo for it or do I have to redownload the whole thing to do an update?

  • ohwhatfollyisman@lemmy.world

    I remember a time when it was only 2GB for all of Wikipedia. Usain Bolt had just burst onto the world stage at the time.

    • rickyrigatoni

      And by now he’s exited the solar system at incomprehensible speeds.

  • clearedtoland@lemmy.world

    I know there are a few companies working on DNA storage. From the comment below about the entirety of Wikipedia and Wiki Commons, I’d say that’d be a pretty practical thing to store.

    Here’s the wiki article about it.

    • retrospectology@lemmy.worldOP

      Download the Kiwix app for whatever OS you’re using, then open Kiwix, click the folder icon in the app, and navigate to where the .zim file you downloaded is located. If you click it, it should automatically pop up and be viewable.

      If you did that and it’s still failing, is it giving you a specific error or anything?

  • Don_Dickle@piefed.social

    I am currently reading about terrorists while in the States, but something tells me my IP will get banned. I have read a shitton, though, and I highly doubt it’s just 100GB. Otherwise you would see it more on piracy sites.