• @[email protected] · 10 months ago

I think people who use local and open-source models probably already know not to feed passwords to ChatGPT.

    • @[email protected] · 8 months ago (edited)

      I absolutely agree. Use something like Ollama. Do keep in mind that it takes a lot of computing resources to run these models: about 5 GB of RAM, and about a 3 GB file size for the smaller-sized ollama-uncensored.
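As a rough sanity check on those numbers, here's a back-of-the-envelope sketch. It assumes a ~7B-parameter model quantized to 4 bits per weight and a fixed runtime overhead figure; both are illustrative assumptions, not something Ollama guarantees for any particular model tag.

```python
# Sketch: estimate disk and RAM footprint of a quantized local model.
# The 7B parameter count, 4-bit quantization, and 1.5 GB overhead are
# assumptions for illustration only.

def model_footprint_gb(params_billion: float, bits_per_weight: float,
                       overhead_gb: float = 1.5) -> tuple[float, float]:
    """Return (file_size_gb, ram_needed_gb) for a quantized model."""
    file_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    # Runtime needs the weights plus KV cache, activations, and buffers.
    return file_gb, file_gb + overhead_gb

file_gb, ram_gb = model_footprint_gb(7, 4)
print(f"~{file_gb:.1f} GB on disk, ~{ram_gb:.1f} GB RAM")  # ~3.5 GB on disk, ~5.0 GB RAM
```

That lands right around the "3 GB file, 5 GB RAM" figures quoted above.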

      • @[email protected] · 10 months ago

        It’s not great, but an old GTX GPU can be had cheaply if you look around refurb listings; as long as there’s a warranty, you’re gold. Stick it into a 10-year-old Xeon workstation off eBay and you can easily have a machine with 8 cores, 32 GB of RAM, and a solid GPU for under $200.

        • @[email protected] · 10 months ago (edited)

          It’s the RAM requirement that stings rn. I believe I’ve got the specs, but I was told, or misremember, a 64 GB RAM requirement for a model.

          • @[email protected] · 10 months ago

            IDK what you’ve read, but I have 24 GB and can use DreamBooth and fine-tune Mistral no problem. RAM is only needed to load the model briefly before it’s passed to VRAM, IIRC. That’s the main deal: you need 8 GB of VRAM as an absolute minimum, and even my 24 GB of VRAM is often not enough for some high-end stuff.

            Plus, RAM is actually really cheap compared to a GPU. Remember it doesn’t have to be super fancy RAM either; DDR3 is fine if you’re not gaming on, like, a Ryzen or something modern.
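The VRAM point above comes down to bits per weight. A quick sketch, assuming a ~7B-parameter model and counting only the weights (real usage adds KV cache on top, so treat these as lower bounds):

```python
# Sketch: why quantization decides whether a model's weights fit in VRAM.
# 7B parameters is an illustrative assumption; KV cache overhead is ignored.

def weights_gb(params_billion: float, bits: int) -> float:
    """Size of the weights alone, in GB, at the given precision."""
    return params_billion * 1e9 * bits / 8 / 1e9

for bits in (16, 8, 4):
    needed = weights_gb(7, bits)
    verdict = "fits" if needed <= 8 else "does not fit"
    print(f"7B at {bits}-bit: ~{needed:.1f} GB of weights, {verdict} in 8 GB VRAM")
```

At fp16 a 7B model already blows past an 8 GB card (~14 GB of weights alone), while a 4-bit quantization (~3.5 GB) leaves headroom, which is why 8 GB VRAM works as a floor for quantized models but not for full-precision fine-tuning.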