• Chozo · 208 points · 10 months ago

    If you paste plaintext passwords into ChatGPT, the problem is not ChatGPT; the problem is you.

      • @[email protected]
        link
        fedilink
        English
        5910 months ago

        Did you read the article? It didn’t. Someone received someone else’s chat history appended to one of their own chats. No prompting, just appeared overnight.

          • @[email protected]
            link
            fedilink
            English
            910 months ago

            Well, yeah, but the point is, ChatGPT didn’t “remember and then leak” anything, the web service exposed people’s chat history.

            • @[email protected]
              link
              fedilink
              English
              210 months ago

              Well, that depends. Do you mean GPT, the specific chunk of LLM code? Or do you mean GPT, the website and service?

              Because while the nitpicking details matter to the programmers fixing it, how much does that distinction matter to you or me, the laymen using the site?

      • @[email protected]
        link
        fedilink
        English
        710 months ago

        A huge value-add of ChatGPT is that you can have a running, contextual conversation. That requires memory.

        • @[email protected]
          link
          fedilink
          English
          610 months ago

          All of these LLMs should have walls between individual users, though, so that the chat history of one user is never accessible to any other user. Applying some kind of restriction to the LLM training and how chats are used is a conversation we can have, but the article and the example given describe a much, much simpler problem: a user checking his own chat history was able to see other users' chats.
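The "wall between users" here isn't an LLM problem at all; it's ordinary data scoping in the web service. A minimal sketch (hypothetical schema, for illustration only): every read of chat history is filtered by the requesting user's ID, so no code path can return another user's rows.

```python
import sqlite3

# Toy in-memory chat store with per-user scoping.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE chats (user_id TEXT, message TEXT)")
db.execute("INSERT INTO chats VALUES ('alice', 'my secret prompt')")
db.execute("INSERT INTO chats VALUES ('bob', 'hello world')")

def history_for(user_id: str) -> list[str]:
    # The user_id filter is the wall: there is no unscoped read.
    rows = db.execute(
        "SELECT message FROM chats WHERE user_id = ?", (user_id,)
    ).fetchall()
    return [m for (m,) in rows]

print(history_for("bob"))  # ['hello world'] -- alice's chat is unreachable
```

The bug described in the article is what happens when a layer above a query like this returns the wrong user's rows.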

        • Farid · 5 points · edited · 10 months ago

          It doesn’t actually have memory in that sense. It can only remember things that are in the training data and within its limited context window (4–32k tokens, depending on the model). But when you send a message, ChatGPT does a semantic search of everything in the conversation and tries to fit the relevant parts inside the context, if there’s room.
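The retrieval idea described above can be sketched in a few lines. How ChatGPT actually implements this isn't public, so this is a toy: word-overlap stands in for real embedding similarity, and word count stands in for a real tokenizer, but the shape (rank past messages by relevance, greedily pack the best into a fixed budget) is the same.

```python
# Rank history by similarity to the new query, then pack into a budget.
def similarity(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def build_context(history: list[str], query: str, budget: int) -> list[str]:
    ranked = sorted(history, key=lambda m: similarity(m, query), reverse=True)
    picked, used = [], 0
    for msg in ranked:
        cost = len(msg.split())  # crude stand-in for a token count
        if used + cost <= budget:
            picked.append(msg)
            used += cost
    return picked

history = [
    "my dog is named rex",
    "the weather is nice today",
    "rex likes to chase squirrels",
]
print(build_context(history, "what does my dog rex like?", budget=10))
# ['my dog is named rex', 'rex likes to chase squirrels']
```

The weather message scores zero overlap and gets dropped; the two dog messages fill the 10-"token" budget exactly.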

          • @[email protected]
            link
            fedilink
            English
            6
            edit-2
            10 months ago

            I’m familiar; it’s just easiest for the layman to think of the model as having “memory,” since semantic search over history looks a lot like it from arm’s length.

      • konalt · 4 points · 10 months ago

        I’m sorry, but as an AI language model, I cannot tell you about the effectiveness of “*******” as a password.

  • @[email protected]
    link
    fedilink
    English
    13410 months ago

    ChatGPT doesn’t leak passwords. Chat history is leaking, and some of those chats happen to contain plaintext passwords. What’s up with the current trend of saying AI did this and that when the AI really didn’t?

    • DreamButt · 41 points · 10 months ago

      Back in the RuneScape days people would do dumb password scams. My buddy was introducing me to the game. We were sitting in his parents’ garage and he was playing and showing me his high-level guy. Anyway, he walks around the trading area and someone says something like “omg you can’t type your password backwards *****”. In total disbelief he tries it out. Instantly freaks out, logs out to reset his password, and fails due to the password already being changed.

  • @[email protected]
    link
    fedilink
    English
    10810 months ago

    So what actually happened seems to be this:

    • A user was exposed to another user’s conversation.

    That’s a big oof and really shouldn’t happen.

    • The conversations that were exposed contained sensitive user information.

    Irresponsible user error; everyone and their mom should know better by now.

    • @[email protected]
      link
      fedilink
      English
      610 months ago

      Why is it that whenever a corporation loses or otherwise leaks sensitive user data that was their responsibility to keep private, all of Lemmy comes out to comment about how it’s the users who are idiots?

      Except it’s never just about that. Every comment has to make it known that they would never allow that to happen to them because they’re super smart. It’s honestly one of the most self-righteous, tone deaf takes I see on here.

      • @[email protected]
        link
        fedilink
        English
        1010 months ago

        I don’t support calling people idiots, but here’s the thing: we can’t control whether corporations leak our data or not, but we can control whether we share our password with ChatGPT or not.

      • @[email protected]
        link
        fedilink
        English
        710 months ago

        Because that’s what the last several reported “breaches” have been. There have been a lot of accounts that were compromised in unrelated breaches because the users re-used the same passwords across multiple accounts.

        In this case, ChatGPT clearly tells you not to give it any sensitive information, so giving it sensitive information is on the user.
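One cheap way to act on the password-reuse point: check whether a password appears in known breaches without ever sending the password anywhere. The Pwned Passwords k-anonymity API takes only the first 5 hex characters of the password's SHA-1; you match the rest of the hash locally. This sketch computes the two parts (the HTTP call itself is just a GET against the range endpoint):

```python
import hashlib

def hibp_range_parts(password: str) -> tuple[str, str]:
    # SHA-1 the password, then split: 5-char prefix goes to the API,
    # the 35-char suffix is matched locally against the response.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
print(prefix)  # 5BAA6 -- GET https://api.pwnedpasswords.com/range/5BAA6
```

The service never sees the full hash, let alone the password, which is the whole point of the k-anonymity scheme.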

      • @[email protected]
        link
        fedilink
        English
        510 months ago

        Data loss or leaks may not be the end user’s fault, but it is their responsibility. Yes, OpenAI should have had shit in place for this to never have happened. Unfortunately, you, I, and the users whose passwords were leaked have no way of knowing what kinds of safeguards they have in place for our data.

        The only point of access to my information that I can control completely is what I do with it. If someone says “hey, don’t do that with your password” they’re saying it’s a potential safety issue. You’re putting control of your account in the hands of some entity you don’t know. If it’s revealed, well, it’s THEIR fault, but you also goofed and should take responsibility for it.

      • @[email protected]
        link
        fedilink
        English
        410 months ago

        Because people who come to Lemmy tend to be more technical and better on questions of security than the average population. For most people around here, much of this is obvious and we’re all tired of hearing this story over and over while the public learns nothing.

        • @[email protected]
          link
          fedilink
          English
          0
          edit-2
          10 months ago

          Your frustration is valid. Calling people stupid is also an easy mistake, though, and a lot of people make it.

          • @[email protected]
            link
            fedilink
            English
            310 months ago

            Well, I’d never use the term to describe a person; it’s unnecessarily loaded. Ignorant, naive, etc. might be better.

            • @[email protected]
              link
              fedilink
              English
              2
              edit-2
              10 months ago

              Good to hear. I don’t know what I meant to say, but it looks like I accidentally (and reductively) summarized your point while being argumentative. 🫤 Oops.

      • @[email protected]
        link
        fedilink
        English
        2
        edit-2
        10 months ago

        To be fair, I think many AI users, including myself, have at times overshared beyond what is advised. I never claimed to be flawless, but that doesn’t absolve responsibility.

        I do the same oversharing here on Lemmy. But what I don’t do is share real login information, real names, SSNs, or addresses.

        OpenAI is absolutely still to blame for leaking users’ conversations, but even if it weren’t leaked, that data will be used for training and should never have been put in a prompt.

    • @[email protected]
      link
      fedilink
      English
      0
      edit-2
      10 months ago

      Maybe it has something to do with being retrained/fine-tuned on the conversations it’s having.

  • @[email protected]
    link
    fedilink
    English
    45
    edit-2
    10 months ago

    They weren’t there when I used ChatGPT just last night (I’m a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren’t from me (and I don’t think they’re from the same user either).

    This sounds more like a huge fuckup with the site, not the AI itself.

    Edit: A depressing amount of people commenting here obviously didn’t read the article…

    • 𝚝𝚛𝚔 · 14 points · 10 months ago

      Edit: A depressing amount of people commenting here obviously didn’t read the article…

      Every time

    • @[email protected] · 2 points · 10 months ago

      To be fair, the article headline is a straight-up lie. OpenAI leaked it by sending a user someone else’s chat history; ChatGPT didn’t leak anything.

      • @[email protected]
        link
        fedilink
        English
        310 months ago

        The ChatGPT service leaked the data. Maybe that can be attributed to the OpenAI organization that owns and operates ChatGPT, too, but it’s not “a straight up lie” to say that ChatGPT leaked information, when ChatGPT is the name of both the service and the LLM that powers the interesting part of that service.

  • HiramFromTheChi · 40 points · 10 months ago

    It also literally says to not input sensitive data…

    This is one of the first things I flagged regarding LLMs, and later on they added the warning. But if people don’t care and are still gonna feed the machine everything regardless, then that’s a human problem.

      • @[email protected]
        link
        fedilink
        English
        1110 months ago

        People literally do this, though. I work in IT, and people have said this exact thing out loud, with people around who can clearly hear what we’re saying.

        I’m like… I don’t want your password. I never want your password. I barely know what my password is. I use a password manager.

        IT should never need your password. Your boss and work shouldn’t need it. I can log in as you without it most of the time. I don’t, because I couldn’t give any less of a fuck what the hell you’re doing, but I can if I need to…

        If your IT person knows what they’re doing, most of the time for routine stuff, you shouldn’t really see them working, things just get fixed.

        Gah.

        • @[email protected]
          link
          fedilink
          English
          610 months ago

          Lmao my IT guy asks for our passwords to certain things on an annual basis and stores them as plain text in a fucking email.

          First time he did it I was like “uhh, aren’t we not supposed to share that?” And he just insisted he needed it. Whatever, if he wants to log in to my Autodesk account he’s free to. Not sure how much damage he could do.

          • @[email protected]
            link
            fedilink
            English
            310 months ago

            That’s the problem, right there.

            Companies either don’t allow for IT oversight of accounts or charge more for accounts that can be overseen. Companies don’t want to pay the extra, if that’s even an option on the platform, so some passwords end up being fairly common knowledge among the IT staff.

            As for your computer login? No thanks. Microsoft has been built pretty much from the ground up to be administratable. I can get into your files, check what you’re running, extract data, modify your settings, adjust just about anything I want if I know what I’m doing. All without you realizing that I’ve done anything.

            Companies like Autodesk really don’t have that kind of oversight available for accounts that they’re willing to provide to an administrator that’s managing your access. I should be able to list the license that you’ve been given, download whatever software that license is associated to, and purchase/apply new licensing, all from a central control panel for the company under my own administrative user account for their site, whether I’m assigned any software/licensing or not. They don’t. It makes my job very complicated when that’s the case.

            In the event you brick your computer (or lose it, or destroy it, or something… Whether intentional or not), I sometimes need your password to go download your software and install it, then apply your license to it, so that it’s ready to go when you get your system back. You might lose any customizations, but you’ll at least have the tools to do the job.

            On the flip side, an example of good access is with Microsoft 365. You’re having a problem finding an email? I can trace the message in the control panel, get its unique ID, grant myself full access to your mailbox, then switch to it while still signed in as myself, find the message you accidentally moved into the Drafts folder, and move it back to your inbox. Then I remove my access, and the message just appears in your inbox without you doing anything. I didn’t need to talk to you, I didn’t need your password… nothing. No interaction, just fixed.

            There are hundreds of examples of both good and bad administrative access, and it varies dramatically depending on the software vendor. In a perfect world I would have tools like what I get from Exchange Online for all the software and tools you use. Fact is, most companies are just too lazy to do it; instead of paying the developers to do things well, they’d rather give the money to their shareholders and let us IT folks suffer. They don’t give a shit about us.

      • @[email protected]
        link
        fedilink
        English
        7
        edit-2
        10 months ago

        Stupid is too harsh. They could be as intelligent as you or me, but they are fed propaganda/marketing, the thing is made to hide its rough edges, and the hype from the propaganda machine puts people in a hazy mindset where it’s hard to think.

      • @[email protected]
        link
        fedilink
        English
        210 months ago

        I had a student graduate recently who told me that he thought that technology just worked before joining my team of computer lab managers. I suspect that people think that tech in general JUST GOES.

          • @[email protected]
            link
            fedilink
            English
            1
            edit-2
            10 months ago

            No need for personal attacks. Since you won’t define it I will:

            The ability to acquire and apply knowledge and skills (from Oxford Languages)

            I would argue this applies to ChatGPT. ChatGPT exists under the hood as a neural network, and is clearly capable of acquiring knowledge during training. And ChatGPT is also clearly capable of applying that knowledge in producing answers to questions or novel solutions to problems.

            Based on this definition, I would argue that ChatGPT is intelligent. Whether ChatGPT is sentient or not is a very different question. I would argue not, but again, that depends on the definition of sentience.

  • @[email protected]
    link
    fedilink
    English
    3210 months ago

    And Google is bringing AI to private text messages. It will read all of your previous messages. On iOS? Better hope nothing important was said to anyone with an Android phone (not that I trust Apple either).

    The implications are terrifying. Nudes, private conversations, passwords, identifying information like your home address, etc. There’s a lot of scary scenarios. I also predict that Bard becomes closet racist real fast.

    We need strict data privacy laws with teeth. Otherwise corporations will just keep rolling out poorly tested, unsecured, software without a second thought.

    AI can do some cool stuff, but the leaks, misinformation, fraud, etc., scare the shit out of me. With a Congress aged ~60 years old on average, I’m not counting on them to regulate or even understand any of this.

    • @[email protected]
      link
      fedilink
      English
      510 months ago

      Fuck Google, but I do consider it incompetence on OpenAI’s part that conversations get exposed. That stuff really shouldn’t be possible with properly built software.

      If any personal information gets exposed by Google AI, it’s gonna be for their own analytics and their third-party partners. No one else.

  • @[email protected]
    link
    fedilink
    English
    2310 months ago

    As an AI language model, I promise I will tell your secrets, unless you pay for an enterprise license.

    • @[email protected]
      link
      fedilink
      English
      3510 months ago

      You could just watch what you input into it lol. ChatGPT is a pretty good tool to have in the toolkit, and like any tool, there are warnings and cautions on its use.

      • @[email protected]
        link
        fedilink
        English
        -210 months ago

        It’s an amazing tool. I think it’s funny how many people fight it tooth and nail. I like to think they’re the kind of person who refused to use spell check, or the touch tone phone.

        • @[email protected]
          link
          fedilink
          English
          3510 months ago

          There are very valid philosophical and ethical reasons not to use it. We’re not just being luddites for the hell of it. In many cases, we’re engineers and scientists with interest, experience, or expertise in neural nets and LLMs ourselves, and we don’t like how fast and loose (in a lot of really, really important ways) all these big companies are playing it with the training datasets, nor how they’re actively disregarding any sort of legal or ethical responsibility around the technology writ large.

            • @[email protected]
              link
              fedilink
              English
              210 months ago

              Uh, no. Why would that be the case? Every technology has unique upsides and downsides and the downsides of this one are not being handled correctly and are in fact being exacerbated.

      • @[email protected]
        link
        fedilink
        English
        610 months ago

        Absolutely. Host your own. Like the other person said, Hugging Face; and look into llama.cpp as well. Vicuna, Wizard uncensored (probably spelled that wrong).

        • @[email protected]
          link
          fedilink
          English
          210 months ago

          I’m sure the average person is totally capable of doing that, or even knowing about it /s. Jfc.

      • @[email protected]
        link
        fedilink
        English
        510 months ago

        I finally found some offline ones: jan.ai and koboldcpp. You download the GGUF model and run everything from your own PC; it just takes a lot of CPU and GPU for it to work acceptably. My setup can’t really manage much more than a 7B model.

    • zeluko · 4 points · 10 months ago

      To be fair, they are talking about the OpenAI end-user version, not the models themselves.
      It’s still sketchy to send your data to them willingly and hope that, because you pay per request, it’s not getting tracked and saved.
      My company is deep into Microsoft, so we all get Bing Chat Enterprise.
      Microsoft says it doesn’t store anything and runs on separate systems… I guess with a company offering they are more likely to put more protections in place, because a breach would mean real consequences.
      (As opposed to a breach affecting end users, most of whom don’t care or would never go through the legal trouble.)

  • @[email protected]
    link
    fedilink
    English
    2210 months ago

    Not directly related, but you can disable chat history per-device in ChatGPT settings - that will also stop OpenAI from training on your inputs, at least that’s what they say.

  • @[email protected]
    link
    fedilink
    English
    1410 months ago

    Who knew everyone had the same password as me? I always thought I was the only ‘hunter2’ out there!

    • Meowing Thing · 6 points · 10 months ago

      Wow! Lemmy is now blurring passwords? It only shows asterisks to me!

      • @[email protected]
        link
        fedilink
        English
        110 months ago

        Me too! I see

        “Who knew everyone had the same password as me? I always thought I was the only ‘*******’ out there!”

        Lemmy rocks!

    • @[email protected]
      link
      fedilink
      English
      810 months ago

      I think people who use local and open-source models would probably already know not to feed passwords to ChatGPT.

    • @[email protected]
      link
      fedilink
      English
      7
      edit-2
      8 months ago

      I absolutely agree. Use something like Ollama. Do keep in mind that it takes a lot of computing resources to run these models: about 5GB of RAM, and about a 3GB file size for the smaller-sized ollama-uncensored.
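For the curious: Ollama serves a local REST API on localhost:11434, so nothing leaves your machine. This sketch just builds the request body for its /api/generate endpoint; the actual HTTP call is left commented since it needs a running Ollama daemon, and the model name is illustrative (use whatever you've pulled locally).

```python
import json

# Request body for Ollama's local /api/generate endpoint.
payload = json.dumps({
    "model": "llama2-uncensored",  # illustrative -- any locally pulled model
    "prompt": "Summarize why plaintext passwords don't belong in prompts.",
    "stream": False,
})

# With Ollama running, the call itself would look like:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=payload.encode(), headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())

print(json.loads(payload)["model"])
```

Since it's all local, the "don't paste secrets" rule relaxes considerably, which is the appeal.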

      • @[email protected]
        link
        fedilink
        English
        210 months ago

        It’s not great, but an old GTX GPU can be had cheaply if you look around for refurbs; as long as there is a warranty, you’re gold. Stick it into a 10-year-old Xeon workstation off eBay and you can easily have a machine with 8 cores, 32GB RAM, and a solid GPU for under $200.

        • @[email protected]
          link
          fedilink
          English
          1
          edit-2
          10 months ago

          It’s the RAM requirement that stings rn. I believe I’ve got the specs, but I was told (or misremember) a 64GB RAM requirement for a model.

          • @[email protected]
            link
            fedilink
            English
            110 months ago

            IDK what you’ve read, but I have 24GB and can use Dreambooth and fine-tune Mistral no problem. RAM is only required to load the model briefly before it’s passed to VRAM iirc, and that’s the main deal, you need 8GB VRAM as an absolute minimum, even my 24GB VRAM is often not enough for some high end stuff.

            Plus, RAM is actually really cheap compared to a GPU. Remember it doesn’t have to be super fancy RAM either; DDR3 is fine if you’re not gaming on, like, a Ryzen or something modern.
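The memory numbers being traded here are easy to sanity-check yourself: a model's weight footprint is roughly parameter count × bytes per parameter (plus overhead for the KV cache and activations, ignored in this back-of-envelope sketch).

```python
# Rough weight footprint: params * bytes-per-param, in GiB.
def weight_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 2**30

print(round(weight_gb(7, 2), 1))    # 7B model in fp16  -> 13.0
print(round(weight_gb(7, 0.5), 1))  # same model, 4-bit -> 3.3
```

That gap between fp16 and 4-bit quantization is why a quantized 7B model fits on modest consumer GPUs while the full-precision version doesn't.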

      • ??? · 8 points · 10 months ago

        As a general rule of thumb, do not do this.

      • @[email protected]
        link
        fedilink
        English
        -210 months ago

        Using an LLM as a password generator. The fuck? That’s like using the Sistine Chapel as inspiration for a postcard.