• @[email protected]

        Yes, that’s generally useless, and it should not be shoved down people’s throats. But 30% accuracy still has its uses, especially if the result can be programmatically verified.

        • @[email protected]

          Run something with a 70% failure rate 10x and you get a cumulative ~97% chance that at least one run passes. LLMs don’t get tired and they can be run in parallel.
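          As a rough sanity check on that number (a minimal sketch, assuming every attempt is an independent coin flip - a reply further down points out the attempts aren’t really i.i.d.):

          ```rust
          // Back-of-the-envelope check: 70% failure rate per attempt, 10 attempts.
          fn main() {
              let fail_rate: f64 = 0.7;                // per-attempt failure rate
              let attempts = 10;
              let all_fail = fail_rate.powi(attempts); // probability every attempt fails
              println!("P(all 10 fail)         = {all_fail:.4}");                // ~0.0282
              println!("P(at least one passes) = {:.4}", 1.0 - all_fail);        // ~0.9718
              println!("P(all 10 correct)      = {:.7}", 0.3f64.powi(attempts)); // ~0.0000059
          }
          ```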

          • @[email protected]

            I have actually been doing this lately: iteratively prompting AI to write software and fix its own errors until something useful comes out. It’s a lot like machine translation. I speak fluent C++ but I don’t speak Rust, yet I can hammer away at the AI (with English-language prompts) until it produces passable Rust for something I could have written myself in C++ in half the time and effort.

            I also don’t speak Finnish, but Google Translate can take what I say in English and put it into at least somewhat comprehensible Finnish without egregious translation errors most of the time.

            Is this useful? When C++ is getting banned for “security concerns” and Rust is the required language, it’s at least a little helpful.

            • @[email protected]

              I’m impressed you can make strides in Rust with AI. I am in a similar boat, except I’ve found LLMs are terrible with Rust.

              • @[email protected]

                I was 0/6 on various trials of AI for Rust over the past 6 months, then I finally caught a success. It turns out I was asking it to use a difficult library - I can’t make the thing I want work in that library either (the library docs say it’s possible, but…). When I posed a more open-ended request without specifying which library to use, it succeeded, after a fashion. It will give you code with cargo build errors; I copy-paste the error back to it like “address: <pasted error message>”, and a bit more than half the time it responds with a working fix.
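                A minimal sketch of that copy-paste-the-error loop, assuming a hypothetical `ask_llm` helper standing in for whatever chat interface is actually used (it is a placeholder, not a real API):

                ```rust
                // Sketch of the "paste the cargo error back" workflow described above.
                use std::process::Command;

                fn ask_llm(_prompt: &str) -> String {
                    // Placeholder: send the prompt to your model of choice and
                    // return the full Rust source it replies with.
                    unimplemented!("wire this up to an actual LLM")
                }

                fn main() {
                    let mut source = ask_llm("Write a Rust program that <does the thing>");
                    for attempt in 1..=10 {
                        std::fs::write("src/main.rs", &source).expect("could not write src/main.rs");
                        let out = Command::new("cargo")
                            .arg("build")
                            .output()
                            .expect("failed to run cargo build");
                        if out.status.success() {
                            println!("built successfully on attempt {attempt}");
                            return;
                        }
                        // Feed the compiler output straight back: "address: <pasted error message>"
                        let err = String::from_utf8_lossy(&out.stderr);
                        source = ask_llm(&format!("address: {err}"));
                    }
                    eprintln!("still failing after 10 attempts");
                }
                ```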

                • 𝕛𝕨𝕞-𝕕𝕖𝕧

                  i find that rust’s architecture and design decisions give the LLM quite good guardrails and kind of keep it from doing anything too wonky. the issue arises in cases like these where the rust ecosystem is quite young and documentation/instruction can be poor, even for a human developer.

                  i think rust actually is quite well suited to agentic development workflows, it just needs to mature more.

                  • @[email protected]

                    “i think rust actually is quite well suited to agentic development workflows, it just needs to mature more.”

                    I agree. The agents also need to mature to handle multi-level structures: working on a collection of smaller modules that build up into a larger system with more functionality. I can see the path forward for those tools, but the ones I have access to definitely aren’t there yet.

          • @[email protected]

            The problem is that the attempts are not i.i.d., so this doesn’t really work. It works a bit, which in my opinion is why chain-of-thought is effective (it gives the LLM a chance to posit a couple of answers first). However, we’re already looking at “agents,” so they’re probably already doing chain-of-thought.

            • @[email protected]

              Very fair comment. In my experience, even when increasing the temperature, you get stuck in local minima.

              I was just trying to illustrate how 70% failure rates can still be useful.

                • @[email protected]

                  No, the chances of being wrong 10 times in a row are about 3%, so the chances of being right at least once are about 97%.

                  • Log in | Sign up

                    Ah, my bad, you’re right, for being consistently correct, I should have done 0.3^10=0.0000059049

                    so the chances of it being right ten times in a row are less than one thousandth of a percent.

                    No wonder I couldn’t get it to summarise my list of data right and it was always lying by the 7th row.

                  • 𝕛𝕨𝕞-𝕕𝕖𝕧

                    don’t you dare understand the explicitly obvious reasons this technology can be useful and the essential differences between P and NP problems. why won’t you be angry >:(

        • @[email protected]

          Less broadly useful than 20 tons of mixed-texture human shit, and more ecologically devastating.

          • @[email protected]

            Are you just trolling, or do you seriously not understand how something that can do a task correctly with 30% reliability can be made useful if the result can be automatically verified?

            • @[email protected]

              It’s not a magical 30%; factors apply. It’s not even a mind that thinks and just isn’t very good.

              This isn’t like a magical die that gives you truth on a 5 or a 6, and lies on 1, 2, 3, 7, and 4.

              This is a (very complicated, very large) language or other data graph that programmatically identifies an average - correct 30% of the time, according to one Potemkin-ass demonstration. Which means the more verifiable the result is, the easier it is to just use a simpler, cheaper tool that will give you a better, more reliable answer much faster.

              And 20 tons of human shit has uses! If you know its provenance, there’s all sorts of population-level public health surveillance you can do to get ahead of disease trends! It’s also got some good agricultural stuff in it - phosphorus and such, if you can extract it.

              Stop. Just please fucking stop glazing these NERVE-ass fascist shit-goblins.

              • @[email protected]

                I think everyone in the universe is aware of how LLMs work by now; you don’t need to explain it to someone just because they think LLMs are more useful than you do.

                IDK what you mean by glazing but if by “glaze” you mean “understanding the potential threat of AI to society instead of hiding under a rock and pretending it’s as useless as a plastic radio,” then no, I won’t stop.

                • @[email protected]

                  It’s absolutely dangerous, but it doesn’t have to work even a little to do damage; hell, it already has. Your thing just makes it sound much more capable than it is. And it is not.

                  Also, it’s not AI.

                  Edit: and in a comment replying to this one, one of your fellow fanboys proved

                  “everyone knows how they work”

                  wrong.

        • @[email protected]

          Those are people who could be living their lives, pursuing their ambitions, whatever. They could get some shit done. Comparison not valid.

          • @[email protected]

            The comparison is about the correctness of their work.

            Their lives have nothing to do with it.

            • Log in | Sign up

              Human lives are the most important thing of all. Profits are irrelevant compared to human lives. I get that that’s not how Bezos sees the world, but he’s a monstrous outlier.

            • @[email protected]

              So, first, bad comparison.

              Second: if that’s the equivalent, why not do the one that makes the wealthy let a few pennies fall on actual people?