• @[email protected]
    link
    fedilink
    37
    2 days ago

    This feels like the modern version of those people who gave out the numbers on their credit cards back in the 2000s and would freak out when their bank accounts got drained.

  • Hilarious and true.

    Last week some up-and-coming coder was showing me the tons and tons of sites they’d made with the help of ChatGPT. They all look great on the front end. So I tried to use one. Error. Tried to use another. Error. Mentioned the errors and they brushed it off. I am 99% sure they do not have the coding experience to fix the errors. I politely disconnected from them at that point.

    What’s worse is when a non-coder asks me, a coder, to look over and fix their AI-generated code. My response is “no, but if you set aside an hour I will teach you how HTML works so you can fix it yourself.” Not one of these kids asking AI to code things has ever accepted, which, to me, means they aren’t worth my time. Don’t let them use you like that. You aren’t another tool they can combine with AI to generate things correctly without having to learn anything themselves.

    • @[email protected]
      link
      fedilink
      27
      2 days ago

      I’ve been a professional full-stack dev for 15 years and dabbled for years before that. I can absolutely code and know what I’m doing (and have used Cursor, and just deleted most of what it made for me when I let it run).

      But my frontends have never looked better.

    • @[email protected]
      link
      fedilink
      English
      59
      2 days ago

      100% this. I’ve gotten to where, when people try to rope me into their new million-dollar app idea, I tell them that there are fantastic resources online for teaching yourself to do everything they need. I offer to help them find those resources and even to help when they get stuck. I’ve probably done this dozens of times by now. No bites yet. All those millions wasted…

  • RedSnt 👓♂️🖥️
    link
    fedilink
    52
    2 days ago

    Yes, yes, there are weird people out there. That’s the whole point of having humans who understand the code: they can correct it.

  • @[email protected]
    link
    fedilink
    58
    2 days ago

    The fact that “AI” hallucinates so extensively and gratuitously just means the only way it can benefit software development is as a gaggle of coked-up juniors, leaving a senior unable to work on their own stuff because they’re constantly in janitorial mode.

    • @[email protected]
      link
      fedilink
      15
      edit-2
      2 days ago

      Plenty of good programmers use AI extensively while working. Me included.

      Mostly as an advanced autocomplete, template builder, or documentation parser.

      You obviously need to be good at it so you can see at a glance whether the written code is good or whether it’s bullshit. But if you are good, it can really speed things up without any risk, as you will only copy code that you know is good and discard the bullshit.

      Obviously you cannot develop without programming knowledge, but with programming knowledge it’s just another tool.
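
      As a concrete illustration (the class and its fields here are hypothetical, purely to show the pattern): boilerplate like equals/hashCode is fully determined by the fields, which is exactly where autocomplete-style AI shines, because a competent reviewer can confirm or discard the result at a glance.

      ```java
      import java.util.Objects;

      // Hypothetical value class; the equals/hashCode below is the kind of
      // mechanical boilerplate an LLM autocomplete fills in reliably.
      public final class User {
          private final String name;
          private final String email;

          public User(String name, String email) {
              this.name = name;
              this.email = email;
          }

          @Override
          public boolean equals(Object o) {
              if (this == o) return true;
              if (!(o instanceof User)) return false;
              User other = (User) o;
              return Objects.equals(name, other.name)
                      && Objects.equals(email, other.email);
          }

          @Override
          public int hashCode() {
              return Objects.hash(name, email);
          }
      }
      ```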

      • @[email protected]
        link
        fedilink
        8
        1 day ago

        I maintain a strong conviction that if a good programmer uses an LLM in their work, they just add more work for themselves, and if a less-than-good one does it, they add new, exciting, and difficult-to-find bugs while maintaining false confidence in their code and themselves.
        I have seen so much code that looks good on first, second, and third glance but is actually full of shit, and I was only able to find that shit through external validation: talking to the dev, or brainstorming ways to test it. Those are things you categorically cannot do with an unreliable random-word generator.

        • @[email protected]
          link
          fedilink
          2
          edit-2
          1 day ago

          That’s why you use unit tests and integration tests.
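
          For example, a minimal sketch (assuming JUnit 5; slugify here is a hypothetical stand-in for whatever LLM-assisted helper you just pasted in). The tests pin down the behavior you require, regardless of who, or what, wrote the body:

          ```java
          import static org.junit.jupiter.api.Assertions.assertEquals;

          import org.junit.jupiter.api.Test;

          class SlugifyTest {

              // Hypothetical helper under test, e.g. copied from an LLM.
              static String slugify(String s) {
                  return s.trim()
                          .toLowerCase()
                          .replaceAll("[^a-z0-9]+", "-")
                          .replaceAll("^-|-$", "");
              }

              @Test
              void lowercasesAndHyphenates() {
                  assertEquals("hello-world", slugify("Hello World"));
              }

              @Test
              void stripsNonAlphanumerics() {
                  assertEquals("hello", slugify("  Hello!  "));
              }
          }
          ```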

          I can write bad code myself, or copy bad code from who-knows-where. That’s not something introduced by LLMs.

          Remember the famous Linus letter? “You code this function without understanding it and thus your code is shit.”

          As I said, it’s just a tool, like many others before it.

          I use it as a regular practice while coding. And truth be told, reading my code afterwards, I could not distinguish which parts were LLM and which parts I wrote entirely by myself; honestly, I don’t think anyone would be able to tell the difference.

          It would probably be a nice idea to do some kind of Turing test: put up a blind test to pick out the AI-written parts of some code, and see how precisely people can tell them apart.

          I may come back with a particular piece of code that I specifically remember was an output from DeepSeek; within the whole context, it would probably be indistinguishable.

          Also, not all LLM usage is about copying from it. Many times you paste code into it and ask the thing to explain it to you, or ask general questions. For instance, to find specific functions in extensive C# libraries.

        • @[email protected]
          link
          fedilink
          English
          1
          22 hours ago

          There is an exception to this, I think. I don’t make AI write much, but it is convenient to give it a simple Java class and say “write a toString” and have it spit out something usable.
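
          Something like this (the Point class is made up for illustration; the toString body is typical of what the model hands back, and trivial to verify by eye):

          ```java
          // Hypothetical class handed to the model with the prompt "write a toString".
          public class Point {
              private final int x;
              private final int y;

              public Point(int x, int y) {
                  this.x = x;
                  this.y = y;
              }

              // Typical generated result: plain, readable, perfectly usable.
              @Override
              public String toString() {
                  return "Point{x=" + x + ", y=" + y + "}";
              }
          }
          ```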

    • @[email protected]
      link
      fedilink
      English
      6
      edit-2
      2 days ago

      Depending on what it is you’re trying to make, it can actually be helpful as one of many components to help get your feet wet. The same way modding games can be a path to learning a lot by fiddling with something that’s complete, getting suggestions from an LLM that’s been trained on a bunch of relevant tutorials can give you enough context to get started. It will definitely hallucinate, and figuring out when it’s full of shit is part of the exercise.

      It’s like midway between rote-following tutorials, modding, and asking for help in support channels. It isn’t as rigid as the available tutorials, and though it’s prone to hallucination and not as knowledgeable as support-channel regulars, it’s also a lot more patient in many cases and doesn’t have its own life that it needs to go live.

      Decent learning tool if you’re ready to check what it’s doing step by step, look for inefficiencies and mistakes, and not blindly believe everything it says. Just copying and pasting while learning nothing and assuming it’ll work, though? That’s not going to go well at all.

    • @[email protected]
      link
      fedilink
      English
      -5
      2 days ago

      It’ll just keep getting better at it over time though. The current AI is way better than 5 years ago, and in 5 years it’ll be way better than now.

      • @[email protected]
        link
        fedilink
        14
        2 days ago

        That’s certainly one theory, but as we are largely out of training data there’s not much new material to feed in for refinement. Using AI output to train future AI is just going to amplify the existing problems.

        • @[email protected]
          link
          fedilink
          English
          -10
          2 days ago

          I mean, the proof is sitting there wearing your clothes. General intelligence exists all around us. If it can exist naturally, we can eventually do it through technology. Maybe there need to be more breakthroughs before it happens.

            • @[email protected]
              link
              fedilink
              -2
              2 days ago

              I mean - have you followed AI news? This whole thing kicked off maybe three years ago, and now local models can render video and do half-decent reasoning.

              None of it’s perfect, but a lot of it’s fuckin’ spooky, and any form of “well it can’t do [blank]” has a half-life.

              • @[email protected]
                link
                fedilink
                English
                3
                2 days ago

                Seen a few YouTube channels now that just churn out AI-generated content, usually audio-only with a generated picture on screen. Vast amounts can be made that cheaply; Google is going to have fun storing it all when each video only gets like 25 views. I think at some point they are going to have to delete stuff.

              • @[email protected]
                link
                fedilink
                English
                1
                1 day ago

                If you follow AI news you should know that it’s basically out of training data, that gains from extra training fall off exponentially (so extra training data would only have limited impact anyway), that companies are starting to train AI on AI-generated data, both intentionally and unintentionally, and that hallucinations and unreliability are baked into the technology.

                You also shouldn’t take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was demonstrated answering, and it isn’t better than before, or than other LLMs, at solving maths problems for which it doesn’t already have the answers hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) for each answer, for a marginal improvement in functionality.

                The idea that “they’ve come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate” isn’t really supported by the evidence.

                • @[email protected]
                  link
                  fedilink
                  2
                  1 day ago

                  We don’t need leaps and bounds from here. We’re already in science-fiction territory. Incremental improvement has silenced a wide variety of naysaying.

                  And this is with LLMs - which are stupid. We didn’t design them with logic units or factoid databases. Anything they get right is an emergent property from guessing plausible words, and they get a shocking amount of things right. Smaller models and faster training will encourage experimentation for better fundamental goals. Like a model that can only say yes, no, or mu. A decade ago that would have been an impossible sell - but now we know data alone can produce a network that’ll fake its way through explaining why the answer is yes or no. If we’re only interested in the accuracy of that answer, then we’re wasting effort on the quality of the faking.

                  Even with this level of intelligence, where people still bicker about whether it is any level of intelligence, dumb tricks keep working. Like telling the model to think out loud. Or having it check its work. These are solutions an author would propose as comedy. And yet: it helps. It narrows the gap between “but right now it sucks at [blank]” and having to find a new [blank]. If that never lets it do math properly, well, buy a calculator.

  • @[email protected]
    link
    fedilink
    43
    2 days ago

    An otherwise meh article concluded with “It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience.”

    Much as we want to point and laugh - this is not some loon’s fantasy. This is happening. Some dingus told spicy autocomplete ‘make me a database!’ and it did. It’s surely as exploit-hardened as a wet paper towel, but it functions. Largely as a demonstration of Kernighan’s law.

    This tech is borderline miraculous, even if it’s primarily celebrated by the dumbest motherfuckers alive. The generation and the debugging will inevitably improve to where the machine is only as bad at this as we are. We will be left with the hard problem of deciding what the software is supposed to do.

    • @[email protected]
      link
      fedilink
      English
      4
      edit-2
      2 days ago

      It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience.

      The years of specialized education and experience are not for writing code in and of itself. Anyone with an internet connection can learn to do that in not that long. What takes years to perfect is writing reliable, optimized, secure code; communicating and working efficiently with others; writing code that can be maintained by others long after you leave; knowing the theories behind why code written in a certain way works better than code written in some other way; and knowing the qualitative and quantitative measures to even be able to assess whether one piece of code is “better” than another. Source: self-learned programming, started building stuff on my own, and then went through an actual computer science program. You miss so much nuance and underlying theory when you self-learn, which directly translates into bad code that’s a nightmare to maintain.

      Finally, the most important thing about a person with years of specialized education and experience is that you can actually have a conversation with them about their code: ask them to explain in detail how it works and the process they used to write it, then ask followup questions and request further clarification. Trying to get AI to explain itself is a complete shitshow, and while humans do have a propensity to make shit up to cover their own/their coworkers’ asses, AI does that even when it makes no sense not to tell the truth, because it doesn’t really know what “the truth” is or why other people would want it.

      Will AI eventually catch up? Almost certainly, but we’re nowhere close to that right now. Currently it’s less like an actual professional developer and more like someone who knows just enough to copy paste snippets from Stack Overflow and hack them together into a program that manages to compile.

      I think the biggest takeaway with AI programming is not that it can suddenly do just as well as someone with years of specialized education and experience, but that we’re going to get a lot more shitty software that looks professional on the surface but is a dumpster fire inside.

      • @[email protected]
        link
        fedilink
        2
        2 days ago

        Self-learned programming, started building stuff on my own, and then went through an actual computer science program.

        Same. Starting with QBASIC, no less, which is an excellent source of terrible practices. At one point I created a code snippet that would perform a division and multiplication to find the remainder, because I’d never heard of modulo. Or functions.
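
        For the curious, that trick reconstructed (in Java rather than QBASIC, and purely as a sketch): integer division truncates, so dividing and multiplying back isolates exactly the remainder that the modulo operator would give.

        ```java
        public class Remainder {
            public static void main(String[] args) {
                int a = 17, b = 5;
                // Integer division truncates: 17 / 5 == 3, and 3 * 5 == 15,
                // so subtracting leaves the remainder.
                int withoutModulo = a - (a / b) * b;
                System.out.println(withoutModulo); // 2
                System.out.println(a % b);         // 2, the operator I had never heard of
            }
        }
        ```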

        Right now, this lets people skip the hair-pulling syntax errors and tell the computer what they think the program should be doing, in plain English. It’s not even “compilable pseudocode.” It’s high-level logic, nearly to the point that logic errors are all that can remain. It desperately needs some non-answer feedback states for when you tell it to “implement MP4 encoding” and expect that to Just Work.

        But it’s teaching people to write the comments first.

        we’re nowhere close to that right now.

        The distance from here to “oh shit” is shorter than we’d prefer. This tech works like a joke. “Chain of thought” apparently means telling the robot to act smarter… and it does. Which is almost less silly than Stable Diffusion removing every part of the marble that doesn’t look like Hatsune Miku. If it’s stupid, but it works… it’s still stupid. But it works.

        Someone’s gonna prompt “Write like Donald Knuth” and the robot’s gonna go, “Oh, you wanted good code? Why didn’t you say so.”

    • @[email protected]
      link
      fedilink
      7
      edit-2
      2 days ago

      Yeah, I’ve been using it heavily. While someone without technical knowledge will surely allow AI to build a highly insecure app, people with more technological knowledge are going to propel things to a level where the less tech savvy will have fewer and fewer pitfalls to fall into.

      For the past two months, I’ve been leveraging AI to build a CUE system that takes a user desire (e.g. “I want to deploy a system with an app that uses a database and a message queue,” expressed as a short JSON) and converts it into a simple configuration file that unpacks into all the kubernetes manifests required to deploy the system they want.

      I’m trying to be fully shift-left about it. So, even if the user’s configuration is as simple as my example, it should still use CUE templating to construct the files needed for a full DevSecOps stack: Ingress Controller, KEDA, some kind of logging such as the ELK stack, vulnerability scanners, policy agents, etc. The idea is that every stack should at all times be created in a secure state. And extra CUE transformations ensure that you can split the deployment destinations in any type of way: local/on-prem, any cloud provider, or any combination thereof.

      The idea is that if I need to swap out a component, I just change one override in the config and the incoming component already knows how to connect to everything and do what the previous component was doing because I’ve already abstracted the component’s expected manifest fields using CUE. So, I’d be able to do something like changing my deployment from one cloud to another with a click of a button. Or build up a whole new fully secure stack for a custom purpose within a few minutes.

      The idea is I could use this system to launch my own social media app, since I’ve been planning the ideal UX for many years. But whether or not that pans out, I can take my CUE system and put a web interface over it to turn it into a mostly automated PaaS. I figure I could undercut most PaaS companies and charge just a few percentage points above cost (using OpenCost to track the expenses). If we get to the point where we have a ton of novices creating apps with AI, I might be in a lucrative position if I have a PaaS that can quickly scale and provide automated secure back ends.

      Of course, I intend on open-sourcing the CUE once it’s developed enough to get things off the ground. I’d really love to make money from my creative ideas on a socialized-media app that I create, but I’m less excited about gatekeeping this kind of advancement.

      Interested to know if anyone has done this type of project in the past. Definitely wouldn’t have been able to move at nearly this speed without AI.

        • @[email protected]
          link
          fedilink
          1
          edit-2
          2 days ago

          I’ve never heard of this before, but you’re right that it sounds very much like what I’m doing. Thank you! Definitely going to research this topic thoroughly now to make sure I’m not reinventing the wheel.

          Based on the sections in that link, I wondered if the MASD project was more geared toward the software dev side or devops. I asked Google and got this AI response:

          “MAD” (Modern Application Development) services, often used in the context of software development, encompass a broader approach that includes DevOps principles and tools, focusing on rapid innovation and cloud-native architectures, rather than solely on systems development.

          So (if accurate), it sounds like all the modernized automation of CI/CD, IaC, and GitOps that I know and love are already engaging in MAD philosophy. And what I’m doing is really just providing the last puzzle piece to fully automate stack architecting. I’m guessing the reason it doesn’t already exist is because a lot of the open source tools I’m relying on to do the heavy lifting inside kubernetes are themselves relatively new. So, hopefully this all means I’m not wasting my time lol

          • @[email protected]
            link
            fedilink
            English
            2
            2 days ago

            AFAICT MASD is an iteration on MDE which incorporates parts of MAD but not in a direct fashion.

            Lots of acronyms there.

            These types of systems do exist, they just aren’t mainstream because there hasn’t been a version of them that could be easily used for general development outside of the specific mid-level niches they are built in.

            I think it’s the goal, but I’ve not seen anything come close yet.

            Admittedly I’m not an authority so it may just be me missing the important things.

            • @[email protected]
              link
              fedilink
              1
              2 days ago

              Thanks for the info. When I searched MASD, it told me instead about MAD, so it’s good to know how they’re differentiated.

              This whole idea comes from working in a shop where most of their DevSecOps practices were fantastic, but we were maintaining fleets of Helm charts (picture the same Helm override sent to lots of different places with slightly different configuration). The unique values for each deployment were buried “somewhere” in all of these very lengthy values.yaml override files. You basically had to dig into thousands of lines of code whenever you didn’t know off-hand how a deployment was configured.

              I think when you’re in the thick of a job, people tend to just do what gets the job done, even if it means you’re going to have to do it again in two weeks. We want to automate, but it becomes a battle between custom-fitting and generalization. With the tradeoff being that generalization takes a lot of time and effort to do correctly.

              So, I think plenty of places are “kind of” at this level, where they might use CUE to generalize but tend to modify the CUE for each use case individually. But many DevOps teams, I suspect, aren’t even using CUE; they’re still modifying raw yaml. I think of yaml like plumbing: it’s very important, but best not exposed for manual modification unless necessary. Mostly I just see CUE used to construct and deliver Helm/kubernetes on the cluster, in tools like KubeVela and Radius. This is great for overriding complex Helm manifests with a simple Application yaml, but the missing niche I’m trying to fill is a tool that provides the connections between different tools and constrains the overall structure of a DevSecOps stack.

              I’d imagine any company with a team who has solved this problem is keeping it proprietary since it represents a pretty big advantage at the moment. But I think it’s just as likely that a project like this requires such a heavy lift before seeing any gain that most businesses simply aren’t focusing on it.

              • @[email protected]
                link
                fedilink
                English
                1
                2 days ago

                My experiences are similar to yours, though less k8s-focused and more general DevSecOps.

                it becomes a battle between custom-fitting and generalisation.

                This is mentioned in the link as “Barely General Enough.” I’m not sure I fully subscribe to that specific interpretation, but the trade-off between generalisation and specialisation is certainly a point of contention in all but the smallest dev houses (assuming they are not just cranking out hard-coded one-off solutions).

                I dislike the yaml syntax, in the same way I dislike python, but it is pervasive in the industry at the moment, so you work with what you have.

                I don’t think yaml is the issue as much as the uncontrolled nature of the usage.

                You’d have the same issue with any format this open to interpretation that was being created/edited by hand.

                As in, if the yaml were generated and used automatically as part of a chain, I don’t think it’d be an issue, but it is not nearly prescriptive enough to produce the high-level kind of model definitions further up the requirements stack.

                Note: I’m not saying it couldn’t be done in yaml; I’m saying it would be a massive effort to shoehorn what was needed into a structure that wasn’t designed for that kind of thing.

                Which then brings us back to the generalisation vs. specialisation argument: do you create a hyper-specific DSL that allows you to define only things that will work within the boundaries of what you want (and does that mean it can only ever work within those boundaries), or do you introduce more general definitions and the complexity that comes with them?

                Whether the solution is another layer of abstraction into a different format or something else entirely, I’m not sure, but I am sure that raw yaml isn’t it.

                • @[email protected]
                  link
                  fedilink
                  1
                  2 days ago

                  Yes, I think yaml’s biggest strength is also its built-in flaw: its flexibility. Yaml as a data structure is built to be so open-ended that it’s no surprise when every component written in Go and using yaml as a data structure builds its spec in a slightly different way, even when performing the exact same functions.

                  That’s why I yearned for something like CUE and was elated to discover it. CUE provides the control that yaml by its very nature cannot enforce. I can create CUE that defines the yaml structure in general so anything my system builds is valid yaml. And I can create a constraint which builds off of that and defines the structure of a valid kubernetes manifest. Then, when I go to define the CUE that builds up a KubeVela app I can base its constraints on those k8s constraints and add only KubeVela-specific rules.

                  Then I have modules of other components that could be defined as KubeVela Applications on the cluster but I define their constraints agnostically and merge the constraint sets together to create the final yaml in proper KubeVela Application format. And if the component needs to talk to another component, I standardize the syntax of the shared function and then link that function up to whatever tool is currently in use for that purpose.

                  I think it’s a good point that overgeneralization can and does occur and my “one size fits all” approach might not actually fit all. But I’m hoping that if I finish this tool and shop it to a place that thinks it’s overkill, I can just have them tell me which parts they want generalized and define a function to export a subset of my CUE for their needs. And in that scenario, I would flip and become a big proponent of “Just General Enough”. Because then, they can have the streamlined fit-for-purpose system they desire and I can have the satisfaction of not having to do the same work over and over again.

                  But my fear about going down that road is that it might be less of an export of a subset of code and more of building yet another system that can MAD-style generate my whole CUE system for whatever level of generalization I want. As you say, it just becomes another abstraction layer. Can’t say I’m quite ready to go that far 😅

    • @[email protected]
      link
      fedilink
      English
      9
      edit-2
      2 days ago

      This industry also spends most of its money either changing things that don’t need to change (we optimized the right-click menu to remove this item, mostly to fuck with your muscle memory) or avoiding change (rather than implementing 2FA, banks have implemented 58372658 distinct algorithms for detecting things that might be fraud).

      If you’re just talking about enabling small scale innovation you’re probably right, but if you’re talking about the industry as a whole I think you need to look at what people in industry are actually spending their time on.

      It’s not code.

  • @[email protected]
    link
    fedilink
    English
    21
    edit-2
    2 days ago

    ITT: “Haha, yah AI makes shitty insecure code!”

    <mad scrabbling in background to review all the code committed in the last year>

  • @[email protected]
    link
    fedilink
    6
    edit-2
    2 days ago

    If I were leojr94, I’d be mad as hell about this impersonator soiling the good name of leojr94—most users probably don’t even notice the underscore.

    • @[email protected]
      link
      fedilink
      13
      2 days ago

      Guy who doesn’t know how to write software uses GenAI to make software that he then puts up for sale, and brags about not knowing how to write software.

      People buy his software and, intentionally or not, start poking holes in it by using it in ways neither he nor the GenAI anticipated. Guy panics because he has no clue how to fix it.

    • @[email protected]
      link
      fedilink
      12
      2 days ago

      Man uses AI to make software. Man learns hard way that AI doesn’t care about stuff like security.

  • @[email protected]
    link
    fedilink
    280
    2 days ago

    Bonus points if the attackers use ai to script their attacks, too. We can fully automate the SaaS cycle!

    • 1024_Kibibytes
      link
      fedilink
      109
      2 days ago

      That is the real dead-Internet theory: everything from production to malicious actors to end users is just AI scripts wasting electricity and hardware resources for the benefit of no human.

      • @[email protected]
        link
        fedilink
        English
        17
        2 days ago

        The Internet will continue to function just fine, just as it has for 50 years. It’s the World Wide Web that is on fire. Pretty much has been since a bunch of people who don’t understand what Web 2.0 means decided they were going to start doing “Web 3.0” stuff.

        • @[email protected]
          link
          fedilink
          English
          17
          2 days ago

          The Internet will continue to function just fine, just as it has for 50 years.

          Sounds of intercontinental data cables being sliced

        • @[email protected]
          link
          fedilink
          25
          2 days ago

          Not only the Internet. Soon everybody will use AI for everything. Lawyers will use AI in court, on both sides. AI will fight against AI.

          • @[email protected]
            link
            fedilink
            26
            edit-2
            2 days ago

            I was at a coffee shop the other day, and two lawyers were discussing how they were doing stuff with AI that they didn’t know anything about and then just sending it to their clients.

            That shit scared the hell out of me.

            And everything will just keep getting worse, with more and more common folk eating up the hype and brainwashing, using these highly unreliable tools at all levels of our society, every day, to make decisions about things they have no idea about.

            • @[email protected]
              link
              fedilink
              English
              16
              2 days ago

              I’m aware of an effort to get an LLM to summarize medical reports for doctors.

              Very disturbing.

              The people driving it where I work tend to be the people who know the least about how computers work.

          • @[email protected]
            link
            fedilink
            9
            2 days ago

            It was a time of desolation, chaos, and uncertainty. Brother pitted against brother. Babies having babies.

            Then one day, from the right side of the screen, came a man. A man with a plastic rectangle.

      • @[email protected]
        link
        fedilink
        3
        2 days ago

        That would only happen if we give our AI assistants the power to buy things on our behalf and to manage our budgets. They will decide among themselves who needs what, and the money will flow into billionaires’ pockets without any human intervention. If humans go far enough, not even rich people would be rich, as trust funds and stock portfolios would operate under AI. If the AI achieves singularity with that level of control, we are all basically in spectator mode.

  • @[email protected]
    link
    fedilink
    English
    163
    2 days ago

    AI is yet another technology that enables morons to think they can cut out the middleman of programming staff, only to very quickly realise that we’re more than just monkeys with typewriters.

      • @[email protected]
        link
        fedilink
        10
        2 days ago

        I was going to post a note about typewriters, allegedly from Tom Hanks, which I saw years and years ago; but I can’t find it.

        Turns out there’s a lot of Tom Hanks typewriter content out there.

        • 3DMVR
          link
          fedilink
          English
          5
          2 days ago

          He donated his to my HS randomly. It was supposed to go to the valedictorian, but the school kept it, lmao. It was so funny, because they showed everyone a video where he says not to keep the typewriter and that it’s for a student.

      • @[email protected]
        link
        fedilink
        42
        2 days ago

        But then they’d have a dev team who wrote the code and therefore knows how it works.

        In this case, the hackers might understand the code better than the “author” because they’ve been working in it longer.

      • @[email protected]
        link
        fedilink
        English
        4
        2 days ago

        True, any software can be vulnerable to attack.

        But the difference is that a technical team of software developers can mitigate an attack and patch it. This guy has no tech support other than the AI that sold him the faulty code, which likely assumed he did the proper hardening of his environment (he did not).

        Openly admitting you programmed anything with AI alone is admitting you haven’t taken the basic steps to protect yourself or your customers.

  • @[email protected]
    link
    fedilink
    105
    2 days ago

    Ha, you fools still pay for doors and locks? My house is now 100% done with fake locks and doors, they are so much lighter and easier to install.

    Wait! Why am I always getting robbed lately? It cannot be my fake locks and doors! It has to be weirdos online following what I do.