One of the inventors of Siri, the original AI agent, wants you to “handle with care” when it comes to artificial intelligence. But are we becoming too cautious around AI in Europe and risking our future?

  • orclev@lemmy.world · ↑54 ↓3 · 17 hours ago

    Agentic AI is just a buzzword for letting AI do things without human supervision. It’s absolutely a recipe for disaster. You should never let AI do anything you can’t easily undo, as it’s guaranteed to screw it up at least part of the time. When all it’s screwing up is telling you that glue would make an excellent topping for pizza, that’s one thing, but when it’s emailing your boss that he’s a piece of crap, that’s an entirely different scenario.

    • Quazatron@lemmy.world · ↑5 · 11 hours ago

      I agree with you. I don’t mind local AI searching the web for topics I’m interested in and providing me with news and interesting tidbits. I’m not OK with AI having any kind of permission to run executable code.

    • NuXCOM_90Percent@lemmy.zip · ↑23 ↓5 · 16 hours ago

      Agentic AI is just a buzzword for letting AI do things without human supervision

      No, it isn’t.

      As per IBM (https://www.ibm.com/think/topics/agentic-ai):

      Agentic AI is an artificial intelligence system that can accomplish a specific goal with limited supervision. It consists of AI agents—machine learning models that mimic human decision-making to solve problems in real time. In a multiagent system, each agent performs a specific subtask required to reach the goal and their efforts are coordinated through AI orchestration.

      The key part being the last sentence.

      It’s the idea of moving away from a monolithic (for simplicity’s sake) LLM to one where each “AI” serves a specific purpose. So imagine a case where you have one “AI” to parse your input text and two or three other “AI”s to run different models based upon which use case your request falls into. The result is MUCH smaller models (that can often be colocated on the same physical GPU or even CPU) that are specialized, rather than an Everything model that can search the internet, fail at doing math, and tell you you look super sexy in that Minecraft hat.
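      That routing idea can be sketched in a few lines of plain Python. This is a toy illustration, not any vendor’s API: the keyword-based classifier and all of the agent functions are hypothetical stand-ins for real models.

```python
# Toy sketch of AI orchestration: a small "router" classifies each
# request, then dispatches it to a specialized "agent". Every name
# here is a hypothetical stand-in for a real model or service.

def classify(request: str) -> str:
    # Stand-in for a small intent-classification model.
    if any(ch.isdigit() for ch in request):
        return "math"
    if request.startswith("search:"):
        return "search"
    return "chat"

def math_agent(request: str) -> str:
    return f"math result for {request!r}"

def search_agent(request: str) -> str:
    return f"search results for {request!r}"

def chat_agent(request: str) -> str:
    return f"chat reply to {request!r}"

AGENTS = {"math": math_agent, "search": search_agent, "chat": chat_agent}

def orchestrate(request: str) -> str:
    # The orchestrator only coordinates: pick an agent, run it.
    return AGENTS[classify(request)](request)
```

      Each “agent” stays small and testable on its own, and the orchestration layer is just ordinary dispatch code.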

      And… anyone who has ever done any software development (web or otherwise) can tell you: that is just (micro)services. Especially when so many of the “agents” aren’t actually LLMs and are just bare-metal code or databases or what have you. Just like how any senior engineer worth their salt can point out that it isn’t fundamentally different from calling a package/library instead of rolling your own solution for every component.

      The idea of supervision remains the same. Some orgs care about it; others don’t. Just like some orgs care about making maintainable code and others don’t. And one of the bigger buzzwords these days is “human in the loop”, specifically to provide supervision/training data.

      But yes, it is very much a buzzword.

      • Catoblepas@piefed.blahaj.zone · ↑15 ↓2 · 16 hours ago

        Hat on top of a hat technology. The underlying problems with LLMs remain unchanged, and “agentic AI” is basically a marketing term to make people think those problems are solved. I realize you probably know this, I’m just kvetching.

        • Auth@lemmy.world · ↑5 ↓5 · 15 hours ago

          Not really. By breaking down the problem, you can tailor the models to the task. There is a lot of work going into this stuff, and there are ways to turn down the randomness to get more consistent outputs for simple tasks.
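          “Turning down the randomness” usually means lowering the sampling temperature. A minimal sketch of temperature-scaled sampling (plain Python, not any particular library’s API):

```python
import math
import random

def sample(logits, temperature=1.0, rng=random):
    # Temperature-scaled softmax sampling: as temperature -> 0 this
    # approaches a deterministic argmax; large temperatures approach
    # a uniform draw over all candidates.
    scaled = [l / max(temperature, 1e-9) for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random() * total
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e
        if r <= acc:
            return i
    return len(exps) - 1
```

          At a temperature near zero, the highest-logit candidate wins essentially every time; that is the “more consistent outputs” knob.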

          • MangoCats@feddit.it · ↑7 · 15 hours ago

            turn down the randomness to get more consistent outputs for simple tasks.

            This is a tricky one… if you can define good success/failure criteria, then the randomness, coupled with an accurate measure of success, is how “AI” like AlphaGo learns to win games really, really well.

            In using AI to build computer programs and systems, if you have good tests for what “success” looks like, you’d rather have a fair amount of randomness in the algorithms trying to make things work: without it, when they fail, they just end up stuck, out of ideas.
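            The simplest version of “randomness plus a good success measure” is random search against an explicit score function. A toy sketch, with all names made up for illustration:

```python
import random

def random_search(score, propose, iterations=1000, rng=random):
    # Random exploration guided by an explicit measure of success:
    # propose candidates at random, keep whichever scores best.
    best = propose(rng)
    best_score = score(best)
    for _ in range(iterations):
        candidate = propose(rng)
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best, best_score
```

            With an accurate score function, the randomness is the engine of progress; without one, it is just noise.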

          • floquant@lemmy.dbzer0.com · ↑3 · 12 hours ago

            You’re both right imo. LLMs and every subsequent improvement are fundamentally ruined by marketing heads like oh so many things in the history of computing, so even if agentic AI is actually an improvement, it doesn’t matter because everyone is using it to do stupid fucking things.

            • Auth@lemmy.world · ↑1 · 11 hours ago

              Yeah, stringing five ChatGPTs together and telling them “you are a scientist, you are a product lead engineer”, etc. is dumb, but chaining ChatGPT into a coded tool, into a vision model, into a specific small LLM is an interesting new way to build workflows for complex and dynamic tasks.
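              That kind of chain is easy to prototype with plain functions standing in for each stage. Everything below is hypothetical scaffolding; in a real workflow each function would be a model or tool call:

```python
# Hypothetical pipeline: LLM -> coded tool -> vision model -> small LLM.
# Each stage is an ordinary function, so the "workflow" is just
# composition, with plain code between model calls.

def llm_extract_urls(text: str) -> list[str]:
    # Stand-in for an LLM that pulls image URLs out of free text.
    return [word for word in text.split() if word.startswith("http")]

def fetch_image(url: str) -> str:
    # Stand-in for a coded tool (an HTTP fetch in a real system).
    return f"<bytes of {url}>"

def vision_caption(image: str) -> str:
    # Stand-in for a vision model producing a caption.
    return f"caption for {image}"

def summarize(captions: list[str]) -> str:
    # Stand-in for a small task-specific LLM.
    return "; ".join(captions)

def workflow(text: str) -> str:
    urls = llm_extract_urls(text)
    images = [fetch_image(u) for u in urls]
    captions = [vision_caption(img) for img in images]
    return summarize(captions)
```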

    • tidderuuf@lemmy.world · ↑12 ↓1 · 17 hours ago

      The way I see Agentic AI is it’s just a dumber customer service agent that is ready and willing to be scammed and phished. Not my fault if these companies are too stupid to put in proper guardrails.

    • Grimy@lemmy.world · ↑2 · 12 hours ago

      I think there’s a difference between letting it do some things and letting it finish something.

      It shouldn’t be the one clicking the send button because everything needs to be verified, but it’s fine to have it surf the internet or turn a request into a set number of tasks with a to-do list.

      Writing an email with it is a no-go for me though; I avoid it the moment it comes to actually communicating with someone. Using AI for that strikes me as patronizing.

    • MangoCats@feddit.it · ↑1 ↓9 · 15 hours ago

      Mechanical key-based door lock cylinders are “Agentic AI” - they decide whether or not to allow the plug to turn based on the key (code) inserted. They’re out there, in their billions around the world, deciding whether or not to allow people access through doorways WITHOUT HUMAN SUPERVISION!!! They can be easily hacked, they are not to be trusted!!! Furthermore, most key-lock users have no idea how the thing really works, they just stick the key in and try to turn it.