• TrackinDaKraken@lemmy.world
    5 days ago

    Which is why it should only be used for art.

    I don’t believe the billionaire dream of robot slaves is going to work nearly as soon as they’re promising each other. They want us to buy into the dream without telling us that we’d never be able to afford a personal robot; they aren’t for us, and they don’t want us to have them. The poors are the slaves; they don’t get slaves. It’s all lies — we’re not part of the future we’re building for them.

    • jedibob5@lemmy.world
      5 days ago

      should only be used for art

      No, churning out uncanny valley slop built on mass IP theft ain’t it, either. Personally I think AI is best used for simulations and statistical models of engineering problems, where it can iteratively find optimized solutions faster and sometimes more accurately than humans. The focus on “generative AI” and LLMs trying to get computers to act like humans is incredibly pointless, IMO. Let computers do what computers are good at, and humans do what humans are good at.

      • groet@feddit.org
        5 days ago

        LLMs are an incredible interface to other systems. Why learn a new query language to get information when you can ask in natural language, and the AI translates your question into the system’s language, does the lookup, and then translates the result back into natural language? The important part is that the AI never answers your question itself; it only translates between human language and system language.

        Use it as a language machine and never as a knowledge machine!
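A minimal sketch of that pattern, with the LLM step faked by keyword matching (a real version would prompt a model there; the inventory data and function names are made up for illustration):

```python
# "Language machine, not knowledge machine": the model only translates
# between natural language and a system query; the answer itself comes
# from a deterministic lookup, never from the model.

INVENTORY = {"widgets": 42, "gadgets": 7}  # the system of record

def translate_to_query(question: str) -> str:
    """Stand-in for an LLM call that maps natural language to a
    system query. Here it is faked with simple keyword matching."""
    for item in INVENTORY:
        if item in question.lower():
            return f"COUNT {item}"
    raise ValueError("could not translate question")

def execute(query: str) -> int:
    """Deterministic system lookup -- the only source of facts."""
    _, item = query.split()
    return INVENTORY[item]

def answer(question: str) -> str:
    query = translate_to_query(question)  # language -> system language
    result = execute(query)               # system produces the answer
    return f"There are {result} {query.split()[1]}."  # -> language

print(answer("How many widgets do we have?"))  # There are 42 widgets.
```

If the translation step fails, the pipeline errors out loudly instead of the model improvising an answer — which is the whole point of the division of labor.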

        • jedibob5@lemmy.world
          5 days ago

          AI’s tendency to hallucinate means that for it to be actually reliable, a human needs to double-check all of its output. If it is being used to acquire and convey information of any kind to the prompter, you might as well just skip the AI and find the information manually, as you’d have to do that anyway to validate what it told you.

          And AI hallucinations are a side effect of the fundamental way in which generative AI works - they will never be 100% accounted for. When an AI generates text, it is simply predicting what word is likely to come next based on its prompt in relation to its training data. While this predictive ability has become remarkably sophisticated within the last few years (more than I thought it ever would, tbh), it is still only a predictive text generator. It’s not “translating,” “understanding,” or “comprehending” anything about whatever subject it has been asked about - it is merely predicting the likelihood of the next word in its response based on its training data.
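          A toy illustration of that "predict the next word" mechanism — a bigram model that just counts which word follows each word in a tiny made-up corpus. Real LLMs use learned neural weights over enormous corpora, but the principle (emit the likeliest continuation, with no model of truth) is the same:

```python
# Toy next-word predictor: count follower words in a corpus, then
# always emit the most frequent one. It has no notion of whether its
# output is *true* -- only of what is statistically likely.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most common word seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" ("cat" follows "the" twice, "mat" once)
```

Scaling this up with neural networks makes the predictions vastly more fluent, but it never turns prediction into comprehension — which is why hallucinations can be reduced but not designed away.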