• Hammock_tann@lemmy.world · 4 days ago

    Technically, LLMs aren’t AI. What they do is basically predict relationships between words. They can’t reason, count, or learn.

    • leftzero@lemmy.dbzer0.com · 4 days ago

      Exactly. Nothing technical about it: they simply produce the token that is statistically most likely (according to their training data) to follow a given list of tokens.

      Any information contained in their output (other than the fact that each token is probably the most statistically likely to appear after the previous ones in the texts used as their training data, which I imagine could be useful for philologists) is purely circumstantial, and was already contained in that training data.

      There’s no reasoning involved in the process (other than possibly in the writing of the texts in their training data, if those texts predate LLMs and we’re feeling optimistic about human intelligence), nor any mechanism in the LLM for reasoning to take place.

      They are as far from AI as Markov chains were, just slightly more correct in their token likelihood predictions and several orders of magnitude more costly.
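      To make the comparison concrete, here's a minimal sketch of that kind of "most likely next token" lookup, done with a toy bigram Markov chain; the corpus and names are made up purely for illustration, and real LLMs use learned neural networks over huge token vocabularies rather than a simple count table:

      ```python
      from collections import Counter, defaultdict

      # Toy "training data" (stand-in corpus, purely illustrative).
      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count how often each word follows each other word (a bigram table).
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def most_likely_next(word):
          """Return the word that most often follows `word` in the corpus, or None."""
          counts = following.get(word)
          return counts.most_common(1)[0][0] if counts else None

      # Greedy generation: always emit the statistically most likely continuation.
      word, output = "the", ["the"]
      for _ in range(5):
          word = most_likely_next(word)
          if word is None:
              break
          output.append(word)

      print(" ".join(output))  # e.g. "the cat sat on the cat"
      ```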

      And their being sold as AI doesn’t make them any closer; it just means the people and companies selling them are scammers.

    • survirtual@lemmy.world · 4 days ago

      “Technically”? Wrong word. By every technical measure, they are 100% AI.

      What you might be trying to say is they aren’t AGI (artificial general intelligence). I would argue they might just be AGI. For instance, they can reason about what they are better than you can, while also being able to draw a pelican riding a unicycle.

      What they certainly aren’t is ASI (artificial super-intelligence). You can say they technically aren’t ASI and you would be correct. An ASI would be capable of improving itself faster than a human could.

      • survirtual@lemmy.world · 4 days ago

        Careful, my other comment got removed because of a witty but still insightful dig.

        They are very sensitive here about how the AI isn’t really AI.