Exactly. Nothing technical about it: they simply produce the token that their trained model rates as statistically most likely to follow a given list of tokens.
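The whole decoding loop really is that small. Here's a minimal sketch of greedy decoding (always take the single most likely next token) using the Hugging Face transformers library and the gpt2 checkpoint, both just illustrative choices:

```python
# Greedy next-token prediction, sketched with an off-the-shelf small model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The cat sat on the", return_tensors="pt").input_ids
for _ in range(5):
    with torch.no_grad():
        logits = model(ids).logits        # one score per vocabulary token, per position
    next_id = logits[0, -1].argmax()      # greedy: pick the single most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
print(tokenizer.decode(ids[0]))
```

(Production systems usually sample from the predicted distribution rather than always taking the argmax, but the principle is the same: score every token, emit one, repeat.)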
Any information contained in their output (beyond the fact that each token is probably the one statistically most likely to appear after the previous ones in the training texts, which I imagine could be useful for philologists) is purely circumstantial, and was already contained in their training data.
There’s no reasoning involved in the process (other than, if we’re feeling optimistic about human intelligence, whatever reasoning went into writing the training texts that predate LLMs), nor any mechanism in the LLM for reasoning to take place.
They are as far from AI as Markov chains were: somewhat more accurate in their next-token predictions, and several orders of magnitude more costly.
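For comparison, here's the Markov chain version of the same job, a toy bigram model that predicts the most likely next token from counts over a corpus (everything here is illustrative):

```python
# A toy bigram Markov chain: predict the most frequent continuation
# of the current token, learned from raw counts over a corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    # argmax over observed continuations, like greedy LLM decoding
    return counts[prev].most_common(1)[0][0]

print(next_token("the"))  # -> 'cat' (the most frequent continuation of 'the')
```

Same interface, same kind of output; the LLM just conditions on a much longer context with a vastly bigger table of weights.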
And their being sold as AI doesn’t make them any closer; it just means the people and companies selling them are scammers.