LLMs are an incredible interface to other systems. Why learn a new system language to get information when you can ask in natural language, have the AI translate your question into the system language, do the lookup, and translate the result back into natural language? The important part is that the AI never answers your question itself; it just translates between human language and system language.
Use it as a language machine and never as a knowledge machine!
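Concretely, that pattern looks something like the sketch below: natural language in, SQL out, database does the lookup, natural language back. The `llm()` function is a hypothetical placeholder for whatever model client you actually use, not a real API; the point is that the facts come from the database, and the model only translates in and out.

```python
import sqlite3

# Hypothetical stand-in for whatever LLM client you use; not a real API.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug your model call in here")

def ask(question: str, db: sqlite3.Connection, schema: str) -> str:
    # 1. Language machine: translate the English question into the
    #    system's language (here, SQL).
    sql = llm(
        f"Schema:\n{schema}\n\n"
        f"Translate this question into one read-only SQLite query. "
        f"Output only the SQL:\n{question}"
    )
    # 2. The database, not the model, supplies the actual facts.
    rows = db.execute(sql).fetchall()
    # 3. Language machine again: translate the raw rows back into English.
    return llm(
        f"Question: {question}\nQuery result: {rows}\n"
        f"Answer using only this result."
    )
```

Even here the generated SQL can itself be wrong, so the translation step still deserves a sanity check; it just shrinks the surface you have to verify from "arbitrary facts" down to "one query".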
AI’s tendency to hallucinate means that for it to be actually reliable, a human needs to double-check all of its output. If it’s being used to find and relay information of any kind, you might as well skip the AI and look the information up manually, since you’d have to do that anyway to validate what it told you.
And AI hallucinations are a side effect of the fundamental way generative AI works - they will never be 100% eliminated. When an AI generates text, it is simply predicting which word is likely to come next, given its prompt and its training data. While this predictive ability has become remarkably sophisticated in the last few years (more than I ever thought it would, tbh), it is still only a predictive text generator. It’s not “translating,” “understanding,” or “comprehending” anything about the subject it’s been asked about; it is merely predicting the likelihood of the next word in its response.
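To make that concrete, here’s a toy next-word predictor. It’s a bigram (Markov-chain) model, vastly simpler than a real transformer, but the core loop is the same: pick the next word from a distribution conditioned on what came before. Nothing in it looks anything up or understands anything; the corpus and all names are made up for illustration.

```python
import random

# Toy "training data" and "model": just counts of which word follows which.
corpus = "the cat sat on the mat and the cat ate the fish".split()
follows: dict[str, list[str]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(prompt: str, n_words: int = 6) -> str:
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # never seen this word in training; nothing to predict
        # Sample the next word in proportion to how often it followed the
        # previous one in training - prediction, not comprehension.
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the cat"))  # output varies, e.g. "the cat ate the fish"
```

Scale the table up to billions of parameters and condition on the whole context instead of one word, and you get something that sounds fluent - but the mechanism is still “most plausible continuation”, which is exactly why it can produce confident nonsense.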