

An LLM will only know what it knows.
AGI will be able to come up with novel information or work through things it's never been trained on.


The poster motto should be:
IT’S. NEVER. LUPUS.


That’s how Gmail worked. Things never got released; once a username is taken, it’s gone.


Russian Owned News: “NATO bad!”
You’re spot on with all of that. Context windows have a lot to do with the unhinged behavior right now… but it’s also a fundamental trait of how LLMs work.
For example, you can tell it to refer to you by a specific name, and once it stops, you know the context window has overrun and it’ll go off the rails soon (a rough sketch of why is below)… The newer chatbots have mitigations in place, but it still happens a lot.
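Here’s a minimal sketch of that failure mode, assuming a naive sliding-window chat client. The build_prompt function, the word-based token counter, and the messages are all made up for illustration:

```python
def build_prompt(system_msg, history, max_tokens, count_tokens):
    """Keep only the newest messages that fit the token budget (sliding window)."""
    budget = max_tokens - count_tokens(system_msg)
    kept = []
    for message in reversed(history):  # walk newest -> oldest
        cost = count_tokens(message)
        if cost > budget:
            break  # this message and everything older falls out of the prompt
        kept.append(message)
        budget -= cost
    return [system_msg] + list(reversed(kept))

# Crude stand-in tokenizer: one word ~ one token.
count = lambda text: len(text.split())

history = [
    "please refer to me as Captain",   # the name instruction, oldest turn
    "what's the weather like today?",
    "now tell me a long story",
]
print(build_prompt("You are a helpful assistant.", history,
                   max_tokens=12, count_tokens=count))
# Only the newest turn still fits; the name instruction is silently dropped.
```

Once the history outgrows the budget, the oldest turns silently fall out of the prompt, so an instruction that only appeared early on (like the custom name) just stops existing as far as the model is concerned.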
These are non-deterministic predictive text generators.
Any semblance of novel thought in a modern LLM comes down to two things:
Model “temperature”: a setting that controls how much randomness goes into picking each next token (see the sketch after this list). At a value of 0 it deterministically generates whatever best follows what you gave it. Note that output often breaks down when you try this.
It has more information than you: I’ve had interesting interactions at work where it came up with genuinely good ideas. Those are all accounted for by MCPs letting it search and piece things together, or by post-training refinements and catalog augmentation, though.
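On the temperature point, here’s a rough sketch of what the setting actually does during decoding, assuming standard temperature-scaled softmax sampling. The toy logits and vocabulary size are invented:

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Pick the next token id from raw next-token logits."""
    if temperature == 0:
        # Greedy decoding: always take the single most likely token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Divide logits by temperature, then softmax into probabilities.
    # Lower temperature sharpens the distribution; higher flattens it.
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # subtract max for stability
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

# Toy distribution over a 4-token vocabulary.
logits = [2.0, 1.0, 0.5, -1.0]
print(sample_with_temperature(logits, 0))    # deterministic: always token 0
print(sample_with_temperature(logits, 0.7))  # mostly token 0, occasionally others
print(sample_with_temperature(logits, 2.0))  # much more random
```

At temperature 0 this collapses to greedy argmax decoding, which is what makes the output “exact” but also prone to the repetitive, broken output mentioned above.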