25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)

  • 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: October 14th, 2024

  • It’s perfectly fine to like something that isn’t art. Hell, it’s perfectly fine to have a definition of art that can include AI; that’s just a framing for talking about the things AI does well vs. the things it doesn’t. I find that where a human can mix different things together in a way that enriches the whole, AI mixes things together in contradictory ways because it lacks human experience. It’s why AI pictures usually come out flat and lifeless, include nonsense details that don’t fit, or include requested details in incongruous ways.

    That said, I only know about Hatsune Miku through my kids. I don’t really know anything about that specifically.



  • They aren’t “self-aware” at all. These thinking models spend a lot of tokens coming up with chains of reasoning. They focus on the reasoning first, and their reasoning primes the context.

    Like if I asked you to compute the area of a rectangle, you might first say to yourself: “Okay, there’s a formula for that: L×W. This rectangle is 4 by 5, so the calculation is 4×5, which is 20.” They use tokens to delineate the “thinking” from their response and only give you the response, but most will also show the thinking if you want.
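    The delineation is just string mechanics. Here’s a minimal sketch in Python, assuming the thinking is wrapped in `<think>` tags (real models use their own special tokens for this):

```python
import re

def split_thinking(raw: str) -> tuple[str, str]:
    """Separate delimited 'thinking' tokens from the final answer.

    The <think>...</think> tags are an assumption for illustration;
    each model family has its own delimiter tokens.
    """
    match = re.search(r"<think>(.*?)</think>", raw, re.DOTALL)
    thinking = match.group(1).strip() if match else ""
    # Whatever is left outside the tags is what the user is shown.
    answer = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
    return thinking, answer
```

    So `split_thinking("<think>4x5 is 20</think>The area is 20.")` hands the wrapper both pieces, and the UI decides whether to show the first one.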

    In contrast, if you ask an AI how it arrived at an answer after it gives it, it either needs to still have the thinking in context or it is 100% bullshitting you. The reason injecting a thought affects the output is because that injected thought goes into the context. It’s like if you’re trying to count cash and I shout numbers at you: you might keep your focus on the task, or the numbers might throw off your count.
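    Put another way, the model only ever “knows” what’s in the visible context. A toy sketch (the structure and names here are illustrative, not any real API):

```python
def build_prompt(history, keep_thinking=False):
    """Assemble the text the model actually sees on the next turn.

    Hidden 'thinking' turns are dropped unless explicitly kept, so a
    later "how did you arrive at that?" question gets answered with
    no access to the original reasoning at all.
    """
    visible = [m for m in history if keep_thinking or m["role"] != "thinking"]
    return "\n".join(f"{m['role']}: {m['text']}" for m in visible)
```

    If the thinking turn was discarded, any “explanation” the model produces is a fresh prediction, not a recollection.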

    Literally all LLMs do is predict tokens, but we’ve gotten pretty good at finding more clever ways to do it. Most of the advancements in capabilities have been very predictable. I had a crude Google-augmented context before ChatGPT released browsing capabilities, for instance. Tool use is just a low-randomness, high-confidence model that the wrapper uses to generate shell commands, which it then runs. That’s why you can ask it to do a task 100 times and it’ll execute correctly 99 times and then fail: it got a bad generation.
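    That loop can be sketched in a few lines, with a stub standing in for the low-temperature model call:

```python
import subprocess

def run_tool_call(generate_command):
    """Tool-use sketch: ask the 'model' (a stub here) for a shell
    command, execute it, and return the output that would be fed
    back into the context. A bad generation at this step is exactly
    the 1-in-100 failure described above.
    """
    command = generate_command()  # in reality: a constrained LLM call
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout.strip()
```

    `run_tool_call(lambda: "echo hello")` returns `"hello"`; swap the lambda for a model call and you have the skeleton of every agent wrapper.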

    My point is we are finding very smart ways of using this token prediction, but in the end that’s all it is. And something many researchers shockingly fail to grasp is that by putting anything into context—even a question—you are biasing the output. It simply predicts how it should respond to the question based on what is in its context. That is not at all the same thing as answering a question based on introspection or self-awareness. And that’s obviously the case because their technique only “succeeds” 20% of the time.

    I’m not a researcher. But I keep coming across research like this and it’s a little disconcerting that the people inventing this shit sometimes understand less about it than I do. Don’t get me wrong, I know they have way smarter people than me, but anyone who just asks LLMs questions and calls themselves a researcher is fucking kidding themselves.

    I use AI all the time. I think it’s a great tool and I’m investing a lot of my own time into developing tools for my own use. But it’s a bullshit machine that just happens to spit out useful bullshit, and people are desperate for it to have a deeper meaning. It… doesn’t.