“I’ve been saving for months to get the Corsair Dominator 64GB CL30 kit,” one beleaguered PC builder, u/RaidriarT, wrote on Reddit. “It was about $280 when I looked. Fast forward today on PCPartPicker, they want $547 for the same kit? A nearly 100% increase in a couple months?”

  • brucethemoose@lemmy.world · 2 days ago

    I just got a 2x64GB 6000 kit before its price skyrocketed by like $130. I saw other kits going up, but had no clue I timed it so well.

    …Also, why does “AI” need so much CPU RAM?

    In actual server deployments, pretty much all inference work is done in VRAM (read: HBM/GDDR); they could get by with almost no system RAM. And honestly most businesses are too dumb to train anything that extensively. ASICs that would use, say, LPDDR are super rare, and stuff like Hybrid/IGP inference is the realm of a few random folks with homelabs… Like me.
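    A rough way to see this (a sketch, assuming torch + psutil and a CUDA GPU; the 2 GiB tensor is just a stand-in for model weights):

    ```python
    # Weights allocated straight to VRAM barely move the process's system-RAM footprint.
    import os

    import psutil
    import torch

    proc = psutil.Process(os.getpid())
    rss_before = proc.memory_info().rss

    # Stand-in for model weights: 2 GiB of fp16 "parameters" placed directly in VRAM.
    weights = torch.empty(1024, 1024, 1024, dtype=torch.float16, device="cuda")

    print(f"VRAM allocated:   {torch.cuda.memory_allocated() / 2**30:.2f} GiB")
    print(f"system RAM delta: {(proc.memory_info().rss - rss_before) / 2**30:.2f} GiB")
    ```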

    I think ‘AI’ might be an overly broad term for general server buildout.

    • humanspiral@lemmy.ca · 19 hours ago

      why does “AI” need so much CPU RAM

      It doesn’t really, though CPU inference is possible (if slow) at 256+ GB. The problem is that they are making HBM (“AI” RAM) instead of DDR4/5.
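      Back-of-envelope sketch of why it’s slow (all numbers are illustrative assumptions, not benchmarks): token rate is roughly memory bandwidth divided by bytes read per token.

      ```python
      # Rough model: each generated token reads every active parameter once,
      # so tokens/s ~= bandwidth / bytes-per-token. Figures below are assumptions.
      bytes_per_token_gb = 35     # e.g. a quantized model's active weights
      ddr5_gbs = 80               # dual-channel DDR5, ballpark
      hbm3_gbs = 3350             # H100-class HBM3, ballpark

      print(f"CPU/DDR5: ~{ddr5_gbs / bytes_per_token_gb:.1f} tokens/s")
      print(f"GPU/HBM3: ~{hbm3_gbs / bytes_per_token_gb:.1f} tokens/s")
      ```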

    • Kissaki@feddit.org · 1 day ago

      I suspect RAM may become increasingly useful with the shift from pure chat LLMs to connected agents, MCP, and caching results and data for scaling things like public Internet search and services.

      When I think of database server software, a lot of the performance gains come from keeping hot data in RAM. With the expansion of LLM systems and their concerns (backing data, connectedness, the need for optimisation), a shift toward caching and keeping data in RAM suggests itself. These systems are already wasteful/big and operate on a lot of data, so it seems plausible that such a cache would not be small.
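      A toy sketch of that caching idea (`call_llm` is a hypothetical stand-in for an expensive model or search call):

      ```python
      # Keep expensive LLM/tool results in RAM so repeated agent/MCP queries
      # don't recompute them. lru_cache holds everything in process memory.
      from functools import lru_cache

      def call_llm(prompt: str) -> str:
          # hypothetical stand-in for a slow model / search backend
          return f"answer to: {prompt}"

      @lru_cache(maxsize=100_000)  # RAM-resident; size it to available memory
      def cached_answer(prompt: str) -> str:
          return call_llm(prompt)
      ```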

    • tty5@lemmy.world · 2 days ago

      The same memory production capacity can be allocated to DDR5 or to HBM, and OpenAI signed contracts with SK Hynix and Samsung, the two largest RAM manufacturers in the world, buying a significant percentage of next year’s production.

      DDR5 prices started spiking as that deal’s impact propagated through the supply chain. I bought a 2x32GB 6800 CL30 kit for 195 euro 12 days ago. It was 330 euro 4 days later.

      • brucethemoose@lemmy.world · 2 days ago

        …Is it that interchangeable?

        TBH I know little of memory fabs and HBM ICs, but I know (say) TSMC can’t just switch from a power-optimized process to a high-frequency one at the drop of a hat.

        • tty5@lemmy.world · 2 days ago

          Slightly different part, same process. The bigger bottleneck is packaging: HBM is 3D-stacked.

          • brucethemoose@lemmy.world · 2 days ago

            Ah. Yeah. And it’s on the fab to do that.

            I always thought it’d be cool for CPUs to switch to packaged RAM, too. Samsung apparently tried to do it with Wide I/O for mobile ARM stuff, but it never caught on.

            • Frezik@lemmy.blahaj.zone · 22 hours ago

              If I’m following what you mean by packaged RAM, Apple does that. It’s fast, but you can’t upgrade it.

              • brucethemoose@lemmy.world · 22 hours ago

                That’s (as I understand it) a misconception.

                Apple attaches their laptop RAM the same way all smartphones do. It’s a wide bus with LPDDR, which makes it an unusual configuration amongst laptops, but it’s technically conventional. And relatively cheap.

                AMD’s Strix Halo chips are the same. Apple could use LPCAMM to make the memory upgradable if they wanted, they just… don’t.
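                The bandwidth win from a wide bus is just arithmetic; a quick sketch (the configurations are illustrative):

                ```python
                # Peak bandwidth = bus width in bytes * transfers per second.
                def peak_gbs(bus_bits: int, mts: int) -> float:
                    return bus_bits / 8 * mts * 1e6 / 1e9

                print(peak_gbs(128, 6000))  # dual-channel desktop DDR5-6000 -> ~96 GB/s
                print(peak_gbs(512, 6400))  # wide LPDDR5 bus, M-series-Max-style -> ~410 GB/s
                ```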

                When we talk ‘packaging’, we’re talking about putting chips on advanced substrates with denser wiring than you could possibly get on a motherboard (or on a ‘mini’ motherboard, which is kinda what Apple/smartphone RAM sits on), stuff silicon fabs have to do:

                https://www.tsmc.com/english/dedicatedFoundry/services/advanced-packaging

                And HBM falls into this bucket. The way it’s hooked up to the processor is physically different from PC RAM sticks, or Apple’s RAM. This is mostly not done on consumer stuff because it’s very expensive, and most of TSMC’s advanced packaging production capacity is reserved for server stuff.

      • brucethemoose@lemmy.world · 2 days ago

        They can ALL be run in RAM, theoretically. I bought 128GB so I can run GLM 4.5 with the experts offloaded to the CPU, with a custom trellis/K-quant mix; but this is a ‘personal use’ tinkerer setup basically no one but hobbyists will touch.

        Qwen Next is good at that because it has a very low active parameter count.

        …But they aren’t actually deployed that way. They’re basically always deployed on cloud GPU boxes that serve dozens/hundreds of people at once, in parallel.

        AFAIK the only major model actually developed for CPU inference is one of the esoteric Gemma releases, aimed at mobile. And the bitnet experiments, which aren’t very big so far.
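        If you want to try the hybrid approach, here’s a minimal sketch (not my exact trellis/quant setup) using Hugging Face Transformers’ device_map offload; the model ID and memory budgets are placeholders:

        ```python
        # Keep what fits in VRAM, spill the rest of the weights to system RAM.
        from transformers import AutoModelForCausalLM

        model_id = "zai-org/GLM-4.5"  # placeholder: any big MoE checkpoint
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            device_map="auto",                         # let accelerate plan placement...
            max_memory={0: "20GiB", "cpu": "120GiB"},  # ...within these budgets
            torch_dtype="auto",
        )
        ```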

        (In case it’s not obvious, this is my special interest, and I’m happy to ramble on about how to set up ‘niche gaming rig hybrid models’ for anyone interested).