Hello, I’m trying to understand what has slowed down the progress of CPUs. Is it a strategic/political choice, or a real technical limitation?

  • Benjamin@jlai.luOP
    1 year ago

Quite right, the argument seemed coherent to me until I looked at the performance of recent GPUs; there, it seems the limits no longer exist.

I think the question is legitimate. My hypothesis is that the industry is trying to restrict the computing power of consumer machines (for military defence interests?), but the very large market for video games and 3D graphics constantly demands more computing power, and machine manufacturers are obliged to keep up with that demand.

What confuses me, I think, is that about 15 years ago I read a serious technical article about a 70 GHz CPU core prototype.

    • BartyDeCanter@lemmy.sdf.org
      1 year ago

First, GPUs and CPUs are very different beasts. GPU workloads are by definition highly parallel, so a GPU has an architecture where it is much easier to just throw more cores at the problem, even if you don’t make each core faster. That makes power less of an issue. Take a look at GPU clock rates vs CPU clock rates.
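The “throw more cores at it” point can be sketched with a toy data-parallel workload. The per-pixel `shade` function below is hypothetical (my stand-in for a shader, not from the thread), and threads stand in for GPU lanes; the point is the shape of the work, where every element is processed independently:

```python
# Data-parallel, GPU-style workload: the same simple operation runs
# independently on every element, so adding more workers scales it.
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    # Hypothetical per-pixel operation (stand-in for a shader).
    return min(255, pixel * 2)

pixels = list(range(100))
with ThreadPoolExecutor(max_workers=4) as pool:
    result = list(pool.map(shade, pixels))
# No element waits on any other, which is why a GPU can spread
# work like this across thousands of cores at once.
```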

CPU workloads tend to have much, much less parallelism, so there are diminishing returns on adding more cores.
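That diminishing return is captured by Amdahl’s law: if only a fraction `p` of a workload can run in parallel, extra cores speed up only that fraction. A minimal sketch (the fractions below are illustrative, not measured):

```python
# Amdahl's law: speedup from n cores when a fraction p of the
# work is parallelizable.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A 99%-parallel (GPU-like) workload keeps gaining from more cores;
# a 50%-parallel (CPU-like) one can never exceed a 2x speedup, no
# matter how many cores you add.
for p in (0.99, 0.50):
    for n in (4, 64, 1024):
        print(f"p={p:.2f}, cores={n}: {speedup(p, n):.1f}x")
```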

Second, GPUs have started to show smaller year-over-year lift. Your chart is almost a decade out of date. Take a look at, say, a 2080 vs a 3080 vs a 4080 and you’ll see that the overall compute lift is shrinking.

    • Placid@lemmy.world
      1 year ago

A GPU isn’t limited by die size and system architecture the way a CPU is. GPU cores each do simple calculations, so scaling is almost as simple as putting more of them on a board, which can be as large as the manufacturer desires.

Read about the differences in this Nvidia blog and you’ll see that they’re wildly different. It will make sense why they’re in different charts in the source you originally provided.