• wewbull@feddit.uk

    We do, depending on how you count it.

    There are two major widths in a processor: the data register width and the address bus width, and even that is not the whole story. If you go back to a processor like the 68000, the classic 16-bit processor, it has:

    • 32-bit data registers
    • 16-bit ALU
    • 16-bit data bus
    • 32-bit address registers
    • 24-bit address bus

    Some people called it a 16/32 bit processor, but really it was the 16-bit ALU that classified it as 16-bits.

    If you look at a Zen 4 core it has:

    • 64-bit data registers
    • 512-bit AVX data registers
    • 6 x 64-bit integer ALUs
    • 4 x 256-bit AVX ALUs
    • 2 x 128-bit data bus to DDR5 (dual edge 64-bit)
    • ~40-bits of addressable physical RAM

    So, what do you want to call this processor?

    64-bit (integer width), 128-bit (physical data bus width), 256-bit (widest ALU) or 512-bit (widest register width)? Do you want to multiply those numbers up by the number of ALUs in a core? …by the number of cores on a piece of silicon?

    Me, I’d say Zen4 was a 256-bit core, but you could argue any of the above numbers.

    Basically, it’s a measurement that lost all meaning so people stopped using it.
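
    To make the ambiguity concrete, here’s a minimal C sketch (my own illustration, assuming GCC or Clang with -mavx512f on an AVX-512-capable part) where the same “64-bit” core does a 64-bit scalar add right next to a 512-bit vector add:

    ```c
    /* Sketch: 64-bit scalar add vs 512-bit vector add on the same core.
       Assumes GCC/Clang, compiled with -mavx512f, run on an AVX-512 CPU. */
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t a = 1, b = 2;
        uint64_t c = a + b;                      /* one 64-bit integer ALU op */

        __m512i va = _mm512_set1_epi64(1);       /* 512-bit vector register   */
        __m512i vb = _mm512_set1_epi64(2);
        __m512i vc = _mm512_add_epi64(va, vb);   /* eight 64-bit adds at once */

        uint64_t out[8];
        _mm512_storeu_si512(out, vc);
        printf("scalar: %llu, vector lane 0: %llu\n",
               (unsigned long long)c, (unsigned long long)out[0]);
        return 0;
    }
    ```

    Same silicon; whether you call that 64-bit or 512-bit depends on which of those lines you point at.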

    • LeFantome@programming.dev

      I would say that you make a decent argument that the ALU has the strongest claim to the “bitness” of a CPU. In that way, we are already beyond 64 bit.

      For me though, what really defines a CPU is the software that runs natively. The Zen4 runs software written for the AMD64 family of processors; that is, it runs 64-bit software. This software will not run on the “32-bit” x86 processors that came before it (like the K5, K6, and original Athlon). If AMD released an AMD128 instruction set, it would not run on the Zen4, even though the Zen4 may technically have enough hardware to do so.

      The Motorola 68000 only had a 16-bit ALU but was able to run the same 32-bit software that ran on later Motorola processors that were truly 32-bit. Software written for the 68000 was essentially still native on processors sold as late as 2014 (35 years after the 68000 was released). This was not some kind of compatibility mode; these processors were still using the same 32-bit ISA.

      The Linux kernel that runs on the Zen4 will also run on 64 bit machines made 20 years ago as they also support the amd64 / x86-64 ISA.

      Where the article is correct is that there does not seem to be much push to move on from 64 bit software. The Zen4 supports instructions to perform higher-bit operations but they are optional. Most applications do not rely on them, including the operating system. For the most part, the Zen4 runs the same software as the Opteron ( released in 2003 ). The same pre-compiled Linux distro will run on both.
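
      As a rough sketch of what “runs 64-bit software” means in practice (assuming GCC or Clang; these predefined macros are compiler conventions, not part of the C standard):

      ```c
      /* Sketch: the ISA a binary targets is fixed at compile time.
         The macros below are GCC/Clang conventions. */
      #include <stdio.h>

      int main(void) {
      #if defined(__x86_64__)
          puts("built for the 64-bit amd64 / x86-64 ISA");
      #elif defined(__i386__)
          puts("built for the 32-bit x86 ISA");
      #else
          puts("built for some other ISA");
      #endif
          printf("pointer width: %zu bits\n", sizeof(void *) * 8);
          return 0;
      }
      ```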

    • Buffalox@lemmy.world

      At less than a tenth the length, this is actually a better explanation than the article, and it corrects the premise right at the very beginning: we do.
      If you absolutely had to put a bit width on the Zen 4, the 2x128-bit data bus is probably the best single measure, totaling 256 bits IMO.

      • wewbull@feddit.uk

        Even then, at what point do you measure it? The DDR interface is likely much narrower than the interfaces between cache levels. Where does the core end and the memory begin?

        • Buffalox@lemmy.world

          Yes, you are 100% right, and I did consider the level 3 cache as a better measure, because it allows communication between cores without the need to go through RAM, and cache generally has a high hit rate. But that number was surprisingly difficult to find, so I settled on the data bus.
          Anyway, it would be absolutely fair to call it 256-bit by more than one measure. But it certainly isn’t just 64-bit, because it has 512-bit instructions, so the instruction set isn’t limited to 64 bits. Even if someone were stubborn enough to claim the general instruction set is 64-bit, it can decode and execute two 64-bit instructions simultaneously per core, making it at least 128-bit by any measure.

    • Blackmist@feddit.uk

      I gave up trying to figure out what the “bitness” of CPUs was around the time the Atari Jaguar came out and people described it as 64-bit because it had a 32-bit graphics chip plus a 32-bit sound chip.

      It’s been mostly marketing bollocks since forever.

      • wewbull@feddit.uk

        I expect the engineers are telling the marketing people: “No! You can’t do that. You’ll scare everyone into thinking it’s incompatible.”

        • Vilian@lemmy.ca

          32-bit is compatible with 64-bit, so why wouldn’t 128-bit be too?

          • Peffse@lemmy.world

            64-bit cut out 16-bit compatibility, so I’m guessing the fear is that 128-bit would cut out 32-bit.

    • ulterno@lemmy.kde.social

      I see it as the number of possible instructions.

      As in, the 8-bit 8085 had 2^8 possible instructions, while 32-bit ones had 2^32, which was already enough possible combinations that we couldn’t come up with enough functions to fill the provided space.

      • wewbull@feddit.uk

        So “instruction encoding length”.

        I don’t think that works, though. For something like RISC-V, RV64 has a maximum 32-bit instruction encoding. For x86-64, those original 8-bit instructions still exist and take up a huge part of the encoding space, cutting the number of n-bit instructions to more like 2^(n-7).

        • ulterno@lemmy.kde.social

          RV64 has a maximum 32-bit instruction encoding

          I kinda expected that to happen, since there’s already enough space to fit all the required functions. So yeah, even this is not a good enough criterion for the bit rating.

          those original 8-bit intructions still exist, and take up a huge part of the encoding space, cutting the number of n-bit instructions to more like 2^(n-7)

          err… they are still instructions, right? And they are implemented. I don’t see why you would subtract them from the number of instructions.

          • wewbull@feddit.uk

            If the 8088 had used all but one of the 256 8-bit values as legal instructions, all your new instructions after that point would need to start with that one unused value, and then you can add a maximum of 256 instructions by using the next byte. The end result is that 511 instructions can be encoded in 16 bits.
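
            A toy sketch of that arithmetic (assuming exactly one byte value is kept free as an escape prefix):

            ```c
            /* Sketch: opcode space with one escape byte left free. */
            #include <stdio.h>

            int main(void) {
                int one_byte = 256 - 1;   /* 255 direct one-byte opcodes   */
                int two_byte = 256;       /* escape byte + any second byte */
                printf("opcodes encodable in 16 bits: %d\n",
                       one_byte + two_byte);        /* prints 511 */
                return 0;
            }
            ```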

            • ulterno@lemmy.kde.social

              Ah right! I forgot about that.

              So you either have to pad all instructions in all previous binaries, or reduce the number of available instructions in the arch update.

  • just_another_person@lemmy.world

    Is this a question?

    We haven’t even come close to exhausting 64-bit addresses yet. If you think the bit number makes things faster, it’s technically the opposite.

    • jwr1@kbin.earthOP

      It’s a link to an article I found interesting. It basically details why we’re still using 64-bit CPUs, just as you mentioned.

    • Technus@lemmy.zip

      We don’t even have true 64-bit addressing yet. x86-64 uses only 48 bits of a 64 bit address and 64-bit ARM can use anything between 40 and 52 depending on the specific configuration.
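
      As an illustration of those unused upper bits: some software even stashes data up there. Here is a non-portable sketch (it assumes a Linux-style x86-64 user space where the top 16 bits of a pointer are zero; the tag value is made up):

      ```c
      /* Sketch: stashing a tag in the unused top 16 bits of an x86-64
         user-space pointer. Illustrative only - not portable, and real
         code must strip the tag before dereferencing. */
      #include <stdint.h>
      #include <stdio.h>

      #define TAG_SHIFT 48
      #define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)

      int main(void) {
          int value = 42;
          uintptr_t raw = (uintptr_t)&value;
          uintptr_t tagged = raw | (0xABULL << TAG_SHIFT); /* add a tag */
          int *p = (int *)(tagged & ADDR_MASK);            /* strip it  */
          printf("tag=0x%llx value=%d\n",
                 (unsigned long long)(tagged >> TAG_SHIFT), *p);
          return 0;
      }
      ```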

    • Cethin@lemmy.zip

      Yeah, 64 bit handles almost all use cases we have. Sometimes we want double the precision (a double) or length (a long), but we can do that without being 128-bit. It’s harder to do half. Sure, it’d be slightly faster for some things, but it’s not significant.
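
      For instance, a minimal sketch (assuming GCC or Clang, which expose a 128-bit integer type as an extension on 64-bit targets) of doing 128-bit math on today’s 64-bit CPUs:

      ```c
      /* Sketch: 128-bit integer math on a 64-bit CPU via a compiler
         extension; it compiles down to pairs of 64-bit operations. */
      #include <stdio.h>

      int main(void) {
          unsigned __int128 x = (unsigned __int128)1 << 100; /* > 2^64 */
          unsigned __int128 y = x * 3 + 7;

          /* printf has no __int128 format, so print it as two halves */
          printf("high: %llu low: %llu\n",
                 (unsigned long long)(y >> 64), (unsigned long long)y);
          return 0;
      }
      ```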

        • henfredemars@infosec.pub

          And we have wide instructions that can process this data, such as for multimedia applications.

          Addressing and memory size have been the historic motivators for wider registers, but I probably won’t see the need for 128 bits in my lifetime.

      • Justin@lemmy.jlh.name

        There are plenty of instructions for processing integer and FP numbers from 8 bits to 512 bits with a single instruction and register. There’s been a lot of work on packed math instructions for neural network inference.

    • Voroxpete@sh.itjust.works

      Is this a question?

      For the people who don’t know the answer? Yes.

      Not everything you see is intended for your consumption. Let people enjoy learning things.

      • Cocodapuf@lemmy.world

        I totally agree. I know a teacher who likes to say:

        “I believe there really is no such thing as a dumb question. As long as it’s an honest question (not rhetorical or sarcastic), then it’s a genuine request for more information. So even if it’s coming from a place of extreme ignorance, asking a question is an attempt to learn something, and the effort should be applauded.”

    • otp@sh.itjust.works

      Is this a question?

      Woah, meta.

      Yes, it is.

      This is not a question, though.

  • hades@lemm.ee

    We used to ride bicycles when we were children. Then we started driving cars. Bicycles have two wheels, cars have four. Eight wheels seems to be the logical next step, so why don’t we drive eight-wheel vehicles?

  • ArbiterXero@lemmy.world

    32-bit CPUs having difficulty accessing more than 4 GB of memory was exclusively a Windows problem.

    • aard@kyu.de

      You still had a 4 GB memory limit per process, as well as a total memory limit of 64 GB. The first one especially was a problem for Java apps before AMD introduced 64-bit extensions, and a reason to use Sun servers for that.

      • ArbiterXero@lemmy.world

        Yeah I acknowledged the shortcomings in a different comment.

        It was a duct tape solution for sure.

        • Blue_Morpho@lemmy.world

          Your other posts didn’t address your claim that it is a Windows-only problem. Linux had it too, and some distros (Raspberry Pi) have the same limitations as Windows 95.

          32 bit Windows XP got PAE in 2001, two years after Linux. 64 bit Windows came out in 2005.

          • ArbiterXero@lemmy.world

            I’m not overly worried about a few random Linux distros that did strange things, nor about Raspberry Pis. I mean, I don’t know why you’d use 32-bit on an 8 GB Pi anyway, so it shouldn’t affect anyone unless they did something REALLY strange.

            For the average user, neither of those scenarios mattered, especially back when the problem was at its peak.

            2 years was a long time to wait to use the extra memory that Linux could use out of the box.

            I honestly don’t even remember XP having PAE, but if you NEED the validation, sure, Microsoft EVENTUALLY got it.

            Except that Microsoft removed it in SP2 LOL!

            And all the home use versions of XP still maxed out at 4gb.

            They could see the memory but couldn’t use it; oh, I’d forgotten that!

            Wikipedia was a fun read.

            • Blue_Morpho@lemmy.world

              2 years was a long time to wait to use the extra memory that Linux could use out of the box.

              For 8 years, Linux had the same limitations as Windows. Then for 2 years it was ahead. PAE could always be turned back on with a boot switch. Going back 25 years to criticize Windows is kind of weird, but you do you.

              (I run Linux on a variety of PCs, SBCs, and VMs in my house. I just get annoyed by unjustified Linux fanboyism.)

              • ArbiterXero@lemmy.world

                Not just for 2 years; XP removed it in SP2.

                And even when it supported it, many versions wouldn’t let you use it, or would let you “see” it but not use it.

                For basically the life of XP.

                • Blue_Morpho@lemmy.world

                  And as I said, it could still be enabled with a boot switch.

                  It’s not like all distros in 1999 had PAE enabled by default. You had to find a PAE-enabled kernel.

                  And Linux PAE has been buggy off and on for 20 years:

                  "It worked for a while, but the problem came back in 2022. "

                  https://flaterco.com/kb/PAE_slowdown.html

    • Amanda@aggregatet.org

      Interesting! Do you have a link to a write-up about this? I don’t know anything about the Windows memory manager.

        • AnyOldName3@lemmy.world

          It’s a silly flag to use, as it only works when running 32-bit Windows applications on 64-bit Windows, and if you’re compiling from source, you should also have the option to just build a 64-bit binary in the first place. It made a degree of sense years ago, when people sometimes actually used 32-bit Windows (which was usually just down to OEMs installing the wrong version on prebuilt PCs that could have supported 64-bit), if you really wanted to have only one binary or you consumed a precompiled third-party library and had to match its architecture.

          • wizardbeard@lemmy.dbzer0.com

            You can also toggle it on precompiled binaries with the right tool (or a hex editor if you’re insane), which was my main use case. Lots of old games never got 64-bit releases but benefit from having access to the extra RAM, especially if you’re modding them. It’s a great way to avoid out-of-memory crashes.

      • ArbiterXero@lemmy.world

        Intel PAE is the answer, but it still came with other issues, so 64-bit was still the better answer.

        Also the entire article comes down to simple math.

        Bits are the number of digits.

        So, for example, a 4-digit number maxes out at 9999, but an 8-digit number maxes out at 99 999 999.

        So when you double the number of digits, the maximum size grows exponentially: 10^4 times bigger in this case. It only sounds small because what you’re showing is that the exponent doubles.

        10^4 is WAY smaller than 10^8.
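
        A quick sketch of the same jump in binary terms (plain C, nothing assumed beyond the standard fixed-width types):

        ```c
        /* Sketch: doubling the bit count squares the range rather than
           doubling it. */
        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            printf("32-bit max: %" PRIu32 "\n", UINT32_MAX); /* ~4.3e9  */
            printf("64-bit max: %" PRIu64 "\n", UINT64_MAX); /* ~1.8e19 */
            return 0;
        }
        ```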

      • neclimdul@lemmy.world

        It was actually 3 GB, because operating systems have to reserve parts of the memory address space for other things. It’s difficult for any 32-bit operating system to address above 4 GB; it’s just that most implemented the additional complexity much earlier, Linux in particular because it runs on large servers and such. Windows actually had a way to switch over to support it in some versions too, probably the NT kernels that were also running on servers.

        A quick skim of the Wikipedia article seems like a good starting point for understanding the old problem.

        https://en.m.wikipedia.org/wiki/3_GB_barrier

        • Amanda@aggregatet.org

          Wow they just…disabled all RAM over 3 GB because some drivers had hard coded some mapped memory? Jfc

          • ms.lane@lemmy.world

            Only on consumer Windows.

            Windows Server never had the problem. But wouldn’t allow Creative Labs drivers to be installed either…

    • Blue_Morpho@lemmy.world

      I’m not sure what you are talking about. Linux got PAE in 1999. Windows XP got PAE in 2001.

  • Amanda@aggregatet.org

    The comments on this one really surprised me. I thought the kinds of people who hang out on XDA-developers were developers. I assumed that developers had a much better understanding of computer architecture than the people commenting (who of course may not be representative of all readers).

    I also get the idea that the writer is being vague not to simplify but because they genuinely don’t know the details, which feels even worse.

    • sandalbucket@lemmy.world

      I think it’s a D-tier article. I wouldn’t be surprised if it was half GPT. It could have been summarized in a single paragraph, but it was clearly drawn out to make screen real estate for the ads.

      • SaltySalamander@fedia.io

        The majority of articles I come across are exactly like this, needlessly drawing everything out to maximize word count and, thus, maximize ad space.

  • irotsoma@lemmy.world

    Because computers haven’t even come close to needing more than 16 exabytes of memory for anything. And how many applications need to do basic mathematical operations on numbers greater than 2^64? Most applications haven’t even exceeded the need for 32-bit operations, so really the push to 64-bit was primarily to address more than 4 GB of memory without slow workarounds.

    • tunetardis@lemmy.ca

      I know a Google engineer who was saying they’re having to update their code bases to handle > 16 exabytes of storage, if you can imagine. But yeah, that’s storage, not RAM.

    • Justin@lemmy.jlh.name

      Tons of computing is done on x86 these days with 256 bit numbers, and even 512-bit numbers.

      • pivot_root@lemmy.world

        Being pedantic, but…

        The amd64 ISA doesn’t have native 256-bit integer operations, let alone 512-bit. Those numbers you mention are for SIMD instructions, which is just 8x 32-bit integer operations running at the same time.

        • barsoap@lemm.ee

          The ISA does include SSE2 though, which is 128-bit, already more than the pointer width. They also doubled the number of XMM registers compared to 32-bit SSE2.

          Back in the day, using those instructions often gained you nothing, as the CPUs didn’t come with enough ALUs to actually do operations on the whole vector in parallel.

        • Justin@lemmy.jlh.name

          Ah, fair enough. I figured that since the registers are 512-bit, they’d support 512-bit math.

          It does look like you can load/store and do binary operations on 512-bit numbers, at least.

          Not much difference between 8x64 and 512 when it comes to integer math, anyways. Add and subtract are completely identical.

      • tunetardis@lemmy.ca

        You can always combine integer operations in smaller chunks to simulate something that’s too big to fit in a register. Python even does this transparently for you, so your integers can be as big as you want.
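
        A minimal sketch of that chunking trick (plain C, two 64-bit limbs and a manual carry; bignum libraries and Python’s int generalise the same idea):

        ```c
        /* Sketch: a 128-bit add built from two 64-bit limbs and a manual
           carry - the same idea arbitrary-precision integers generalise. */
        #include <stdint.h>
        #include <stdio.h>

        typedef struct { uint64_t lo, hi; } u128;

        static u128 add128(u128 a, u128 b) {
            u128 r;
            r.lo = a.lo + b.lo;
            r.hi = a.hi + b.hi + (r.lo < a.lo); /* carry out of the low limb */
            return r;
        }

        int main(void) {
            u128 a = { UINT64_MAX, 0 };  /* 2^64 - 1 */
            u128 b = { 1, 0 };
            u128 s = add128(a, b);       /* = 2^64  */
            printf("hi=%llu lo=%llu\n",
                   (unsigned long long)s.hi, (unsigned long long)s.lo);
            return 0;
        }
        ```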

        The fundamental problem that led to requiring 64-bit was when we needed to start addressing more than 4 GB of RAM. It’s kind of similar to the problem of the Internet, where 4 billion unique IP addresses falls rather short of what we need. IPv6 has a host of improvements, but the massively improved address space is what gets talked about the most since that’s what is desperately needed.

        Going back to RAM though, it’s sort of interesting that at the lowest levels of accessing memory, it is done in chunks that are larger than 8 bits, and that’s been the case for a long time now. CPUs have to provide the illusion that an 8-bit byte is the smallest addressable unit of memory, since software would break badly were this not the case, but it’s somewhat amusing to me that we still shouldn’t really need more than 32 bits to address RAM at the lowest levels, even with the 16 GB I have in my laptop right now. I’ve worked with 32-bit microcontrollers where the byte size is > 8 bits, and yeah, you can have plenty of addressable memory in there if you want to.

      • ms.lane@lemmy.world

        We are.

        Addressing-wise, no, we don’t have consumer-level 128-bit CPUs and probably won’t ever need them.

        Instruction-wise though, SSE had some 128-bit ops (OR/XOR, MOVE), and AVX is 128-bit vector math. AVX2 is 256-bit vector math, and AVX-512 is, you guessed it, 512-bit vector math. AltiVec on PPC had 128-bit vectors 20 years ago.

    • Ephera@lemmy.ml

      Quantum computers won’t displace traditional computers. There are certain niche use cases for which quantum computers can become wildly faster in the future, but for most of the calculations we do today, they’re just unreliable. So, they’ll mostly coexist.

      • UraniumBlazer@lemm.ee

        In other words, like GPUs. GPUs suck ass at complex calculations. They do, however, work great for a large number of easy calculations, which is what’s needed for graphics processing.

      • Amanda@aggregatet.org

        Presumably you’d have a QPU in your regular computer, like with other accelerators for graphics etc, or possibly a tiny one for cryptography integrated in the CPU

        • Tinidril@midwest.social

          There would have to be some kind of currently unforeseen breakthrough before something like that would be even remotely possible. In all likelihood, quantum computing will stay in specialized data centers. For the problems quantum would solve, there is really no advantage to having it local anyway.

          • Amanda@aggregatet.org

            I assume we need a lot of breakthroughs to even have useful quantum computing at all, but sure.

            Isn’t quantum encryption interesting for end users?

            • hades@lemm.ee

              Quantum encryption isn’t something quantum computers can even do. It’s not just transforming bits into other bits, it’s about building entirely new security properties based on physical properties of matter.

              So, even if it is interesting for end users, they would need dedicated hardware anyway.

  • Etterra@lemmy.world

    Okay, so why can’t we just not use exponentially growing values? Like 96-bit (64 + 32). Is there something intrinsic about the size increases that means they HAVE to be exponential? Why not linear scaling? 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, etc.

    • wewbull@feddit.uk

      We can, but it’s awkward to do so. By having everything work with powers of 2 you don’t need to have everything the same size, but can still pack things in memory efficiently.

      If your registers were 48 bits long, you could use one to store 6 bytes or 3 short ints, but only one 32-bit int, with 16 bits going unused. If they are powers of two in size, you can always fit smaller things in them with no wasted space.
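
      A little sketch of that packing arithmetic (the 48-bit register is hypothetical, of course):

      ```c
      /* Sketch: how many fields of each size fit in a register, and how
         many bits go to waste. The 48-bit register is hypothetical. */
      #include <stdio.h>

      static void pack(int reg_bits, int field_bits) {
          int count = reg_bits / field_bits;
          printf("%2d-bit fields in a %d-bit register: %d (wasted bits: %d)\n",
                 field_bits, reg_bits, count, reg_bits - count * field_bits);
      }

      int main(void) {
          int sizes[] = { 8, 16, 32 };
          for (int i = 0; i < 3; i++) pack(48, sizes[i]);  /* awkward width   */
          for (int i = 0; i < 3; i++) pack(64, sizes[i]);  /* power of two    */
          return 0;
      }
      ```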

      • asmoranomar@lemmy.world

        A better example is to explain the chaos of having to go to the grocery store and pick up some hot dogs and buns. You know the pain.

    • SorteKanin@feddit.dk

      Because CPU registers are all powers of 2, i.e. exponential in this fashion. And it’s also just the same reason - 64 is high enough, why go to 96 or 80 or something?

    • friend_of_satan@lemmy.world

      In binary, when you add one more numeric place, things double. Not doubling would be like having two digit decimal numbers but only allowing people to count to 50.

    • addie@feddit.uk

      If you made memory access lines twice as wide, they’d take up more space. More space means (a) chips run slower, because it takes time for the electricity to get there, and (b) they’d be bigger and more expensive.

      The main problem with 32-bit, as others have noted, is that that’s not really very much RAM. CPUs do addition and subtraction the way we were taught at school (‘carry the one’), and they’ve got an overflow bit that’s set when your sum doesn’t fit in the columns. On 8-bit CPUs, we were always checking the carry when adding up large numbers. On 64-bit CPUs, we can deal with truly massive numbers natively, so it’s not such a hassle. And they’re so fast at doing sums, and usually waiting for memory anyway, that it’s barely a hassle.

      Moving to 128-bit would give us a truly minuscule, probably unmeasurable, benefit in exchange for significant downsides. We could make them, but it would be pointless.