In computer architecture, 128-bit integers, memory addresses, or other data units are those that are 128 bits (16 octets) wide. Also, 128-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers, address buses, or data buses of that size.
General home computing and gaming utility emerged at 8-bit word sizes, as 2⁸ = 256 words, a natural unit of data, became possible. Early 8-bit CPUs (such as the Zilog Z80 and MOS Technology 6502, used in the 1977 PET, TRS-80, and Apple II) inaugurated the era of personal computing. Many 16-bit CPUs already existed in the mid-1970s. Over the next 30 years, the shift to 16-bit, 32-bit and 64-bit computing allowed, respectively, 2¹⁶ = 65,536 unique words, 2³² = 4,294,967,296 unique words and 2⁶⁴ = 18,446,744,073,709,551,616 unique words, each step offering a meaningful advantage until 64 bits was reached.

These advantages largely disappear in the move from 64-bit to 128-bit computing: the number of possible values in a register grows from roughly 18 quintillion (1.8×10¹⁹) to 340 undecillion (3.4×10³⁸), far more unique values than are ever utilized. Thus, a register that can store 2¹²⁸ values offers no advantage over 64-bit computing for either home computing or gaming. CPUs with a larger word size also require more circuitry, are physically larger, consume more power and generate more heat. Consequently, there are currently no mainstream general-purpose processors built to operate on 128-bit integers or addresses, although a number of processors do have specialized ways to operate on 128-bit chunks of data, as listed in § History.
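A minimal sketch in C can illustrate the size gap and how 128-bit quantities are typically handled today, assuming a GCC- or Clang-compatible compiler on a 64-bit target: the non-standard `unsigned __int128` type is lowered by the compiler into pairs of 64-bit operations, since the general-purpose registers themselves are only 64 bits wide.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 2^64 - 1: the largest value a single 64-bit register can hold. */
    uint64_t max64 = UINT64_MAX;

    /* unsigned __int128 is a GCC/Clang extension; each 128-bit operation
       is emulated with a pair of 64-bit instructions. */
    unsigned __int128 max128 = ~(unsigned __int128)0;   /* 2^128 - 1 */

    printf("2^64  - 1 = %llu\n", (unsigned long long)max64);

    /* printf has no standard conversion for 128-bit integers, so print
       the value as its upper and lower 64-bit halves. */
    printf("2^128 - 1 = 0x%016llx%016llx\n",
           (unsigned long long)(max128 >> 64),
           (unsigned long long)max128);
    return 0;
}
```

The second line of output is 32 hexadecimal f's (16 octets of all ones), showing why 128-bit values are stored and manipulated as two 64-bit words on current hardware.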