Tuesday 5 May 2015

64 Bit



In computing, 64-bit is an adjective describing an architecture in which the standard format of a simple variable (integer, pointer, handle, etc.) is 64 bits long. This generally reflects the size of the CPU's internal registers in that architecture.

The term "64-bit" may be used to describe the size of:

A unit of data

The internal registers of a CPU, or the ALU that works on those registers
Memory addresses
Transferred data for each read or write to main memory

32-bit versus 64-bit

The transition from a 32-bit architecture to a 64-bit one involves a profound change, since most operating systems must be heavily modified to take advantage of the new architecture. Other programs must first be "ported" to take advantage of the new features; old programs are usually supported either by a hardware compatibility mode (that is, the processor also supports the old 32-bit instruction set), through software emulation, or by implementing a 32-bit processor core inside the processor chip itself (as in Intel's Itanium processors, which include an x86 core).

A significant exception is the AS/400, whose software runs on a virtual ISA (Instruction Set Architecture) called TIMI (Technology Independent Machine Interface), which is translated to native machine code by a layer of low-level software before execution. This layer is all that needs to be rewritten to bring the entire operating system and all programs to a new platform, as when IBM migrated from its old line of 32/48-bit "IMPI" processors to 64-bit PowerPC (IMPI had nothing in common with 32-bit PowerPC, so this was a more challenging transition than moving from a 32-bit instruction set to a 64-bit version of the same one). Another significant exception is IBM's z/Architecture, which smoothly runs applications with different addressing widths (24, 32, and 64 bit) simultaneously.

Although 64-bit architectures indisputably make it easier to work with massive amounts of data, such as digital video, scientific computing, and large databases, there has been debate about whether they, or their 32-bit compatibility modes, are faster than 32-bit systems of comparable price for other kinds of work.

Theoretically, some programs may be faster in 32-bit mode. On some architectures, 64-bit instructions take up more space than 32-bit ones, so it is possible that certain 32-bit programs fit in the CPU's fast cache memory where their 64-bit counterparts do not. In other words, using 64 bits for operations that could be handled at 32 is an unnecessary waste of resources (memory, cache, etc.). However, in applications such as scientific computing, the data processed naturally comes in 64-bit blocks, and such work will therefore be faster on a 64-bit architecture, because the CPU is designed to operate directly on data of that size rather than forcing programs to perform multiple steps to accomplish the same thing.

These assessments are complicated by the fact that, when defining new architectures, instruction set designers have taken the opportunity to fill gaps in the old one, adding new features designed to improve performance (such as, for example, the additional registers in the AMD64 architecture).