Java and C# – Why 32-bit Integers and 64-bit Floating Points Are Common

c# java language-design

Coming from a Java and C# background, I've learned to use int (32 bits) whenever I need a whole number, and double (64 bits) when dealing with fractional values. Most methods in their respective frameworks (JVM and .NET) expect these two types.
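
For example, a minimal Java sketch (the class name is mine; the C# story with List<T>.Count and Math.Sqrt looks much the same):

    import java.util.List;

    public class DefaultTypes {
        public static void main(String[] args) {
            List<String> names = List.of("a", "b", "c");

            // Sizes and indices in the standard library are int:
            int size = names.size();       // List.size() returns int
            String first = names.get(0);   // List.get(int index)

            // Most numeric APIs speak double:
            double root = Math.sqrt(2.0);  // Math.sqrt(double) returns double

            System.out.println(size + " " + first + " " + root);
        }
    }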

My question is, why don't we use both long and double for consistency? I know that having 64 bits of precision in integers is not needed most of the time, but then again, we don't usually need 64 bits of precision in floating point numbers, or do we?

What is the reasoning behind this, if any?

Best Answer

Range vs. Precision

First, I'd contest the idea that the most common floating-point representation is 64-bit DPFP (double-precision floating-point).

At least in performance-critical real-time fields like games, SPFP (single-precision floating-point) is still far more common, since approximation and speed there are preferable to utmost accuracy.

Yet perhaps one way to look at this is that a 32-bit int represents a range of 2^32 integers (~4.3 billion). The most common use of integers is probably going to be as indices to elements, and that's a pretty healthy range of elements that would be difficult to exceed without exceeding the memory available with today's hardware *.

* Note that out-of-memory errors can occur when allocating/accessing a single, contiguous 4-gigabyte block even with 30 gigabytes free, due to the contiguity requirement on that block.
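
To put that range in concrete terms, here is a small Java sketch (Java arrays are themselves indexed by a signed int, so about 2.1 billion non-negative indices):

    public class IndexRange {
        public static void main(String[] args) {
            // A signed 32-bit int gives ~2.1 billion non-negative indices;
            // the full 2^32 range is ~4.3 billion distinct values.
            System.out.println(Integer.MAX_VALUE);   // 2147483647

            // An int[] of that length already needs ~8 GiB of contiguous memory,
            // so index range and available memory tend to run out together.
            long bytes = (long) Integer.MAX_VALUE * Integer.BYTES;
            System.out.printf("int[Integer.MAX_VALUE] ~ %.1f GiB%n",
                    bytes / (1024.0 * 1024.0 * 1024.0));
        }
    }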

A 32-bit integer isn't always more efficient at the instruction level, but it generally is more efficient when aggregated into an array, since it requires half the memory of a 64-bit integer (so more indices fit into a single page or cache line).
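
A rough sketch of that halving, assuming a typical 64-byte cache line (the exact line size varies by CPU):

    public class ArrayFootprint {
        public static void main(String[] args) {
            final int CACHE_LINE = 64; // bytes; typical, not guaranteed

            // Twice as many 32-bit indices fit per cache line:
            System.out.println("ints per line:  " + CACHE_LINE / Integer.BYTES); // 16
            System.out.println("longs per line: " + CACHE_LINE / Long.BYTES);    // 8

            // And a million indices take half the memory as int[]:
            int n = 1_000_000;
            System.out.println("int[n]  ~ " + (long) n * Integer.BYTES / 1024 + " KiB");
            System.out.println("long[n] ~ " + (long) n * Long.BYTES / 1024 + " KiB");
        }
    }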

Also note that, as Lightness Races in Orbit points out, it's not necessarily even true from a broad perspective that 32-bit integers are more commonly used. I have a narrow perspective, coming from a field where 32-bit ints are often aggregated by the hundreds of thousands to millions as indices into another structure -- there the halving in size can help a lot.

Now 64-bit DPFP might be used a whole lot more than 64-bit integers in some contexts. There the extra bits add precision rather than range. A lot of applications demand precision, or are at least much easier to program when extra precision is available. That's probably why 64-bit DPFPs can be more common than 64-bit integers in some areas, and why int is still 32 bits in many scenarios even on 64-bit platforms.
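
A quick way to see the precision side, as a minimal naive-summation sketch (exact outputs vary with rounding, but float's ~7 significant decimal digits give out long before double's ~15-16):

    public class PrecisionDemo {
        public static void main(String[] args) {
            // Add 0.1 ten million times; the exact result would be 1,000,000.
            float fSum = 0.0f;
            double dSum = 0.0;
            for (int i = 0; i < 10_000_000; i++) {
                fSum += 0.1f;
                dSum += 0.1;
            }
            System.out.println("float  sum: " + fSum); // drifts far from 1,000,000
            System.out.println("double sum: " + dSum); // very close to 1,000,000
        }
    }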
