SQL – Why Numeric Precision is Specified by Number of Digits

Tags: numeric-precision, sql

I understand that this might be a loaded question.
For example, in SQL Server 2008, doing DECLARE @someNum NUMERIC(10); means that the range of the variable will be limited to 10 digits. In other languages such as C, Java, etc., the standards for representing whole and fractional numbers are different. The range of values is carved up in multiples of bytes, which is understandable given the way hardware works. A language could provide something that most hardware does not yet support, such as a 256-bit integer type, which would still be fast on a 64-bit computer if implemented correctly.
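
To make the point concrete, here is a minimal sketch (assuming SQL Server 2008 or later; the variable names are just placeholders) of the digit-based limit I mean:

    -- The cap is a count of decimal digits, unlike INT (4 bytes) or BIGINT (8 bytes).
    DECLARE @someNum NUMERIC(10) = 9999999999;    -- exactly 10 digits: fits
    SELECT @someNum AS largest_10_digit_value;

    -- If uncommented, the next line fails with an arithmetic overflow error,
    -- because 10000000000 has 11 digits:
    -- DECLARE @tooBig NUMERIC(10) = 10000000000;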

The numeric types in databases seem not to be tied to the underlying hardware. I understand that the values might need to be nullable, which can cost a bit or two, but why such arbitrary precision that does not seem to optimize either storage or calculation speed?

Perhaps databases do not need to perform numeric computations often. Perhaps the digit width of a column is a natural extension of character width. Perhaps SQL was meant to be a non-programmer's language (because it sounds like English 🙂)?
Perhaps history took a weird turn, like the one that allowed the inferior VHS format to prosper.

Please help me understand this.

Best Answer

doing DECLARE @someNum NUMERIC(10); means that the range of the variable will be limited to 10 digits.

Actually, you're either very wrong in that assertion, or SQL Server is hugely incompatible with other DB engines. Typically, you have:

  • NUMERIC(precision, scale)
  • NUMERIC(precision)

In the second form, the scale defaults to zero, i.e. you can only handle integers; the precision is the maximum total number of digits at the specified scale (so, with a scale of zero, the maximum number of integer digits).
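
A rough illustration (T-SQL syntax; the variable names are made up, and the exact rounding behavior may differ slightly between engines):

    DECLARE @intLike  NUMERIC(10)    = 1234567890;   -- scale 0: integers only
    DECLARE @withFrac NUMERIC(10, 2) = 12345678.90;  -- 10 digits total, 2 of them fractional

    -- Assigning a fractional value to NUMERIC(10) rounds it to an integer:
    DECLARE @rounded  NUMERIC(10)    = 1234.56;
    SELECT @rounded AS rounded_value;                -- 1235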

By contrast, you seem to be reading NUMERIC(10) as meaning "any number, as long as it has 10 digits or fewer", for which there is no SQL construct (as best I'm aware, anyway) short of the single/double float types (whose names differ per DB engine).


The numeric types in databases seem not to be tied to the underlying hardware.

Indeed, they're not. NUMERIC/DECIMAL is an arbitrary-precision number, stored very much like a varchar of huge length (1,000+ digits in engines that allow it). You can actually use it to store 1e100 + 1 as an exact value: you'll get a 1, followed by 99 zeros, and finally another 1.
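
For instance, a sketch of that check (SQL Server itself caps NUMERIC/DECIMAL at 38 digits, so you'd need an engine with a wider NUMERIC, such as PostgreSQL, to run it):

    SELECT CAST(1e100 AS NUMERIC(110, 0)) + 1 AS exact_value;
    -- Returns a 1, then 99 zeros, then a final 1: an exact 101-digit integer.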

It sounds like you actually want a float... A float, given 1e100 + 1, would store and yield just 1e100, because the extra 1 is below the precision it can represent.
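
A comparable sketch for the float case (both SQL Server and PostgreSQL use an IEEE 754 double for their 8-byte float, so the result should be the same in either):

    SELECT CAST(1e100 AS FLOAT) + 1 AS approximate_value;
    -- 1e+100: the +1 is far below what a 53-bit mantissa
    -- (roughly 15-17 significant decimal digits) can represent.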