I have some doubles that I want to convert to NSDecimalNumber, because I'm running into floating-point arithmetic errors with them.
They are:
double one = 0.0000001;
double two = 1000000000.0000001;
They are very problematic, but I hope that NSDecimalNumber can help me get the calculations right. I'm no math genius, so I wonder how to provide the correct input to this NSDecimalNumber factory method.
Let me try:
NSDecimalNumber *one = [NSDecimalNumber decimalNumberWithMantissa:1 exponent:-7 isNegative:NO];
NSDecimalNumber *two = [NSDecimalNumber decimalNumberWithMantissa:10000000000000001 exponent:-7 isNegative:NO];
I feel that's wrong; I could only guess. The documentation does not provide much information on this. But as far as I understand it, the mantissa is an integer and the exponent tells where the decimal point should go, adding zeros as needed. Probably this is also wrong 😉
I have seen some code snippets where people just fed a CGFloat as the mantissa and provided 0 as the exponent, but I can only guess what their intention was, so I can't do the same without understanding it.
Any idea?
Best Answer
As stated in the class reference, numbers are represented as

mantissa x 10^exponent

So you are pretty much right with your assumptions.

Your number is, first of all, the mantissa. The decimal point sits after your number and gets pushed to the right exponent times (if the exponent is positive, zeros are appended since the point stays at the right end; a negative exponent pushes the point to the left instead). Finally, a minus sign is put in front of the number according to the isNegative flag.

So for example, if you have 123 x 10^(-3), it starts as 123., goes over 12.3 and 1.23 to 0.123, where the decimal point ends up three digits further left than before.

Note that this type also has restrictions; the class reference lists them.
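The walk-through above can be checked directly. A minimal sketch (the comment shows the value NSDecimalNumber's default description prints):

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // 123 x 10^(-3): the decimal point moves three places to the left
        NSDecimalNumber *n = [NSDecimalNumber decimalNumberWithMantissa:123
                                                               exponent:-3
                                                             isNegative:NO];
        NSLog(@"%@", n); // 0.123
    }
    return 0;
}
```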
Maybe Wikipedia can explain the scientific notation of numbers better; that is exactly what you have to use here :-)
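Applied to the numbers from the question, both guesses from the original post are in fact correct, and the arithmetic stays exact. A sketch (variable names are mine):

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // 0.0000001 = 1 x 10^(-7)
        NSDecimalNumber *one = [NSDecimalNumber decimalNumberWithMantissa:1
                                                                 exponent:-7
                                                               isNegative:NO];
        // 1000000000.0000001 = 10000000000000001 x 10^(-7)
        NSDecimalNumber *two = [NSDecimalNumber decimalNumberWithMantissa:10000000000000001ULL
                                                                 exponent:-7
                                                               isNegative:NO];
        // Decimal addition: no binary floating-point rounding involved
        NSDecimalNumber *sum = [one decimalNumberByAdding:two];
        NSLog(@"%@", sum); // 1000000000.0000002
    }
    return 0;
}
```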