It is common to use idioms such as:
x/60.0
to force a floating-point division when x
is an integer in languages which do not have distinct operators for integer and decimal division.
Is this an accepted idiom, or is it preferable to cast x
to a floating-point type?
I believe it is a concise and clear idiom. It might show intent less clearly than an explicit cast, but it is more concise and easier to read. I also think it is less hack-ish than x + ""
to convert to a string, or x + 0.0
to convert to floating point, since there is no unnecessary no-op operation.
Thoughts?
Álex
Best Answer
The result will be language-specific, depending on how the language handles implicit conversions.
That said, if your language of choice produces a floating-point result from that implicit conversion, then yes, your example is a pretty common way of triggering floating-point arithmetic. It's quick, clean, and clear.
Some languages will require the explicit cast, so the norm in those cases would obviously be the explicit cast.
Update: Álex asked why I prefer x/60.0 over casting x to a floating-point type.
My preference is for the first, as this example will explain. Please note that my comments are still targeted at languages that allow the implicit conversion; languages that require an explicit conversion are out of scope for this answer.