General (language-agnostic) reason
Since not all numbers can be represented exactly in IEEE floating-point arithmetic (the standard that almost all computers use to represent decimal numbers and do math with them), you will not always get what you expect. This is especially true because some values that are simple, finite decimals (such as 0.1 and 0.05) are not represented exactly in the computer, so the results of arithmetic on them may not be identical to a direct representation of the "known" answer.
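For example, asking R to print more digits than it shows by default makes the inexactness visible:
print(0.1 + 0.05, digits = 17)
#[1] 0.15000000000000002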
This is a well-known limitation of computer arithmetic and is discussed in several places, most notably in R FAQ 7.31 ("Why doesn't R think these numbers are equal?").
Comparing scalars
The standard solution to this in R is not to use ==, but rather the all.equal function. Or rather, since all.equal gives lots of detail about the differences if there are any, isTRUE(all.equal(...)).
i <- 0.1 + 0.05
if(isTRUE(all.equal(i, 0.15))) cat("i equals 0.15") else cat("i does not equal 0.15")
yields
i equals 0.15
Some more examples of using all.equal instead of == (the last example is supposed to show that this will correctly show differences):
0.1+0.05==0.15
#[1] FALSE
isTRUE(all.equal(0.1+0.05, 0.15))
#[1] TRUE
1-0.1-0.1-0.1==0.7
#[1] FALSE
isTRUE(all.equal(1-0.1-0.1-0.1, 0.7))
#[1] TRUE
0.3/0.1 == 3
#[1] FALSE
isTRUE(all.equal(0.3/0.1, 3))
#[1] TRUE
0.1+0.1==0.15
#[1] FALSE
isTRUE(all.equal(0.1+0.1, 0.15))
#[1] FALSE
Some more detail, directly copied from an answer to a similar question:
The problem you have encountered is that floating point cannot represent decimal fractions exactly in most cases, which means you will frequently find that exact matches fail.
So R lies slightly when you say:
1.1-0.2
#[1] 0.9
0.9
#[1] 0.9
You can find out what it really thinks in decimal:
sprintf("%.54f",1.1-0.2)
#[1] "0.900000000000000133226762955018784850835800170898437500"
sprintf("%.54f",0.9)
#[1] "0.900000000000000022204460492503130808472633361816406250"
You can see these numbers are different, but the representation is a bit unwieldy. If we look at them in binary (well, hex, which is equivalent) we get a clearer picture:
sprintf("%a",0.9)
#[1] "0x1.ccccccccccccdp-1"
sprintf("%a",1.1-0.2)
#[1] "0x1.ccccccccccccep-1"
sprintf("%a",1.1-0.2-0.9)
#[1] "0x1p-53"
You can see that they differ by 2^-53, which is important because this number is the smallest representable difference between two numbers whose value is close to 1, as these are.
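As a sanity check, adding exactly one such step to 0.9 should reproduce the value R computed for 1.1-0.2, and it does:
sprintf("%a", 0.9 + 2^-53)
#[1] "0x1.ccccccccccccep-1"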
We can find out for any given computer what this smallest representable number is by looking at R's .Machine variable:
?.Machine
#....
#double.eps the smallest positive floating-point number x
#such that 1 + x != 1. It equals base^ulp.digits if either
#base is 2 or rounding is 0; otherwise, it is
#(base^ulp.digits) / 2. Normally 2.220446e-16.
#....
.Machine$double.eps
#[1] 2.220446e-16
sprintf("%a",.Machine$double.eps)
#[1] "0x1p-52"
You can use this fact to create a 'nearly equals' function which checks that the difference is close to the smallest representable number in floating point. In fact this already exists: all.equal.
?all.equal
#....
#all.equal(x,y) is a utility to compare R objects x and y testing ‘near equality’.
#....
#all.equal(target, current,
# tolerance = .Machine$double.eps ^ 0.5,
# scale = NULL, check.attributes = TRUE, ...)
#....
So the all.equal function is actually checking that the difference between the numbers is within the square root of the smallest representable difference between two mantissas (about 1.5e-8).
This algorithm goes a bit funny near extremely small numbers called denormals, but you don't need to worry about that.
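A quick illustration of that default tolerance (about 1.5e-8): a relative difference smaller than it passes, a larger one fails.
isTRUE(all.equal(1, 1 + 1e-9))
#[1] TRUE
isTRUE(all.equal(1, 1 + 1e-7))
#[1] FALSE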
Comparing vectors
The above discussion assumed a comparison of two single values. In R, there are no scalars, just vectors, and implicit vectorization is a strength of the language. For comparing vectors element-wise, the previous principles hold, but the implementation is slightly different. == is vectorized (does an element-wise comparison) while all.equal compares the whole vectors as a single entity.
Using the previous examples:
a <- c(0.1+0.05, 1-0.1-0.1-0.1, 0.3/0.1, 0.1+0.1)
b <- c(0.15, 0.7, 3, 0.15)
== does not give the "expected" result and all.equal does not perform an element-wise comparison:
a==b
#[1] FALSE FALSE FALSE FALSE
all.equal(a,b)
#[1] "Mean relative difference: 0.01234568"
isTRUE(all.equal(a,b))
#[1] FALSE
Rather, a version which loops over the two vectors must be used:
mapply(function(x, y) {isTRUE(all.equal(x, y))}, a, b)
#[1] TRUE TRUE TRUE FALSE
If a functional version of this is desired, it can be written
elementwise.all.equal <- Vectorize(function(x, y) {isTRUE(all.equal(x, y))})
which can be called as just
elementwise.all.equal(a, b)
#[1] TRUE TRUE TRUE FALSE
Alternatively, instead of wrapping all.equal in even more function calls, you can just replicate the relevant internals of all.equal.numeric and use implicit vectorization:
tolerance <- .Machine$double.eps^0.5
# this is the default tolerance used in all.equal,
# but you can pick a different tolerance to match your needs
abs(a - b) < tolerance
#[1] TRUE TRUE TRUE FALSE
This is the approach taken by dplyr::near, which documents itself as:
This is a safe way of comparing if two vectors of floating point numbers are (pairwise) equal. This is safer than using ==, because it has a built in tolerance.
dplyr::near(a, b)
#[1] TRUE TRUE TRUE FALSE
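One caveat: all.equal uses a relative tolerance by default, while abs(a - b) < tolerance and dplyr::near are absolute checks, so the approaches can disagree for very large values:
isTRUE(all.equal(1e10, 1e10 + 1))
#[1] TRUE
abs(1e10 - (1e10 + 1)) < tolerance
#[1] FALSE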
rbindlist is an optimized version of do.call(rbind, list(...)), which is known for being slow because rbind dispatches to rbind.data.frame.
Where does it really excel?
Some questions that show where rbindlist shines are:
Fast vectorized merge of list of data.frames by row
Trouble converting long list of data.frames (~1 million) to single data.frame using do.call and ldply
These have benchmarks that show how fast it can be.
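If you want to see this on your own machine, a minimal benchmark sketch (the list size here is arbitrary) is:
library(data.table)
# a list of many small data.frames
dfs <- replicate(1000, data.frame(x = runif(10), y = runif(10)), simplify = FALSE)
system.time(df_slow <- do.call(rbind, dfs))  # dispatches to rbind.data.frame
system.time(dt_fast <- rbindlist(dfs))       # single optimized C routine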
rbind.data.frame is slow, for a reason
rbind.data.frame does lots of checking, and will match by name; i.e. rbind.data.frame will account for the fact that columns may be in different orders, and match them up by name. rbindlist doesn't do this kind of checking, and will join by position. For example:
do.call(rbind, list(data.frame(a = 1:2, b = 2:3), data.frame(b = 1:2, a = 2:3)))
## a b
## 1 1 2
## 2 2 3
## 3 2 1
## 4 3 2
rbindlist(list(data.frame(a = 1:2, b = 2:3), data.frame(b = 1:2, a = 2:3)))
## a b
## 1: 1 2
## 2: 2 3
## 3: 1 2
## 4: 2 3
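(Note that more recent versions of data.table can match columns by name here as well; passing use.names = TRUE to rbindlist gives the same by-name behaviour as rbind.data.frame.)
rbindlist(list(data.frame(a = 1:2, b = 2:3), data.frame(b = 1:2, a = 2:3)), use.names = TRUE)
## a b
## 1: 1 2
## 2: 2 3
## 3: 2 1
## 4: 3 2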
Some other limitations of rbindlist
It used to struggle to deal with factors, due to a bug that has since been fixed:
rbindlist two data.tables where one has factor and other has character type for a column (Bug #2650)
It has problems with duplicate column names; see:
Warning message: in rbindlist(allargs) : NAs introduced by coercion: possible bug in data.table? (Bug #2384)
rbind.data.frame rownames can be frustrating
rbindlist can handle lists, data.frames and data.tables, and will return a data.table without rownames. You can get in a muddle of rownames using do.call(rbind, list(...)); see:
How to avoid renaming of rows when using rbind inside do.call?
Memory efficiency
In terms of memory, rbindlist is implemented in C, so it is memory efficient; it uses setattr to set attributes by reference. rbind.data.frame is implemented in R; it does lots of assigning, and uses attr<- (and class<- and rownames<-), all of which will (internally) create copies of the created data.frame.
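You can observe the by-reference behaviour of setattr directly; this is a small sketch using data.table's address function (the attribute name is made up for illustration):
library(data.table)
dt <- data.table(a = 1:3)
address(dt)                               # address before
setattr(dt, "note", "set by reference")   # modifies dt in place
address(dt)                               # same address: no copy was made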
Best Answer
This question was answered well in the comments by @James, pointing to an excellent explanation by Hadley Wickham of the dangers of subset (and functions like it) [here]. Go read it! It's a somewhat long read, so it may be helpful to record here the example that Hadley uses that most directly addresses the question of "what can go wrong?":
Hadley suggests the following example: suppose we want to subset and then reorder a data frame using these functions:
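A sketch of the code, reconstructed from Hadley's write-up (the exact code there may differ slightly; the names scramble and subscramble are his):
scramble <- function(x) x[sample(nrow(x)), ]
subscramble <- function(x, condition) {
  scramble(subset(x, condition))
}
subscramble(mtcars, cyl == 4)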
This returns the error:
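#Error in eval(expr, envir, enclos) : object 'cyl' not found
(The exact wording of the message varies between R versions.)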
because R no longer "knows" where to find the object called 'cyl'. He also points out the truly bizarre stuff that can happen if by chance there is an object called 'cyl' in the global environment:
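Again a reconstruction along the lines of Hadley's example:
cyl <- 4
subscramble(mtcars, cyl == 4)
# no error now, but the condition is evaluated against the global cyl,
# so you get back all of mtcars, scrambled
cyl <- sample(10, 100, replace = TRUE)
subscramble(mtcars, cyl == 4)
# an essentially arbitrary result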
(Run them and see for yourself, it's pretty crazy.)