This note shares an experiment comparing the performance of a number of data processing systems available in R. Our notional example problem is finding the top-ranking item per group (groups defined by three string columns, and order defined by a single numeric column). This is a common and frequently needed task.
First let’s compare three methods on the same grouped ranking problem.
R" (term defined as
Rplus just core packages, earlier results here). We are using
base::order()with the option "
method = "auto"" (as described here).
- The seemingly silly idea of using
reticulateto ship the data to
Python, and then using
Pandasto do the work, and finally bring the result back to
We will plot the run-times (in seconds) of these three solutions to the same task as a function of the number of rows in the problem. For all tasks shorter run-times (lower on the graph) are better. Since we are plotting a large range of values (1 through 100,000,000 rows) we present the data as a "log-log" plot.
dplyr is slower (higher up on the graph) than base R for all problem scales tested (1 row through 100,000,000 rows). Height differences on a log-y scaled graph such as this represent ratios of run-times, and we can see the ratio of dplyr to base-R runtime is large (often over 40 to 1).
Also notice that by the time we get the problem size up to 5,000 rows, even sending the data to Python and back for Pandas processing is faster than dplyr.
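For reference, the round trip can be sketched as follows (an assumed form using reticulate, not necessarily the exact benchmarked code; it requires a Python installation with pandas available):

library(reticulate)

# Sketch of the reticulate round trip: the data.frame is converted to a
# pandas DataFrame, Pandas performs the grouped ranking, and the result
# is converted back to an R data.frame.
pandas_solution <- function(d) {
  df <- r_to_py(d)  # becomes a pandas.DataFrame; stays in Python until converted back
  res <- df$sort_values(c("col_a", "col_b", "col_c", "col_x"))
  res <- res$groupby(c("col_a", "col_b", "col_c"))$head(1L)
  py_to_r(res)
}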
Note: in this article "
pandas timing" means the time it would take an
R process to use
Pandas for data manipulation. This includes the extra overhead of moving the data from
Pandas and back. This is always going to be slower than
Pandas itself as it includes extra overhead. We are not saying
R users should round trip their data through
Python and (as we will discuss later) these performance numbers alone are not a reason for
R users to switch to
Python. It does indicate that clients may not always be well-served by a pure-
dplyr or pure-
tidyverse approach. As an
R advocate, I like
R to have its best fair chance in the market, regardless of loyalty or dis-loyalty to any one set of packages.
All runs were performed on an Amazon EC2 r4.8xlarge (244 GiB RAM) instance running 64-bit Ubuntu Server 16.04 LTS (HVM), SSD Volume Type (ami-ba602bc2). We used R 3.4.4, with all packages current as of 8-20-2018 (the date of the experiment).
We are not testing dtplyr for the simple reason that it did not work with the dplyr pipeline as written. We demonstrate this issue below.
ds <- mk_data(3)

dplyr_pipeline <- . %>%
  group_by(col_a, col_b, col_c) %>%
  arrange(col_x) %>%
  filter(row_number() == 1) %>%
  ungroup() %>%
  arrange(col_a, col_b, col_c, col_x)

ds %>% dplyr_pipeline
## # A tibble: 3 x 4
##   col_a col_b col_c col_x
##   <chr> <chr> <chr> <dbl>
## 1 sym_1 sym_1 sym_1 0.751
## 2 sym_2 sym_1 sym_1 0.743
## 3 sym_2 sym_2 sym_1 0.542
ds %>% as.data.table() %>% dplyr_pipeline
## Error in data.table::is.data.table(data): argument "x" is missing, with no default
It is important to note that the reason base-R is in the running is that Matt Dowle and Arun Srinivasan of the data.table team generously ported their radix sorting code into base-R. Please see help(sort) for details. This sharing of one of data.table's more important features (fast radix sorting) back into R itself is a very big deal.
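For instance (a small illustration of the interface; the radix method has been available in base R since R 3.3.0):

x <- sample(letters, 10)
sort(x, method = "radix")    # explicitly request the contributed radix sort
order(x, method = "auto")    # "auto" lets R pick the method, radix for most common atomic types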
For our example we used what I consider a natural or idiomatic dplyr solution to the problem. We saw that code or pipeline just above. That code may not be preferred, as dplyr has known (unfixed) issues with filtering in the presence of grouping. Let's try to work around that with the following code (pivoting as many operations out of the grouped data section of the pipeline as practical).
ds %>%
  arrange(col_x) %>%
  group_by(col_a, col_b, col_c) %>%
  mutate(rn = row_number()) %>%
  ungroup() %>%
  filter(rn == 1) %>%
  select(col_a, col_b, col_c, col_x) %>%
  arrange(col_a, col_b, col_c, col_x)
We will call the above solution "
dplyr_b". A new comparison including "
dplyr_b" is given below.
In the above graph we added data.table results and left out the earlier Pandas results. It is already known that working with R is typically competitive with (and sometimes faster than) working with Python (some results are given here, here); so R users should not be seriously considering round-tripping their data through Python to get access to Pandas, and (at least with data.table) R users should not have data manipulation performance as a reason to abandon R.
There are at least two ways to think about the relation of the dplyr and dplyr_b solutions. One interpretation is that we found a way to speed up our dplyr code by a factor of 5. The other interpretation is that small variations in dplyr pipeline specification can easily affect your run-times by a factor of 5. At no scale tested does either of the dplyr solutions match the performance of either base-R or data.table. The ratio of the runtime of the first (or more natural) dplyr solution over the data.table runtime (data.table being by far the best solution) is routinely over 80 to 1.
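For reference, a data.table solution to the task might look something like the following (a sketch, not necessarily the exact benchmarked pipeline):

library(data.table)

# data.table sketch: sort once by the grouping columns plus col_x, then
# keep the first row per group (unique() keeps first occurrences).
data_table_solution <- function(d) {
  dt <- as.data.table(d)
  setorder(dt, col_a, col_b, col_c, col_x)
  unique(dt, by = c("col_a", "col_b", "col_c"))
}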
We can take a closer look at the ratio of run-times. In our next graph we present the ratio of the two dplyr solution run-times to the data.table solution run-time. We will call the ratio of the runtime of the first dplyr solution over the data.table run time "ratio_a", and call the ratio of the runtime of the second (improved) dplyr solution over the data.table run time "ratio_b".
A practical lesson is to look at what happens at 5 million rows (times in seconds). At this scale data.table takes about 1 second. Base-R takes about 2 seconds (longer, but tolerable). dplyr takes 90 seconds (or 17 seconds for the improved variation). These are significantly different user experiences. We have also included the timing for rqdatatable, which relies on data.table as its implementation and has some data-copying overhead (in this case leading to a total runtime of 3 seconds).
In our simple example we have seen very large differences in performance driven by seemingly small code changes. This emphasizes the need to benchmark one’s own tasks and workflows. Choosing tools based on mere preference or anecdote may not be safe. Also, even if one does not perform such tests, clients often do see and experience overall run times when scheduling jobs and provisioning infrastructure. Even if you do not measure, somebody else may be measuring later.
We must emphasize that performance of these systems will vary from example to example. However, the above results are consistent with what we have seen (informally) in production systems. In comparing performance one should look to primary sources (experiments actually run, such as this) over repeating indirect and unsupported (in the sense of no shared code or data) claims (or at least run such claims down to their primary sources).
Full results are below (and all code and results are here and here). Times below are reported in seconds.
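As a rough illustration of how one might collect such timings (an assumed harness using the microbenchmark package, not the article's actual benchmarking code; base_r_solution() and data_table_solution() are the sketches from earlier in this note):

library(microbenchmark)

# Time the sketched solutions on one problem size; report in seconds.
timings <- microbenchmark(
  base_r     = base_r_solution(ds),
  data_table = data_table_solution(ds),
  times = 10L
)
print(timings, unit = "s")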
A couple of comments about the code used. First, your dplyr pipelines arrange twice. If you want everything sorted at the end, I think it's more efficient to arrange just once at the beginning. Second, for data frames, I find it more natural to use "slice" instead of filtering by row number.
Also, there is a third way not considered here that is 1) even more idiomatic and 2) faster: summarising by the minimum of the grouped data frame! Here are my pipeline versions (trying a “pre” HTML block, not sure if it will work…):
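(The commenter's code block is not reproduced here; the following is a sketch of what such pipelines might look like, based on the description above.)

library(dplyr)

# Variation 1: arrange once up front, then take the first row per group
# with slice(). (Assumed form, not the commenter's original code.)
ds %>%
  arrange(col_a, col_b, col_c, col_x) %>%
  group_by(col_a, col_b, col_c) %>%
  slice(1) %>%
  ungroup()

# Variation 2: summarise by the per-group minimum of col_x.
ds %>%
  group_by(col_a, col_b, col_c) %>%
  summarise(col_x = min(col_x)) %>%
  ungroup() %>%
  arrange(col_a, col_b, col_c, col_x)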
Thanks for the pipelines. These are variations one can try; they may or may not help enough to close the huge runtime ratios.
I hadn't thought of using slice() for the simple reason I do a lot of work with databases, where slice() is not available; one usually gets the error message: Error in UseMethod("slice_") : no applicable method for 'slice_' applied to an object of class "c('tbl_dbi', 'tbl_sql', 'tbl_lazy', 'tbl')". That may or may not represent a significant speedup for in-memory work.
I think at best the second pipeline would speed us up by a factor of 2 (that would be under the untested assumption that most of the time is in sorting, and that having one sorting stage instead of two is indeed half that work). So that one is good, but not enough to fix the large run time ratios we are seeing (around 50 times).
summarize() works for the exact problem at hand, but won't fit my (unfortunately unstated) application need of bringing over other columns from the winning rows (there were no such columns in the minimized example, but there often are in such applications). So it is a good idea, but not as powerful as picking by row-number.
Of course the above is all just speculation. When tested, none of these variations produced a dplyr timing that was competitive with base-R or data.table (not repeating the reticulate timing, as that is not a serious alternative). Running them all at small scale gives the following results.
Notice all the new dplyr variations form a pack (or group) on the graph and do not approach the performance of base R or data.table. The problem was not fixed by more dplyr pipeline variations.
Details here and here.