Consider the problem of “parametric programming” in R. That is: simply writing correct code before knowing some details, such as the names of the columns your procedure will have to be applied to in the future. Our latest version of
replyr::let makes such programming easier.
(edit: great news! CRAN just accepted our
replyr 0.2.0 fix release!)
Please read on for examples comparing the standard notations with the replyr::let notation.
Suppose, for example, your task was to build a new advisory column that tells you which values in a column of a
data.frame are missing or
NA. We will illustrate this in R using the example data given below:
d <- data.frame(x = c(1, NA))
print(d)
 #    x
 # 1  1
 # 2 NA
Performing an ad hoc analysis is trivial in
R: we would just directly write:
d$x_isNA <- is.na(d$x)
We used the fact that we are looking at the data interactively to note the only column is “
x”, and then picked “
x_isNA” as our result name. If we want to use
dplyr the notation remains straightforward:
library("dplyr")
 #
 # Attaching package: 'dplyr'
 # The following objects are masked from 'package:stats':
 #
 #     filter, lag
 # The following objects are masked from 'package:base':
 #
 #     intersect, setdiff, setequal, union

d %>% mutate(x_isNA = is.na(x))
 #    x x_isNA
 # 1  1  FALSE
 # 2 NA   TRUE
Now suppose, as is common in actual data science and data wrangling work, we are not the ones picking the column names. Instead suppose we are trying to produce reusable code to perform this task again and again on many data sets. In that case we would then expect the column names to be given to us as values inside other variables (i.e., as parameters).
cname <- "x"                               # column we are examining
rname <- paste(cname, "isNA", sep = '_')   # where to land results
print(rname)
 # [1] "x_isNA"
And writing the matching code is again trivial:
d[[rname]] <- is.na(d[[cname]])
We are now programming at a slightly higher level; that is, automating tasks. We don’t need to type in new code each time a data set with a different column name comes in. It is now easy to write an
lapply over a list of columns to analyze many columns in a single data set. It is an absolute travesty when something that is purely virtual (such as formulas and data) cannot be automated over. So the slightly clunkier “
[]” notation (which can be automated) is a necessary complement to the more convenient “
$” notation (which is too specific to be easily automated over).
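For instance, here is a minimal sketch of automating the check over several columns at once with lapply and the "[[ ]]" notation (the two-column data.frame d2 and the column list are hypothetical illustrations, not part of the article's running example):

```r
# hypothetical wider example: automate the NA check over many columns
d2 <- data.frame(x = c(1, NA), y = c(NA, 2))
cols <- c("x", "y")
# compute one advisory column per input column, driven by column names as values
newcols <- lapply(cols, function(cname) is.na(d2[[cname]]))
names(newcols) <- paste(cols, "isNA", sep = "_")
d2 <- cbind(d2, as.data.frame(newcols))
# d2 now has columns x, y, x_isNA, y_isNA
```

Because the column names are ordinary character values, the same loop works unchanged on the next data set that arrives with different names.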
Using
dplyr directly (when you know all the names) is deliberately straightforward, but programming over
dplyr can become a challenge.
The standard parametric
dplyr practice is to use
dplyr::mutate_ (the standard evaluation or parametric variation of
dplyr::mutate). Unfortunately, the notation for using such an “underbar form” is currently cumbersome.
You have the choice of building up your expression through variations of one of:
- A formula
- A string
Let us try a few of these, to emphasize that we are proposing a new solution not because we are unaware of the current solutions, but because we are familiar with them.
The formula interface is a nice option, as formulas are
R’s common way of holding names unevaluated. The code looks like the following (edit: but, as the incorrect output shows, it does not work):
d %>% mutate_(RCOL = lazyeval::interp(~ is.na(cname))) %>%
  rename_(.dots = stats::setNames('RCOL', rname))
 #    x x_isNA
 # 1  1  FALSE
 # 2 NA  FALSE
(edit: it looks like the following actually works:)
d %>% mutate_(RCOL = lazyeval::interp(~ is.na(VAR), VAR = as.name(cname))) %>%
  rename_(.dots = stats::setNames('RCOL', rname))
mutate_ does not take “two-sided formulas,” so we need to control names outside of the formula. In this case we used the explicit
dplyr::rename_ because attempting to name the assignment in-line does not seem to be supported (or, if it is supported, it uses a different notation or convention than the one we have just seen; edit: the in-line naming attempt below also does not work):
 # the following does not correctly name the result column
d %>% mutate_(.dots = stats::setNames(lazyeval::interp(~ is.na(cname)), rname))
 #    x is.na(cname)
 # 1  1        FALSE
 # 2 NA        FALSE
quote() can delay evaluation, but it isn’t the right tool for parameterizing (what the linked NSE reference called “mixing constants and variables”). We have a hard time getting control of the incoming and outgoing variable names.
 # dplyr mutate_ quote non-solution (hard-coded x, failed to name result)
d %>% mutate_(.dots = stats::setNames(quote(is.na(x)), rname))
 #    x is.na(x)
 # 1  1    FALSE
 # 2 NA     TRUE
My point is: even if this is something you know how to accomplish, the difficulty itself is evidence that we are swimming upstream with this notation.
String-based solutions can involve using
paste to get parameter values into the strings. Here is an example:
 # dplyr mutate_ paste stats::setNames solution
d %>% mutate_(.dots = stats::setNames(paste0('is.na(', cname, ')'), rname))
 #    x x_isNA
 # 1  1  FALSE
 # 2 NA   TRUE
Or we can use strings as an interface to control lazyeval::interp:
 # dplyr mutate_ lazyeval::interp solution
d %>% mutate_(RCOL = lazyeval::interp("is.na(cname)", cname = as.name(cname))) %>%
  rename_(.dots = setNames('RCOL', rname))
 #    x x_isNA
 # 1  1  FALSE
 # 2 NA   TRUE
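The replyr::let approach this article is building toward takes a different tack: you write the code with placeholder names and supply an alias list mapping placeholders to the actual column names. The substitution idea behind it can be sketched in base R as follows (this is NOT the real replyr implementation; let_sketch is a made-up name, and base transform stands in for a dplyr pipeline):

```r
# toy sketch of the name-substitution idea behind replyr::let
# (NOT the real replyr implementation; for illustration only)
let_sketch <- function(alias, expr_text) {
  # textually substitute each placeholder name with its target name
  for (nm in names(alias)) {
    expr_text <- gsub(paste0("\\b", nm, "\\b"), alias[[nm]], expr_text)
  }
  # evaluate the substituted code in the caller's environment
  eval(parse(text = expr_text), envir = parent.frame())
}

d <- data.frame(x = c(1, NA))
cname <- "x"
rname <- paste(cname, "isNA", sep = "_")
# write the code as if the columns were named CNAME / RNAME
res <- let_sketch(
  alias = list(CNAME = cname, RNAME = rname),
  "transform(d, RNAME = is.na(CNAME))"
)
```

With the actual package one would write something like replyr::let(alias = list(CNAME = cname, RNAME = rname), d %>% mutate(RNAME = is.na(CNAME))); check the replyr documentation for the exact interface.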