Our group has written a lot on calibration of models and even conditional calibration of models. In our last note we mentioned the possibility of “fully calibrated models.” This note is an example of a probability model that is calibrated in the traditional sense, but not fully calibrated in a […]
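
As a rough illustration of what "calibrated in the traditional sense" means (a minimal sketch with made-up data, not the example from the note): a logistic regression fit with an intercept reproduces the average outcome of its own training data.

# sketch: traditional (unconditional) calibration check, illustrative data only
d <- data.frame(x = c(-2, -1, 0, 1, 2, 3), y = c(0, 0, 1, 0, 1, 1))
model <- glm(y ~ x, family = binomial(), data = d)
pred <- predict(model, type = "response")
mean(pred)  # average prediction
mean(d$y)   # average outcome; the two match for logistic regression with an intercept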

Estimated reading time: 4 minutes

Two related fallacies I see in machine learning practice are the shift and balance fallacies (for an earlier simple fallacy, please see here). They involve thinking logistic regression has a somewhat simpler structure than it actually does, and also thinking logistic regression is a bit less powerful than it actually […]

Estimated reading time: 7 minutes

Let’s take a stab at our first note on a topic that is much easier to write now that we have pre-established the definitions of probability model homotopy. In this note we will discuss tailored probability models. These are models deliberately fit to training data that has an outcome prevalence equal to the expected […]
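
As a rough sketch of the kind of prevalence-matched training data the note alludes to (illustrative data; the 0.10 target prevalence is a made-up placeholder, not from the note), one could re-sample positive and negative examples to a chosen prevalence before fitting:

# sketch: build a training set with a chosen outcome prevalence, then fit
set.seed(2023)
d <- data.frame(x = rnorm(1000))
d$y <- rbinom(1000, 1, plogis(d$x))
pos <- d[d$y == 1, ]
neg <- d[d$y == 0, ]
train <- rbind(head(pos, 50), head(neg, 450))  # 50 / 500 = 0.10 prevalence
mean(train$y)  # prevalence of the tailored training set
model <- glm(y ~ x, family = binomial(), data = train)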

Estimated reading time: 4 minutes

A common misunderstanding of linear regression and logistic regression is the belief that the intercept encodes the unconditional mean or the training data prevalence. This is easily seen not to be the case. Consider the following example in R. library(wrapr) We set up our example data. # build our […]
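
The point is easy to demonstrate with a minimal sketch (not the article's wrapr example): the fitted intercept of a linear regression is, in general, not the mean of y.

# sketch: the intercept does not encode the unconditional mean of y
d <- data.frame(x = c(1, 2, 3, 4), y = c(1, 2, 2, 4))
model <- lm(y ~ x, data = d)
coef(model)[["(Intercept)"]]  # 0
mean(d$y)                     # 2.25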

Estimated reading time: 1 minute

I’ve added a worked R example of the non-convexity, with respect to the model parameters, of the square loss of a sigmoid-derived prediction here. This finishes an example for our Python note “Why not Square Error for Classification?”. Reading that note will give useful context and background for this diagram. […]
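
For a rough idea of the shape involved (a sketch with made-up data, not the worked example from the note): the square loss of a sigmoid prediction, viewed as a function of a single coefficient b, flattens out at both ends, and a bounded non-constant function cannot be convex.

# sketch: square loss of a sigmoid prediction as a function of one coefficient b
sigmoid <- function(z) 1 / (1 + exp(-z))
x <- c(-4, 1)  # made-up inputs
y <- c(0, 1)   # made-up outcomes
b <- seq(-10, 10, by = 0.1)
loss <- vapply(b, function(bi) sum((y - sigmoid(bi * x))^2), numeric(1))
plot(b, loss, type = "l",
     main = "square loss of a sigmoid prediction (not convex in b)")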

Estimated reading time: 59 seconds

Win Vector LLC has been developing and delivering a lot of “statistics, machine learning, and data science for engineers” intensives in the past few years. These are bootcamps, or workshops, designed to help software engineers become more comfortable with machine learning and artificial intelligence tools. The current thinking is: not […]

Estimated reading time: 2 minutes

There’s a common, yet easy to fix, mistake that I often see in machine learning and data science projects and teaching: using classification rules for classification problems. This statement is a bit of word-play which I will need to unroll a bit. However, the concrete advice is that you often […]
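
As a minimal sketch of the distinction (illustrative data, not from the post): ask a logistic regression for probability predictions, and treat any hard thresholding into classes as a separate, later decision.

# sketch: keep probability predictions; a classification rule throws information away
d <- data.frame(x = c(-2, -1, 0, 1, 2, 3), y = c(0, 0, 1, 0, 1, 1))
model <- glm(y ~ x, family = binomial(), data = d)
probs <- predict(model, newdata = d, type = "response")  # probabilities
rule  <- as.numeric(probs >= 0.5)                        # hard classification rule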

Estimated reading time: 6 minutes

One of the chapters that we are especially proud of in Practical Data Science with R is Chapter 7, “Linear and Logistic Regression.” We worked really hard to explain the fundamental principles behind both methods in a clear and easy-to-understand form, and to document diagnostics returned by the R implementations […]

Estimated reading time: 52 seconds

In our previous post in this series, we introduced sessionization, or converting log data into a form that’s suitable for analysis. We looked at basic considerations, like dealing with time, choosing an appropriate dataset for training models, and choosing appropriate (and achievable) business goals. In that previous example, we sessionized […]
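
As a much simplified sketch of the idea (a tiny made-up event log and a 30-minute cutoff; the posts themselves work at larger scale): mark a new session whenever the gap between a user's consecutive events exceeds the cutoff.

# sketch: assign session ids by time gaps within each user
events <- data.frame(
  user = c("a", "a", "a", "b", "b"),
  time = c(0, 100, 4000, 50, 60)  # seconds
)
events <- events[order(events$user, events$time), ]
gap <- ave(events$time, events$user, FUN = function(t) c(0, diff(t)))
events$session <- ave(as.numeric(gap > 1800), events$user, FUN = cumsum) + 1
events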

Estimated reading time: 22 minutes

When we teach data science, we emphasize the data scientist’s responsibility to transform available data from multiple systems of record into a wide or denormalized form. In such a “ready to analyze” form, each individual example gets a row of data and every fact about the example is a column. […]
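
A tiny sketch of what such a "ready to analyze" wide form looks like (made-up facts, using base R's reshape; the post itself goes much further):

# sketch: move facts about each example from long rows into columns
long <- data.frame(
  id = c(1, 1, 2, 2),
  fact = c("age", "income", "age", "income"),
  value = c(35, 50000, 42, 61000)
)
wide <- reshape(long, idvar = "id", timevar = "fact", direction = "wide")
wide  # one row per example; columns value.age and value.income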

Estimated reading time: 25 minutes