## Bounding Excess Generalization Error

I am sharing a new free video where I work through a classic argument that bounds expected excess generalization error as a ratio of model complexity (measured in rows) to training set size (also in rows), independent of problem dimension. (link) For more of my notes on support vector machines […]
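The shape of the bound can be sketched as follows. This is the classic support vector machine leave-one-out argument, which matches the "complexity in rows over training rows" description; it is offered here as an illustrative assumption, not necessarily the exact argument in the video:

```latex
% Leave-one-out style bound: the expected error of a model fit on n rows
% is controlled by the expected model complexity, measured in rows
% (here: number of support vectors), with no dependence on dimension.
\mathbb{E}\left[\operatorname{err}(f_n)\right]
  \;\le\;
  \frac{\mathbb{E}\left[\#\mathrm{SV}(f_{n+1})\right]}{n+1}
```

Note both numerator and denominator are row counts, which is why the ratio is dimension-free.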

## Happy Anniversary Practical Data Science with R 2nd Edition!

Our book, Practical Data Science with R, just had its first anniversary! The book is doing great; if you are working with R and data, I recommend you check it out. (link)

## The Purpose of our Data Science Chalk Talk Series

I’d like to share an introduction to my data science chalk talk series. (video link, series link)

## New Free Video Lecture: Estimating the Odds with Bayes’ Law

I am excited to share my new free video lecture: Estimating the Odds with Bayes’ Law. (link)

## A Single Parameter Family Characterizing Probability Model Performance

Introduction We’ve been writing about the score density shapes expected for probability models in ROC (receiver operating characteristic) plots, double density plots, and normal/logit-normal density frameworks. I thought I would re-approach the issue with a specific family of examples.
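One way such a single-parameter family can be sketched: take scores whose logits are unit-variance normals, with class-conditional means separated by a single parameter `d`. This construction, and all names below, are illustrative assumptions, not necessarily the post's exact family; under it, the AUC works out to Φ(d/√2), which the simulation checks empirically:

```python
import math
import random
from statistics import NormalDist


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def sample_scores(d, n, seed=2024):
    """Draw model scores for a one-parameter family:
    logit(score) ~ N(+d/2, 1) for positives, N(-d/2, 1) for negatives."""
    rng = random.Random(seed)
    pos = [sigmoid(rng.gauss(+d / 2.0, 1.0)) for _ in range(n)]
    neg = [sigmoid(rng.gauss(-d / 2.0, 1.0)) for _ in range(n)]
    return pos, neg


def empirical_auc(pos, neg):
    """Probability a random positive scores above a random negative."""
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))


d = 2.0
pos, neg = sample_scores(d, n=1000)
auc = empirical_auc(pos, neg)
# sigmoid is monotone, so AUC equals that of the underlying normals: Phi(d / sqrt(2))
theory = NormalDist().cdf(d / math.sqrt(2.0))
```

Varying `d` alone then sweeps out the whole family of model-performance shapes, from useless (`d = 0`) to near-perfect separation.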

## The Double Density Plot Contains a Lot of Useful Information

The double density plot contains a lot of useful information. This is a plot that shows the distribution of a continuous model score, conditioned on the binary categorical outcome to be predicted. As with most density plots, the y-axis is an abstract quantity called density, picked such that the area […]
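The construction can be sketched as below. This is a minimal histogram-based illustration on synthetic scores (the function name and example data are hypothetical; a real double density plot would feed these per-class densities to a plotting library):

```python
import random


def double_density(scores, labels, bins=20):
    """Histogram-based density of a model score, conditioned on a binary outcome.
    Returns, for each class, a list of (bin_left, density) pairs. Each class's
    density integrates to 1, so unequal class sizes do not distort the comparison."""
    lo, hi = min(scores), max(scores)
    width = (hi - lo) / bins
    out = {}
    for cls in (False, True):
        vals = [s for s, y in zip(scores, labels) if y == cls]
        counts = [0] * bins
        for s in vals:
            i = min(int((s - lo) / width), bins - 1)  # clamp the max score into the last bin
            counts[i] += 1
        out[cls] = [(lo + i * width, c / (len(vals) * width))
                    for i, c in enumerate(counts)]
    return out


# synthetic example: positives score high, negatives score low
rng = random.Random(5)
labels = [rng.random() < 0.5 for _ in range(4000)]
scores = [rng.gauss(0.7 if y else 0.3, 0.15) for y in labels]
dd = double_density(scores, labels)
```

Overlaying the two curves shows at a glance how well the score separates the classes and where any chosen threshold will land in each conditional distribution.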

## Your Lopsided Model is Out to Get You

For classification problems, I argue one of the biggest steps you can take to improve the quality and utility of your models is to prefer models that return scores or probabilities instead of classification rules. Doing this also opens a second large opportunity for improvement: working with your domain […]
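One concrete payoff of keeping probabilities can be sketched as follows. This is a minimal illustration (the function and the costs are hypothetical, not from the post): with domain costs in hand, the decision threshold follows from expected cost, rather than being frozen at 0.5 inside the model:

```python
def act_on_probability(p, cost_false_positive, cost_false_negative):
    """Turn a predicted probability into a decision using domain costs,
    instead of a fixed 0.5 classification rule.
    Act when the expected cost of missing exceeds the expected cost of acting:
        p * cost_false_negative >= (1 - p) * cost_false_positive."""
    threshold = cost_false_positive / (cost_false_positive + cost_false_negative)
    return p >= threshold


# a rare, expensive event: missing it costs 50x a false alarm,
# so even a 10% predicted probability is worth acting on
decision = act_on_probability(0.10, cost_false_positive=1.0, cost_false_negative=50.0)
```

A model that only returned a hard classification at the 0.5 threshold would have silently discarded this option.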

## The Shift and Balance Fallacies

Two related fallacies I see in machine learning practice are the shift and balance fallacies (for an earlier simple fallacy, please see here). They involve thinking logistic regression has a simpler structure than it actually does, and also thinking logistic regression is a bit less powerful than it actually […]

## Surgery on ROC Plots

This note is a little break from our model homotopy series. I have a neat example where one combines two classifiers to get a better classifier using a method I am calling “ROC surgery.” In ROC surgery we look at multiple ROC plots and decide we want to cut out […]
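The cut-and-combine idea can be sketched as below. This is a deliberately simplified illustration (the function name is hypothetical, both curves share one false-positive-rate grid, and whether the envelope is operationally achievable depends on how the two classifiers are actually combined): at each false positive rate, keep whichever classifier has the higher true positive rate, cutting out the dominated regions.

```python
def roc_surgery(roc_a, roc_b):
    """Combine two ROC curves, each a list of (fpr, tpr) points on the
    same fpr grid, by keeping the higher tpr at each fpr: the pointwise
    upper envelope of the two curves."""
    return [(fpr_a, max(tpr_a, tpr_b))
            for (fpr_a, tpr_a), (_, tpr_b) in zip(roc_a, roc_b)]


# classifier A is better at low false positive rates, B at high ones
roc_a = [(0.0, 0.0), (0.1, 0.6), (0.5, 0.7), (1.0, 1.0)]
roc_b = [(0.0, 0.0), (0.1, 0.3), (0.5, 0.9), (1.0, 1.0)]
combined = roc_surgery(roc_a, roc_b)
# -> [(0.0, 0.0), (0.1, 0.6), (0.5, 0.9), (1.0, 1.0)]
```

The combined curve dominates both inputs everywhere on the grid, which is the motivation for performing the "surgery" in the first place.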