Regularization is a way of avoiding overfitting by restricting the magnitude of model coefficients (or, in deep learning, node weights). A simple example of regularization is the use of ridge or lasso regression to fit linear models in the presence of collinear variables or (quasi-)separation. The intuition is that smaller […]
Estimated reading time: 14 minutes
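As a minimal sketch of the idea above, the closed-form ridge estimate adds a penalty term to the normal equations, which tames the unstable coefficients that ordinary least squares produces on collinear predictors. The data and the penalty value here are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + rng.normal(scale=0.1, size=n)     # outcome depends on the shared signal

def ridge_coefs(X, y, lam):
    """Closed-form ridge regression: solve (X'X + lam * I) beta = X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

ols_b = ridge_coefs(X, y, 0.0)    # lam = 0 recovers ordinary least squares
ridge_b = ridge_coefs(X, y, 1.0)  # lam > 0 shrinks the coefficient vector
```

With collinear columns the unpenalized solution can carry large, mutually cancelling coefficients; the penalized solution has a smaller norm while the coefficients still sum to roughly the true combined effect.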
[Reader’s Note. Some of our articles are applied and some are more theoretical. The following article is more theoretical, and requires fairly formal notation even to work through. However, it should be of interest, as it touches on some of the fine points of cross-validation that are […]
Estimated reading time: 3 minutes
I (Nina Zumel) will be speaking at the Women Who Code Silicon Valley meetup on Thursday, October 27. The talk is called Improving Prediction using Nested Models and Simulated Out-of-Sample Data. In this talk I will discuss nested predictive models: models that predict an outcome or dependent variable […]
Estimated reading time: 2 minutes