Let’s take a stab at our first note on a topic made much easier to write by our pre-established definitions of probability model homotopy. In this note we will discuss tailored probability models. These are models deliberately fit to training data that has an outcome prevalence equal to the expected […]
I am planning a new example-based series of articles using what I am calling probability model homotopy. This is notation I am introducing to slow down and clarify discussion of how probability models perform on different populations.
Introduction I’d like to talk about the Kolmogorov Axioms of Probability as another example of revisionist history in mathematics (another example here). What is commonly quoted as the Kolmogorov Axioms of Probability is, in my opinion, a less insightful formulation than what is found in the 1956 English translation of […]
What we’ve got here is failure to communicate Suppose I were to say: “any natural number can be written uniquely, up to order, as a, possibly empty, finite product of prime number(s).” This seems possibly correct, and possibly even careful. Though, one may have to look up the terms (such […]
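To make the quoted statement concrete, here is a small Python sketch (the function name is mine, not from the post): the "possibly empty" clause is what covers 1, whose factorization is the empty product, and exactly which numbers count as "natural" is one of the terms a careful reader may need to look up.

```python
def prime_factors(n):
    """Return the list of prime factors of n, with multiplicity,
    in nondecreasing order (empty for n == 1)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(12))  # [2, 2, 3]
print(prime_factors(1))   # []  (the empty product, which is 1)
```

Note the sketch quietly returns `[]` for `n == 0` as well, even though 0 has no product-of-primes representation, which is the sort of edge case careful statements have to guard against.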
I am finishing up a work-note that has some really neat implications as to why working with AUC is more powerful than one might think. I think I am far enough along to share the consequences here. This started as some now-reappraised thoughts on the fallacy of thinking knowing […]
I am working on a promising new series of notes: common data science fallacies and pitfalls. (Probably still looking for a good name for the series!) I thought I would share a few thoughts on it, and hopefully not jinx it too badly.
I’ve added a worked R example of the non-convexity, with respect to model parameters, of the square loss of a sigmoid-derived prediction here. This finishes an example for our Python note “Why not Square Error for Classification?”. Reading that note will give useful context and background for this diagram. […]
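The non-convexity can be seen without a plot. A minimal Python sketch (not the R example from the linked note; the example point values are mine): a convex function must satisfy f((a+b)/2) ≤ (f(a)+f(b))/2, and the squared error of a sigmoid prediction, viewed as a function of the parameter, violates this.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sq_loss(w, x=1.0, y=0.0):
    # squared error of a sigmoid prediction, as a function of the parameter w,
    # for a single example with input x and outcome y
    return (sigmoid(w * x) - y) ** 2

# midpoint convexity check at a = 1, b = 5
a, b = 1.0, 5.0
lhs = sq_loss((a + b) / 2)           # f(3)  ~ 0.907
rhs = (sq_loss(a) + sq_loss(b)) / 2  # mean of f(1), f(5)  ~ 0.761
print(lhs > rhs)  # True: midpoint convexity is violated, so f is not convex in w
```

(By contrast, the log loss of the same sigmoid prediction is convex in `w`, which is one standard motivation for preferring it in classification.)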
Win Vector LLC has been developing and delivering a lot of “statistics, machine learning, and data science for engineers” intensives in the past few years. These are bootcamps, or workshops, designed to help software engineers become more comfortable with machine learning and artificial intelligence tools. The current thinking is: not […]
One of my favorite mathematical anecdotes is the following story that Gian-Carlo Rota told about Solomon Lefschetz: He [Solomon Lefschetz] liked to repeat, as an example of mathematical pedantry, the story of one of E. H. Moore’s visits to Princeton, when Moore started a lecture by saying, “Let a be […]
I’d like some feedback on a possible article or series. I am thinking about writing and/or recording videos on the measure theoretic foundations of probability. The idea is: empirical probability (probabilities of coin flips, dice rolls, and finite sequences) is fairly well taught and approachable. However, theoretical probability (the type […]