This is a short note on what machine learning fitting actually does.
We usually teach:
A correct statistical or machine learning fitting procedure will, with high probability, correctly identify or infer a system that is close to the one actually producing our training examples.
For this to happen we need the actual system to be in our concept space, a lot of training data, and an abundance of caution.
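In symbols (my own gloss of the textbook claim, not a quote from any particular source): if the true system $f^{*}$ lies in our concept space $\mathcal{F}$, then a well-behaved fitting procedure's estimate $\hat{f}_n$, built from $n$ training examples, satisfies

\[
\Pr\!\left[\, d\big(\hat{f}_n, f^{*}\big) \le \epsilon \,\right] \;\to\; 1
\quad \text{as } n \to \infty, \text{ for every } \epsilon > 0,
\]

where $d(\cdot,\cdot)$ is whatever notion of "close to the system producing the examples" we care about.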
In practice, what we see more and more is that the training procedure in fact attacks the evaluation procedure. It doesn’t just improve the quality of the fitted artifact; through mere optimization it accidentally exploits weaknesses in the measurement system itself. When this happens, fitting does the following.
Machine learning fitting produces a model that is, to the training system, superficially indistinguishable from the behavior it was intended to imitate.
The more your evaluation environment fails to imitate the diversity of the larger real world, the more vulnerable your procedures are to this effect.
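To make the effect concrete, here is a minimal sketch (the names and sizes are mine, purely illustrative, not from this note): we "select" among many no-signal guessers by accuracy on a small evaluation set. The winner scores well above chance on that set, purely because selection pressure found and exploited the set's finiteness; on a large fresh sample it is back at chance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_eval, n_fresh, n_models = 50, 100_000, 1_000

# The true labels are fair coin flips: there is genuinely nothing to learn.
eval_y = rng.integers(0, 2, size=n_eval)
fresh_y = rng.integers(0, 2, size=n_fresh)

# Each "model" is a fixed random guesser, identified only by its seed.
def predict(seed, n):
    return np.random.default_rng(seed).integers(0, 2, size=n)

# "Training"/selection: keep whichever guesser scores best on the small eval set.
scores = [np.mean(predict(s, n_eval) == eval_y) for s in range(n_models)]
best = int(np.argmax(scores))

print(f"winner's accuracy on the {n_eval}-example eval set: {scores[best]:.2f}")
print(f"winner's accuracy on {n_fresh} fresh examples: "
      f"{np.mean(predict(best, n_fresh) == fresh_y):.2f}")
```

None of the candidates carries any information about the labels; the apparent skill is entirely an artifact of optimizing against a too-small evaluation set, which is the attack on the evaluation procedure described above.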
In my opinion, this is why we so often see intelligence attributed to chat systems. My favorite examples are ELIZA and GPT-3 (which are easier to reason about), not the current Google LaMDA silliness. I suspect in this instance we are not yet seeing early artificial general intelligence, but seeing what attempted science without pre-registered hypotheses or meaningful control experiments looks like. The instances where one sees something good, or even accidentally anthropomorphizes, have no counter-balance to allow meaningful evaluation. With no ability to evaluate the evaluation procedure, we have no evaluation procedure.
Or, to sum up (with a not-quite-on attempt at a Dr. Strangelove reference):
Fitting is the art of producing, in the mind of the enemy, the FEAR to reject your model!
Fitting is often not much more than that, unless one is careful and respectful of the actual domain one is working in.
Categories: Opinion
Agree, but ML in practice is often much worse than “indistinguishable from the behavior it was intended to imitate” because of the gap between the intended system and the aspirational system. When model teams don’t take care to craft the training data and the evaluation to reflect the aspirational goals of the models, then ML is no better than data fitting. The concept of ‘aspirational modeling’ is discussed in Brian Christian’s “The Alignment Problem”.
Thanks for the point and interesting reference.