Bias-free confusion-matrix metrics

Yes, this is a very helpful step-by-step guide: how to frame the question, how to aggregate, how often to score, how to sample, and how to engineer features and evaluate the model.

One suggestion I have is the following: we very commonly use precision and recall computed from confusion-matrix measurements. In the example you’ve provided, the probability of being evil is close to 1/2, so these metrics are fine, and most of our intuition for them comes from experience with such “near-half” probabilities. But these metrics are easy to game with trivial “algorithms” in strongly imbalanced real-world settings, where the probability of being evil, or the conversion probability, is close to 0 or 1 (p < 0.05 or p > 0.95): a classifier no better than random guessing can still post “good numbers.” So we should evaluate models with prevalence-independent metrics built from the same confusion-matrix counts. One good example is ln(Diagnostic Odds Ratio), where DOR = (TP · TN) / (FP · FN); it equals 0 for a random classifier. From a statistics standpoint, the (relative) odds ratio is a better way to compare probabilities that are very close to 0 or 1. From a decision-making standpoint, the odds ratio is proportional to the expected benefit-to-cost ratio without requiring us to know that ratio’s actual value.
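To make this concrete, here is a minimal sketch in Python with hypothetical counts (not taken from the original post). It shows precision and recall looking strong on imbalanced data while ln(DOR) stays near 0, exposing a random classifier. The Haldane–Anscombe +0.5 correction for zero cells is my assumption, one common convention among several.

```python
import math

def ln_dor(tp, fp, fn, tn):
    """ln(Diagnostic Odds Ratio) = ln((TP * TN) / (FP * FN)).

    ~0 for a random classifier, > 0 for an informative one,
    regardless of class prevalence.
    """
    # Haldane-Anscombe correction (my assumption, not from the
    # original comment): add 0.5 to each cell so zero counts
    # don't blow up the ratio.
    tp, fp, fn, tn = (c + 0.5 for c in (tp, fp, fn, tn))
    return math.log((tp * tn) / (fp * fn))

# Hypothetical imbalanced dataset: 10,000 cases, 95% positive.
# A coin-flip classifier that predicts "positive" 95% of the
# time, independent of the true label:
tp, fn = 9025, 475   # of 9,500 actual positives
fp, tn = 475, 25     # of 500 actual negatives

precision = tp / (tp + fp)   # 0.95 -- looks great
recall    = tp / (tp + fn)   # 0.95 -- looks great
print(f"precision={precision:.2f} recall={recall:.2f} "
      f"ln(DOR)={ln_dor(tp, fp, fn, tn):+.3f}")  # ln(DOR) ~ 0
```

Despite precision and recall of 0.95, ln(DOR) comes out near zero, which is exactly the signal that the classifier is doing no better than chance.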

