Do hyperparameter tuning and ML fairness mix?

This would be a great interview question

Nick Doiron
2 min read · Mar 21, 2021

I was skimming through Twitter when an unrelated post about keras-tuner got me thinking: should we use hyperparameter tuning for ML fairness? And why was my immediate reaction so negative?

I don’t intend to resolve it here, but I think arguments could be made for either side, which makes it a good ML interview question.

  1. Yes, because people have done this.
    An Amazon research paper (which won a best paper award in 2020!), an AutoML project, and others have happily explored this.
  2. Yes, because fairness is our goal.
    ML researchers use hyperparameter tuning to make their models more accurate. If we can define a fairness metric, optimizing for it should be no different from optimizing for overall accuracy (see the first sketch after this list).
  3. Yes, because fairness shouldn’t be an afterthought.
    If you leave fairness for a post-training process, you are saying that the original model is fundamentally true, and fairness is an ‘adjustment’.
  4. No, because hyperparameter tuning is not robust. People are working on this, but building your model around accuracy and fairness on one dataset could backfire.
  5. No, because hyperparameter tuning automates away the problem of fairness. You’ve decided the problem of training / evaluating / adjusting the model for fairness can be folded into one metric and one automated training process / pipeline. Fairness should be a human process.
  6. No, because hyperparameter tuning is about finding the core architecture of our model, and everything else can be figured out later. If fairness is hugely affected by your hyperparameters, then your model’s accuracy probably isn’t robust either.
  7. No, because fairness is all about the analysis and post-processing. Sort of the opposite of #3: we might detect that our model has a statistical bias and program it to accept different ranges of scores depending on the person (see the second sketch after this list). If we built this into the model itself, we could not quantify or adjust that value.
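
To make argument #2 concrete, here is a minimal sketch of what “optimizing for a fairness metric” could look like. Everything in it is a made-up illustration, not the method from the papers mentioned above: a binary classifier, a binary sensitive attribute, demographic-parity gap as the fairness metric, and a single combined objective (accuracy minus a weighted gap) driving the hyperparameter search. A keras-tuner version would follow the same idea by logging the gap and pointing a custom Objective at it.

```python
# Hypothetical sketch: fold a fairness metric into hyperparameter search.
# Assumes a binary classifier and a binary sensitive attribute `group`.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def demographic_parity_gap(y_pred, group):
    # Absolute difference in positive-prediction rates between the two groups.
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def tune(X, y, group, candidate_Cs=(0.01, 0.1, 1.0, 10.0), fairness_weight=1.0):
    X_tr, X_val, y_tr, y_val, g_tr, g_val = train_test_split(
        X, y, group, test_size=0.3, random_state=0)
    best = None
    for C in candidate_Cs:  # the hyperparameter being tuned
        model = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
        y_pred = model.predict(X_val)
        acc = (y_pred == y_val).mean()
        gap = demographic_parity_gap(y_pred, g_val)
        score = acc - fairness_weight * gap  # one combined objective
        if best is None or score > best[0]:
            best = (score, C, acc, gap)
    return best  # (combined score, chosen C, accuracy, fairness gap)
```

Note how much gets decided in that one `score` line, which is essentially argument #5.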
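And a second sketch for argument #7: leave the model alone and express the adjustment as per-group decision thresholds that you can print, audit, and change. Again this is hypothetical; matching acceptance rates via a quantile cutoff is just one simple rule.

```python
# Hypothetical sketch: fairness as an explicit post-processing step.
# `scores` are the model's predicted probabilities; `group` is the
# sensitive attribute. Pick each group's threshold so roughly the same
# fraction of each group is accepted.
import numpy as np

def per_group_thresholds(scores, group, target_positive_rate=0.5):
    thresholds = {}
    for g in np.unique(group):
        group_scores = scores[group == g]
        # The (1 - target) quantile is a cutoff above which roughly
        # `target_positive_rate` of this group's scores fall.
        thresholds[g] = np.quantile(group_scores, 1 - target_positive_rate)
    return thresholds

def adjusted_decisions(scores, group, thresholds):
    return np.array([s >= thresholds[g] for s, g in zip(scores, group)])
```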
