Modelwire

FairTree: Subgroup Fairness Auditing of Machine Learning Models with Bias-Variance Decomposition


FairTree, a new fairness-auditing algorithm, detects performance disparities across subgroups of an ML model's evaluation data without requiring feature discretization. It decomposes each disparity into bias and variance components, addressing a limitation of prior tools such as SliceFinder, which struggle with continuous features.
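To make the decomposition concrete, here is a minimal sketch in the spirit of the paper's idea (not FairTree's actual API, which the summary does not show). Assuming an ensemble of bootstrap-trained regressors and squared loss, each subgroup's expected error splits into a squared-bias term and a variance term; comparing these across subgroups shows whether a disparity stems from systematic error or from model instability. The subgroup labels and numbers below are illustrative.

```python
# Hedged sketch: bias-variance decomposition of per-subgroup error,
# in the spirit of (not the actual implementation of) FairTree.
from statistics import mean

def decompose(preds_by_model, y_true):
    """For one subgroup, under squared loss: return (bias^2, variance),
    each averaged over examples, from an ensemble of model predictions.

    preds_by_model: list of prediction lists, one per bootstrap model.
    y_true: true targets for the subgroup's examples.
    """
    n = len(y_true)
    bias2 = var = 0.0
    for i in range(n):
        preds = [p[i] for p in preds_by_model]
        m = mean(preds)                      # ensemble-average prediction
        bias2 += (m - y_true[i]) ** 2        # systematic error
        var += mean((p - m) ** 2 for p in preds)  # spread across models
    return bias2 / n, var / n

# Two hypothetical subgroups, three bootstrap models each.
groups = {
    "A": ([[1.0, 2.0], [1.2, 2.2], [0.8, 1.8]], [1.0, 2.0]),
    "B": ([[0.0, 3.0], [1.0, 1.0], [2.0, 2.0]], [1.5, 2.5]),
}
for g, (preds, y) in groups.items():
    b2, v = decompose(preds, y)
    print(g, round(b2, 3), round(v, 3))
```

In this toy run, subgroup A's error is almost entirely variance (the models disagree but are centered on the truth), while subgroup B shows both bias and variance, the kind of distinction the decomposition is meant to surface.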

Mentions: FairTree · SliceFinder · SliceLine

Modelwire summarizes — we don’t republish. The full article lives on arxiv.org. If you’re a publisher and want a different summarization policy for your work, see our takedown page.
