In recent years, the use of machine learning algorithms has grown steadily across many domains. One of their key applications is automated decision-making. The problem is that these decisions can have a major impact on individuals, since decision-making algorithms are used in a wide range of areas, from criminal justice and education to banking, public health and social services.
Whether the conclusions these algorithms reach are actually fair is a growing concern in this field of engineering, since for numerous reasons they can be biased or unfair. Multiple definitions and metrics for both fairness and bias have been proposed, but no consensus has been reached yet, so evaluating these systems is a complicated matter.
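To make the idea of a fairness metric concrete, here is a minimal sketch of one widely used definition, statistical parity difference: the gap in positive-prediction rates between two demographic groups. The data and the 0-vs-1 encoding below are illustrative assumptions, not taken from the tool described in this work.

```python
def positive_rate(predictions, groups, group):
    """Fraction of positive (favourable) predictions within one group."""
    in_group = [p for p, g in zip(predictions, groups) if g == group]
    return sum(in_group) / len(in_group)

def statistical_parity_difference(predictions, groups, a, b):
    """Positive-rate gap between groups a and b; 0 means parity."""
    return positive_rate(predictions, groups, a) - positive_rate(predictions, groups, b)

# Synthetic predictions (1 = favourable outcome) and group labels
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

spd = statistical_parity_difference(preds, groups, "A", "B")
print(spd)  # 0.75 - 0.25 = 0.5
```

Other proposed metrics (equalized odds, predictive parity, and so on) follow the same pattern but condition on the true label as well, which is one reason the different definitions can conflict with each other.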
The main purpose of this work is to develop a tool that groups different metrics and evaluation methods to assess the fairness of machine learning algorithms in a simple and understandable way.
To achieve this, the tool draws on four libraries (Aequitas, LIME, FairML and ThemisML), which combined give the user an accurate understanding of how the audited algorithm or dataset behaves and whether it should behave that way, thus helping to determine whether the program's behaviour is actually unfair.
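As a hypothetical sketch of how findings from several audit libraries could be merged into one report, consider the following. The function name, the per-library result dictionaries, and the 0.1 flagging threshold are all illustrative assumptions; they are not the actual APIs of Aequitas, LIME, FairML or ThemisML, nor the tool's real implementation.

```python
def combine_audit_results(results):
    """Merge per-library findings into a single summary dict.

    `results` maps a library name to a dict of metric -> value.
    A metric is flagged when its absolute value exceeds a
    purely illustrative threshold of 0.1.
    """
    summary = {"flagged": [], "metrics": {}}
    for library, metrics in results.items():
        for name, value in metrics.items():
            key = f"{library}.{name}"
            summary["metrics"][key] = value
            if abs(value) > 0.1:
                summary["flagged"].append(key)
    return summary

# Illustrative inputs: one disparity metric per library
report = combine_audit_results({
    "aequitas": {"statistical_parity_difference": 0.32},
    "themis_ml": {"mean_difference": 0.05},
})
print(report["flagged"])  # ['aequitas.statistical_parity_difference']
```

The value of aggregating like this is that a single number rarely settles the question; seeing group-level metrics and per-prediction explanations side by side is what lets the auditor judge whether the program should work the way it does.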
This tool can be consumed in two different ways: first, a Jupyter Notebook where the user can change and modify values, aimed at people with technical skills; and second, a web platform with a user-friendly interface where everything is explained clearly, aimed at anyone who wants to use the tool.