allennlp.fairness.fairness_metrics

Fairness metrics are based on:

  1. Barocas, S.; Hardt, M.; and Narayanan, A. 2019. Fairness and machine learning.

  2. Zhang, B. H.; Lemoine, B.; and Mitchell, M. 2018. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 335-340.

  3. Hardt, M.; Price, E.; Srebro, N.; et al. 2016. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, 3315–3323.

  4. Beutel, A.; Chen, J.; Zhao, Z.; and Chi, E. H. 2017. Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075.

It is provably impossible (Barocas et al. 2019, pg. 18) to satisfy any two of Independence, Separation, and Sufficiency simultaneously, except in degenerate cases.

Independence

@Metric.register("independence")
class Independence(Metric):
 | def __init__(
 |     self,
 |     num_classes: int,
 |     num_protected_variable_labels: int,
 |     dist_metric: str = "kl_divergence"
 | ) -> None

Independence (pg. 9) measures the statistical independence of the protected variable from predictions. It has been explored through many equivalent terms or variants, such as demographic parity, statistical parity, group fairness, and disparate impact.

Parameters

  • num_classes : int
    Number of classes.
  • num_protected_variable_labels : int
    Number of protected variable labels.
  • dist_metric : str
    Distance metric (kl_divergence, wasserstein) for calculating the distance between the distribution over predicted labels and the distribution over predicted labels given a sensitive attribute.

Note

Assumes integer labels, with each item to be classified having a single correct class.
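
To make the `dist_metric` choice concrete: KL divergence is undefined (infinite or NaN) when one distribution puts mass where the other has none, while the Wasserstein distance stays finite for arbitrary supports. Below is an illustrative, from-scratch sketch (the function name is not part of the AllenNLP API) of one common discrete 1-D formulation of the Wasserstein distance over the ordered label set: the sum of absolute differences between the two CDFs.

```python
def wasserstein_1d(p, q, num_classes):
    # 1-D earth mover's distance between two categorical distributions
    # p and q (dicts mapping label -> probability) over the ordered
    # label set {0, ..., num_classes - 1}: accumulate the absolute
    # difference between the two CDFs at each label.
    distance, cdf_p, cdf_q = 0.0, 0.0, 0.0
    for c in range(num_classes):
        cdf_p += p.get(c, 0.0)
        cdf_q += q.get(c, 0.0)
        distance += abs(cdf_p - cdf_q)
    return distance

# Disjoint supports: KL would be infinite, but the Wasserstein
# distance is simply how far the mass must move (2 label steps here).
print(wasserstein_1d({0: 1.0}, {2: 1.0}, num_classes=3))
```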

__call__

class Independence(Metric):
 | ...
 | def __call__(
 |     self,
 |     predicted_labels: torch.Tensor,
 |     protected_variable_labels: torch.Tensor,
 |     mask: Optional[torch.BoolTensor] = None
 | ) -> None

Parameters

  • predicted_labels : torch.Tensor
    A tensor of predicted integer class labels of shape (batch_size, ...). Represented as C.
  • protected_variable_labels : torch.Tensor
    A tensor of integer protected variable labels of shape (batch_size, ...). It must be the same shape as the predicted_labels tensor. Represented as A.
  • mask : torch.BoolTensor, optional (default = None)
    A boolean tensor of the same shape as predicted_labels, indicating which entries to include in the metric.

Note

All tensors are expected to be on the same device.

get_metric

class Independence(Metric):
 | ...
 | def get_metric(
 |     self,
 |     reset: bool = False
 | ) -> Dict[int, torch.FloatTensor]

Returns

  • distances : Dict[int, torch.FloatTensor]
    A dictionary mapping each protected variable label a to the KL divergence or Wasserstein distance of P(C | A = a) from P(C). A distance of nearly 0 implies fairness on the basis of Independence.
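
The quantity this metric tracks can be sketched from scratch in plain Python (all function names here are illustrative, not the AllenNLP `Metric` API): estimate P(C) and each P(C | A = a) from label counts, then return the per-group KL divergences.

```python
import math
from collections import Counter

def distribution(labels, num_classes):
    # Empirical distribution over the label set {0, ..., num_classes - 1}.
    counts = Counter(labels)
    return {c: counts[c] / len(labels) for c in range(num_classes)}

def kl_divergence(p, q):
    # KL(p || q), skipping zero-probability terms of p.
    return sum(pi * math.log(pi / q[c]) for c, pi in p.items() if pi > 0)

def independence_distances(predicted, protected, num_classes):
    # distances[a] = KL( P(C | A = a) || P(C) )
    p_c = distribution(predicted, num_classes)
    distances = {}
    for a in sorted(set(protected)):
        preds_a = [c for c, a_i in zip(predicted, protected) if a_i == a]
        distances[a] = kl_divergence(distribution(preds_a, num_classes), p_c)
    return distances

# Predictions perfectly correlated with the protected variable:
print(independence_distances([0, 0, 1, 1], [0, 0, 1, 1], num_classes=2))
# Predictions identically distributed across groups -> distances of 0:
print(independence_distances([0, 1, 0, 1], [0, 0, 1, 1], num_classes=2))
```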

reset

class Independence(Metric):
 | ...
 | def reset(self) -> None

Separation

@Metric.register("separation")
class Separation(Metric):
 | def __init__(
 |     self,
 |     num_classes: int,
 |     num_protected_variable_labels: int,
 |     dist_metric: str = "kl_divergence"
 | ) -> None

Separation (pg. 12) allows correlation between the predictions and the protected variable to the extent that it is justified by the gold labels.

Parameters

  • num_classes : int
    Number of classes.
  • num_protected_variable_labels : int
    Number of protected variable labels.
  • dist_metric : str
    Distance metric (kl_divergence, wasserstein) for calculating the distance of the distribution over predicted labels given a gold label and a sensitive attribute from the distribution over predicted labels given only the gold label. If the two distributions do not have equal support, use the Wasserstein distance.

Note

Assumes integer labels, with each item to be classified having a single correct class.

__call__

class Separation(Metric):
 | ...
 | def __call__(
 |     self,
 |     predicted_labels: torch.Tensor,
 |     gold_labels: torch.Tensor,
 |     protected_variable_labels: torch.Tensor,
 |     mask: Optional[torch.BoolTensor] = None
 | ) -> None

Parameters

  • predicted_labels : torch.Tensor
    A tensor of predicted integer class labels of shape (batch_size, ...). Represented as C.
  • gold_labels : torch.Tensor
    A tensor of ground-truth integer class labels of shape (batch_size, ...). It must be the same shape as the predicted_labels tensor. Represented as Y.
  • protected_variable_labels : torch.Tensor
    A tensor of integer protected variable labels of shape (batch_size, ...). It must be the same shape as the predicted_labels tensor. Represented as A.
  • mask : torch.BoolTensor, optional (default = None)
    A boolean tensor of the same shape as predicted_labels, indicating which entries to include in the metric.

Note

All tensors are expected to be on the same device.

get_metric

class Separation(Metric):
 | ...
 | def get_metric(
 |     self,
 |     reset: bool = False
 | ) -> Dict[int, Dict[int, torch.FloatTensor]]

Returns

  • distances : Dict[int, Dict[int, torch.FloatTensor]]
    A dictionary mapping each class label y to a dictionary mapping each protected variable label a to the KL divergence or Wasserstein distance of P(C | A = a, Y = y) from P(C | Y = y). A distance of nearly 0 implies fairness on the basis of Separation.
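
As with Independence, the quantity being tracked can be sketched from scratch in plain Python (function names are illustrative, not the AllenNLP API): condition first on each gold label y, then compare each group's conditional prediction distribution against the group-agnostic one.

```python
import math
from collections import Counter

def distribution(labels, num_classes):
    # Empirical distribution over the label set {0, ..., num_classes - 1}.
    counts = Counter(labels)
    return {c: counts[c] / len(labels) for c in range(num_classes)}

def kl_divergence(p, q):
    # KL(p || q), skipping zero-probability terms of p.
    return sum(pi * math.log(pi / q[c]) for c, pi in p.items() if pi > 0)

def separation_distances(predicted, gold, protected, num_classes):
    # distances[y][a] = KL( P(C | A = a, Y = y) || P(C | Y = y) )
    distances = {}
    for y in sorted(set(gold)):
        preds_y = [c for c, y_i in zip(predicted, gold) if y_i == y]
        p_c_given_y = distribution(preds_y, num_classes)
        distances[y] = {}
        for a in sorted(set(protected)):
            preds_ya = [c for c, y_i, a_i in zip(predicted, gold, protected)
                        if y_i == y and a_i == a]
            if not preds_ya:
                # No examples for this (y, a) pair; mirror the NaN behavior
                # described in the Note below.
                distances[y][a] = float("nan")
                continue
            distances[y][a] = kl_divergence(
                distribution(preds_ya, num_classes), p_c_given_y
            )
    return distances

# Group a=1 is always predicted 1 when Y=1, group a=0 only half the time:
print(separation_distances([1, 0, 1, 1], [1, 1, 1, 1], [0, 0, 1, 1],
                           num_classes=2))
```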

Note

If a class label is not present in Y conditioned on a protected variable label, the expected behavior is that the KL divergence corresponding to this (class label, protected variable label) pair is NaN. You can avoid this by using Wasserstein distance instead.

reset

class Separation(Metric):
 | ...
 | def reset(self) -> None

Sufficiency

@Metric.register("sufficiency")
class Sufficiency(Metric):
 | def __init__(
 |     self,
 |     num_classes: int,
 |     num_protected_variable_labels: int,
 |     dist_metric: str = "kl_divergence"
 | ) -> None

Sufficiency (pg. 14) is satisfied when the gold labels are independent of the protected variable, conditional on the predictions; that is, the predictions already carry all the information about the gold labels that the protected variable could provide.

Parameters

  • num_classes : int
    Number of classes.
  • num_protected_variable_labels : int
    Number of protected variable labels.
  • dist_metric : str
    Distance metric (kl_divergence, wasserstein) for calculating the distance of the distribution over gold labels given a predicted label and a sensitive attribute from the distribution over gold labels given only the predicted label. If the two distributions do not have equal support, use the Wasserstein distance.

Note

Assumes integer labels, with each item to be classified having a single correct class.

__call__

class Sufficiency(Metric):
 | ...
 | def __call__(
 |     self,
 |     predicted_labels: torch.Tensor,
 |     gold_labels: torch.Tensor,
 |     protected_variable_labels: torch.Tensor,
 |     mask: Optional[torch.BoolTensor] = None
 | ) -> None

Parameters

  • predicted_labels : torch.Tensor
    A tensor of predicted integer class labels of shape (batch_size, ...). Represented as C.
  • gold_labels : torch.Tensor
    A tensor of ground-truth integer class labels of shape (batch_size, ...). It must be the same shape as the predicted_labels tensor. Represented as Y.
  • protected_variable_labels : torch.Tensor
    A tensor of integer protected variable labels of shape (batch_size, ...). It must be the same shape as the predicted_labels tensor. Represented as A.
  • mask : torch.BoolTensor, optional (default = None)
    A boolean tensor of the same shape as predicted_labels, indicating which entries to include in the metric.

Note

All tensors are expected to be on the same device.

get_metric

class Sufficiency(Metric):
 | ...
 | def get_metric(
 |     self,
 |     reset: bool = False
 | ) -> Dict[int, Dict[int, torch.FloatTensor]]

Returns

  • distances : Dict[int, Dict[int, torch.FloatTensor]]
    A dictionary mapping each class label c to a dictionary mapping each protected variable label a to the KL divergence or Wasserstein distance of P(Y | A = a, C = c) from P(Y | C = c). A distance of nearly 0 implies fairness on the basis of Sufficiency.
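
Sufficiency mirrors Separation with the roles of predictions and gold labels swapped. A from-scratch sketch in plain Python (function names are illustrative, not the AllenNLP API): condition on each predicted label c, then compare each group's conditional gold-label distribution against the group-agnostic one.

```python
import math
from collections import Counter

def distribution(labels, num_classes):
    # Empirical distribution over the label set {0, ..., num_classes - 1}.
    counts = Counter(labels)
    return {c: counts[c] / len(labels) for c in range(num_classes)}

def kl_divergence(p, q):
    # KL(p || q), skipping zero-probability terms of p.
    return sum(pi * math.log(pi / q[c]) for c, pi in p.items() if pi > 0)

def sufficiency_distances(predicted, gold, protected, num_classes):
    # distances[c][a] = KL( P(Y | A = a, C = c) || P(Y | C = c) )
    distances = {}
    for c in sorted(set(predicted)):
        golds_c = [y for y, c_i in zip(gold, predicted) if c_i == c]
        p_y_given_c = distribution(golds_c, num_classes)
        distances[c] = {}
        for a in sorted(set(protected)):
            golds_ca = [y for y, c_i, a_i in zip(gold, predicted, protected)
                        if c_i == c and a_i == a]
            if not golds_ca:
                # No examples for this (c, a) pair; mirror the NaN behavior
                # described in the Note above.
                distances[c][a] = float("nan")
                continue
            distances[c][a] = kl_divergence(
                distribution(golds_ca, num_classes), p_y_given_c
            )
    return distances

# Prediction 1 is correct for all of group a=1 but only half of group a=0:
print(sufficiency_distances([1, 1, 1, 1], [1, 0, 1, 1], [0, 0, 1, 1],
                            num_classes=2))
```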

Note

If a possible class label is not present in C, the expected behavior is that the KL divergences corresponding to this class label are NaN. If a possible class label is not present in C conditioned on a protected variable label, the expected behavior is that the KL divergence corresponding to this (class label, protected variable label) pair is NaN. You can avoid this by using Wasserstein distance instead.

reset

class Sufficiency(Metric):
 | ...
 | def reset(self) -> None