Diké
Collaborative project

Bias, fairness and ethics of compressed language models

Funded by ANR
(AAPG 2021, 2022-2025)

Rationale


Diké (named after the Greek goddess of justice) is a research project funded by the French National Research Agency (ANR) that studies the effects of model compression on bias, fairness and ethics in natural language processing (NLP).

Research objectives

  • Objective 1. To create bilingual datasets in English and French for bias, fairness and ethics evaluation in language models.
  • Objective 2. To devise evaluation metrics and to perform evaluation campaigns on compression techniques.
  • Objective 3. To propose new neural architectures for less biased, fairer and more ethical compression techniques.

Context

Large-scale models like BERT and GPT-3, while widely accessible, pose scalability challenges due to their immense size, ranging from 110 million to 175 billion parameters. Recent compression techniques aim to improve their usability through methods such as the following (sketched in code below):

  • Pruning, which removes weights identified as less relevant in a trained model.
  • Weight quantization, which stores each parameter of the model in a lower number of bits.
  • Distillation, which trains a smaller “student” model to mimic the predictions of a larger pre-trained “teacher” model, making the smaller model easier to train.
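
The sketch below illustrates these three techniques in plain PyTorch on a toy classifier. The model, layer sizes, sparsity level and temperature are illustrative assumptions, not the project's actual pipeline or a configuration used for BERT or GPT-3.

    # Minimal sketch of the three compression techniques above, applied to a
    # toy model rather than to an actual pre-trained language model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.nn.utils.prune as prune

    # Toy "teacher" standing in for a large trained model.
    teacher = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 2))

    # 1) Pruning: zero out the 90% of weights with the smallest magnitude,
    #    i.e. those identified as less relevant for the trained model.
    for module in teacher.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.9)
            prune.remove(module, "weight")  # make the pruning permanent

    # 2) Weight quantization: store each parameter in fewer bits
    #    (here, dynamic 8-bit quantization of the linear layers).
    quantized_teacher = torch.quantization.quantize_dynamic(
        teacher, {nn.Linear}, dtype=torch.qint8
    )

    # 3) Distillation: train a smaller "student" to mimic the teacher's
    #    predictions through a temperature-softened KL-divergence loss.
    student = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    temperature = 2.0

    inputs = torch.randn(64, 128)  # stand-in for a batch of encoded text
    with torch.no_grad():
        teacher_logits = teacher(inputs)

    optimizer.zero_grad()
    student_logits = student(inputs)
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    loss.backward()
    optimizer.step()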

In this scientific context, the Diké project builds on a key observation: existing compression techniques focus only on preserving model accuracy on a given task. But there is no such thing as a free lunch: if most of a model's weights can be removed (sometimes 99%!) while its accuracy stays the same or even improves, we may be paying instead in model bias, fairness or ethics without knowing it. We study the hypothesis that what compression discards may be related to such harmful side effects.
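
As a hypothetical illustration of how this hypothesis could be tested, the sketch below compares a simple fairness indicator, the accuracy gap between two demographic groups, before and after heavy magnitude pruning. The model, data and group labels are synthetic placeholders, and this metric is only one of many possible choices; it is not the project's evaluation protocol.

    # Hypothetical sketch: does a fairness indicator degrade after compression?
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    def group_accuracy_gap(model, inputs, labels, groups):
        """Absolute accuracy difference between group 0 and group 1."""
        with torch.no_grad():
            preds = model(inputs).argmax(dim=-1)
        correct = (preds == labels).float()
        return (correct[groups == 0].mean() - correct[groups == 1].mean()).abs().item()

    # Synthetic stand-ins for a trained classifier and an evaluation set
    # annotated with a binary demographic attribute.
    model = nn.Linear(16, 2)
    x = torch.randn(200, 16)
    y = torch.randint(0, 2, (200,))
    g = torch.randint(0, 2, (200,))

    gap_before = group_accuracy_gap(model, x, y, g)
    prune.l1_unstructured(model, name="weight", amount=0.99)  # remove 99% of weights
    gap_after = group_accuracy_gap(model, x, y, g)

    # A gap that widens after compression, at comparable overall accuracy,
    # would be evidence for the hypothesis stated above.
    print(f"accuracy gap before pruning: {gap_before:.3f}, after: {gap_after:.3f}")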

References

  • Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and Policy Considerations for Deep Learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, Florence, Italy. Association for Computational Linguistics.
  • Manish Gupta and Puneet Agrawal. 2022. Compression of Deep Learning Models for Text: A Survey. ACM Transactions on Knowledge Discovery from Data (TKDD), 16(4):1–55.
  • Nadezhda Chirkova, Ekaterina Lobacheva, and Dmitry Vetrov. 2018. Bayesian Compression for Natural Language Processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2910–2915, Brussels, Belgium. Association for Computational Linguistics.
  • Jonathan Frankle and Michael Carbin. 2019. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. In International Conference on Learning Representations.
  • Sara Hooker et al. 2019. What Do Compressed Deep Neural Networks Forget? arXiv preprint arXiv:1911.05248.
  • Sara Hooker et al. 2020. Characterising Bias in Compressed Models. arXiv preprint arXiv:2010.03058.