oxfordinternetinstitute / oxonfair
Fairness toolkit for PyTorch, scikit-learn, and AutoGluon
☆32 · Updated 9 months ago
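Since the toolkit targets standard scikit-learn-style workflows, the sketch below illustrates the kind of group-fairness quantity such a toolkit measures and enforces: the demographic parity difference between two groups. It is a minimal example on synthetic data and deliberately does not guess at oxonfair's own API; every name and number here is made up for illustration, and only NumPy and scikit-learn calls are used.

```python
# Minimal sketch (NOT oxonfair's API): measure a demographic parity gap
# for a plain scikit-learn classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)                      # synthetic binary protected attribute
X = rng.normal(size=(n, 5)) + group[:, None] * 0.5      # features mildly correlated with group
y = (X[:, 0] + 0.3 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Demographic parity difference: gap in positive-prediction (selection) rates by group.
rate_0 = pred[g_te == 0].mean()
rate_1 = pred[g_te == 1].mean()
print(f"selection rate, group 0: {rate_0:.3f}")
print(f"selection rate, group 1: {rate_1:.3f}")
print(f"demographic parity difference: {abs(rate_0 - rate_1):.3f}")
```

A fairness toolkit typically goes further than this measurement step, for example by searching for per-group decision thresholds that shrink such a gap subject to an accuracy constraint.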
Alternatives and similar repositories for oxonfair
Users interested in oxonfair are comparing it to the libraries listed below.
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated last year
- Datasets derived from US census data ☆268 · Updated last year
- PAIR.withgoogle.com and friend's work on interpretability methods ☆202 · Updated last week
- Code to reproduce data for Bias in Bios ☆47 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆84 · Updated 2 years ago
- [NeurIPS 2021] WRENCH: Weak supeRvision bENCHmark ☆224 · Updated last year
- Achieve error-rate fairness between societal groups for any score-based classifier. ☆19 · Updated 3 weeks ago
- StereoSet: Measuring stereotypical bias in pretrained language models ☆190 · Updated 2 years ago
- TalkToModel gives anyone the power of XAI through natural language conversations 💬! ☆124 · Updated 2 years ago
- quica is a tool to run inter-coder agreement pipelines in an easy and effective way. Multiple measures are run and results are collected… ☆23 · Updated 4 years ago
- A curated list of programmatic weak supervision papers and resources ☆190 · Updated 2 years ago
- Evaluate uncertainty, calibration, accuracy, and fairness of LLMs on real-world survey data! ☆24 · Updated 5 months ago
- Conformal Language Modeling ☆32 · Updated last year
- A curated list of awesome datasets with human label variation (un-aggregated labels) in Natural Language Processing and Computer Vision, … ☆93 · Updated last year
- Repository for research in the field of Responsible NLP at Meta. ☆202 · Updated 4 months ago
- ☆40 · Updated 6 years ago
- Comparing fairness-aware machine learning techniques. ☆159 · Updated 2 years ago
- Materials for EACL 2024 tutorial: Transformer-specific Interpretability ☆60 · Updated last year
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆70 · Updated 2 years ago
- ☆90 · Updated 3 years ago
- A Python package for benchmarking interpretability techniques on Transformers. ☆214 · Updated 11 months ago
- Papers on fairness in NLP ☆448 · Updated last year
- Learning Gender-Neutral Word Embeddings ☆47 · Updated 5 years ago
- A Python library that encapsulates various methods for neuron interpretation and analysis in Deep NLP models. ☆105 · Updated last year
- Aligning AI With Shared Human Values (ICLR 2021) ☆297 · Updated 2 years ago
- A Diagnostic Study of Explainability Techniques for Text Classification ☆68 · Updated 4 years ago
- Testing Language Models for Memorization of Tabular Datasets. ☆35 · Updated 7 months ago
- Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰 ☆96 · Updated last year
- ☆26 · Updated 2 years ago
- A Python package to compute HONEST, a score to measure hurtful sentence completions in language models. Published at NAACL 2021. ☆20 · Updated 5 months ago