vanbanTruong / Fairness-in-Large-Language-Models
Fairness in LLMs resources
☆31 · Updated last week
Alternatives and similar repositories for Fairness-in-Large-Language-Models
Users interested in Fairness-in-Large-Language-Models are comparing it to the repositories listed below.
- This is the repo for the survey of Bias and Fairness in IR with LLMs. ☆55 · Updated 4 months ago
- A resource repository for machine unlearning in large language models ☆473 · Updated last month
- LLM Unlearning ☆174 · Updated last year
- Awesome SAE (sparse autoencoder) papers ☆43 · Updated 3 months ago
- Official repository of "Can Language Models Solve Graph Problems in Natural Language?", NeurIPS 2023 (Spotlight) ☆136 · Updated last year
- A resource repository for representation engineering in large language models ☆131 · Updated 9 months ago
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages, https://arxiv.org/abs/2310.19156 ☆36 · Updated last year
- Code for the paper "Are Large Language Models Post Hoc Explainers?" ☆33 · Updated last year
- [ACL'25 Main] SelfElicit: Your Language Model Secretly Knows Where is the Relevant Evidence! | Help your LLM make better use of context documents: a simple attention-based approach ☆22 · Updated 6 months ago
- A survey of privacy problems in Large Language Models (LLMs). Contains summaries of the corresponding papers along with relevant code ☆67 · Updated last year
- TrustAgent: Towards Safe and Trustworthy LLM-based Agents ☆51 · Updated 6 months ago
- Accepted LLM papers at NeurIPS 2024 ☆37 · Updated 10 months ago
- A curated list of resources for activation engineering ☆101 · Updated 3 months ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆53 · Updated 11 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆133 · Updated 11 months ago
- A survey on harmful fine-tuning attacks against large language models ☆205 · Updated this week
- [CIKM 2024] Retrieval-enhanced Knowledge Editing in Language Models for Multi-Hop Question Answering ☆35 · Updated 7 months ago
- A toolkit to assess data privacy in LLMs (under development) ☆62 · Updated 7 months ago