vanbanTruong / Fairness-in-Large-Language-Models
Fairness in LLMs resources
☆35 · Updated last month
Alternatives and similar repositories for Fairness-in-Large-Language-Models
Users interested in Fairness-in-Large-Language-Models are comparing it to the repositories listed below.
- A curated list of resources for activation engineering ☆106 · Updated 2 weeks ago
- A resource repository for representation engineering in large language models ☆138 · Updated 11 months ago
- ☆154 · Updated last year
- ☆30 · Updated 4 months ago
- This is the repo for the survey of Bias and Fairness in IR with LLMs. ☆57 · Updated last month
- A resource repository for machine unlearning in large language models ☆495 · Updated 2 months ago
- Awesome SAE papers ☆48 · Updated 4 months ago
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples ☆44 · Updated 3 months ago
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆133 · Updated last year
- Accepted LLM Papers in NeurIPS 2024 ☆37 · Updated last year
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages https://arxiv.org/abs/2310.19156 ☆39 · Updated last year
- ☆38 · Updated last year
- Revolve: Optimizing AI Systems by Tracking Response Evolution in Textual Optimization ☆20 · Updated 10 months ago
- [ACL'25 Main] SelfElicit: Your Language Model Secretly Knows Where is the Relevant Evidence! | Help your LLM make better use of context documents: a simple attention-based approach ☆23 · Updated 8 months ago
- LLM Unlearning ☆175 · Updated last year
- ☆28 · Updated last year
- Toolkit for evaluating the trustworthiness of generative foundation models. ☆119 · Updated last month
- ☆152 · Updated 2 years ago
- [EMNLP 2024] "Revisiting Who's Harry Potter: Towards Targeted Unlearning from a Causal Intervention Perspective" ☆31 · Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆57 · Updated last year
- A curated list of Awesome-LLM-Ensemble papers for the survey "Harnessing Multiple Large Language Models: A Survey on LLM Ensemble" ☆144 · Updated last week
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆81 · Updated 7 months ago
- Can Knowledge Editing Really Correct Hallucinations? (ICLR 2025) ☆25 · Updated 2 months ago
- Code for the paper "Are Large Language Models Post Hoc Explainers?" ☆34 · Updated last year
- ☆21 · Updated last year
- ☆32 · Updated this week
- ☆82 · Updated 3 months ago
- Papers and online resources related to machine learning fairness ☆73 · Updated 2 years ago
- ☆179 · Updated last year
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆37 · Updated 3 months ago