vanbanTruong / Fairness-in-Large-Language-Models
Fairness in LLMs resources
☆32 · Updated last month
Alternatives and similar repositories for Fairness-in-Large-Language-Models
Users interested in Fairness-in-Large-Language-Models are comparing it to the repositories listed below.
- This is the repo for the survey of Bias and Fairness in IR with LLMs. ☆56 · Updated 2 weeks ago
- ☆148 · Updated 2 years ago
- Code for the paper "Are Large Language Models Post Hoc Explainers?" ☆33 · Updated last year
- A resource repository for representation engineering in large language models ☆136 · Updated 10 months ago
- ☆29 · Updated 3 months ago
- Awesome SAE papers ☆45 · Updated 3 months ago
- LLM Unlearning ☆174 · Updated last year
- ☆38 · Updated last year
- The dataset and code for the ICLR 2024 paper "Can LLM-Generated Misinformation Be Detected?" ☆76 · Updated 10 months ago
- ☆76 · Updated 2 months ago
- A curated list of resources for activation engineering ☆102 · Updated 3 months ago
- ☆153 · Updated last year
- Official code for "Divide and Translate: Compositional First-Order Logic Translation and Verification for Complex Logical Reasoning", ICL… ☆17 · Updated 4 months ago
- Accepted LLM papers at NeurIPS 2024 ☆37 · Updated 11 months ago
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages https://arxiv.org/abs/2310.19156 ☆37 · Updated last year
- Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misin… ☆103 · Updated 10 months ago
- [ACL'25 Main] SelfElicit: Your Language Model Secretly Knows Where is the Relevant Evidence! | Help your LLM make better use of context documents: a simple attention-based approach ☆23 · Updated 7 months ago
- [EMNLP 2024] "Revisiting Who's Harry Potter: Towards Targeted Unlearning from a Causal Intervention Perspective" ☆28 · Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆55 · Updated last year
- A toolkit to assess data privacy in LLMs (under development) ☆62 · Updated 8 months ago
- ☆28 · Updated last year
- A resource repository for machine unlearning in large language models ☆484 · Updated 2 months ago
- ☆24 · Updated last year
- TrustAgent: Towards Safe and Trustworthy LLM-based Agents ☆52 · Updated 7 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆25 · Updated 10 months ago
- ☆174 · Updated last year
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆132 · Updated last year
- Can Knowledge Editing Really Correct Hallucinations? (ICLR 2025) ☆25 · Updated last month
- ☆16 · Updated last year
- ☆20 · Updated last year