LavinWong / Fairness-in-Large-Language-Models
Fairness in LLMs resources
☆22 · Updated 3 months ago
Alternatives and similar repositories for Fairness-in-Large-Language-Models — users interested in this repository are also comparing the libraries listed below:
- Repository for the survey of bias and fairness in IR with LLMs. ☆52 · Updated 3 weeks ago
- Code for the paper "Are Large Language Models Post Hoc Explainers?" ☆31 · Updated 9 months ago
- ☆131 · Updated last year
- [CIKM 2023] Towards Fair Graph Neural Networks via Graph Counterfactual. ☆13 · Updated last month
- ☆54 · Updated last month
- TrustAgent: Towards Safe and Trustworthy LLM-based Agents. ☆40 · Updated 2 months ago
- Dataset and code for the ICLR 2024 paper "Can LLM-Generated Misinformation Be Detected?" ☆63 · Updated 5 months ago
- Using Explanations as a Tool for Advanced LLMs. ☆60 · Updated 7 months ago
- ☆18 · Updated 3 years ago
- ☆38 · Updated last year
- ☆19 · Updated last month
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models. ☆47 · Updated 7 months ago
- ☆26 · Updated this week
- Awesome SAE papers. ☆26 · Updated 2 months ago
- Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misin…" ☆99 · Updated 5 months ago
- Repository for causality-and-NLP works. ☆10 · Updated 2 months ago
- [EMNLP 2024] Official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey". ☆112 · Updated 7 months ago
- ☆35 · Updated 6 months ago
- Official PyTorch implementation of "Certifiably Robust Graph Contrastive Learning" (NeurIPS 2023). ☆10 · Updated last year
- Official implementation for the KDD 2022 paper "Learning Fair Representation via Distributional Contrastive Disentanglement". ☆23 · Updated 2 years ago
- ☆16 · Updated last year
- ☆26 · Updated last year
- ☆18 · Updated last year
- [ICML 2024] "LLaGA: Large Language and Graph Assistant", Runjin Chen, Tong Zhao, Ajay Jaiswal, Neil Shah, Zhangyang Wang. ☆109 · Updated 7 months ago
- ☆16 · Updated last year
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety. ☆80 · Updated 11 months ago
- ☆49 · Updated last year
- A curated list of Awesome-LLM-Ensemble papers for the survey "Harnessing Multiple Large Language Models: A Survey on LLM Ensemble". ☆37 · Updated this week
- SelfElicit: Your Language Model Secretly Knows Where is the Relevant Evidence! ☆9 · Updated 2 months ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models". ☆30 · Updated 5 months ago