yxwan123 / BiasAsker
☆38 · Updated 7 months ago
Alternatives and similar repositories for BiasAsker
Users interested in BiasAsker are comparing it to the repositories listed below.
- MTTM: Metamorphic Testing for Textual Content Moderation Software ☆32 · Updated 2 years ago
- Multilingual safety benchmark for Large Language Models ☆52 · Updated 11 months ago
- basically all the things I used for this article ☆24 · Updated 7 months ago
- ☆30 · Updated 5 months ago
- ☆32 · Updated 5 months ago
- Benchmarking LLMs' Psychological Portrayal ☆121 · Updated 7 months ago
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆49 · Updated last year
- Benchmarking LLMs' Emotional Alignment with Humans ☆107 · Updated 6 months ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 10 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆156 · Updated 5 months ago
- ☆51 · Updated last year
- ☆34 · Updated 10 months ago
- Benchmarking LLMs' Gaming Ability in Multi-Agent Environments ☆85 · Updated 3 months ago
- ☆99 · Updated 3 months ago
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆58 · Updated 8 months ago
- ☆33 · Updated last month
- Code and results of the paper "On the Reliability of Psychological Scales on Large Language Models" ☆30 · Updated 10 months ago
- [FCS'24] LVLM Safety paper ☆18 · Updated 7 months ago
- ☆19 · Updated last year
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆79 · Updated last year
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆94 · Updated 2 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆77 · Updated 10 months ago
- Project of the ACL 2025 paper "UAlign: Leveraging Uncertainty Estimations for Factuality Alignment on Large Language Models" ☆13 · Updated 4 months ago
- ☆46 · Updated last year
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆97 · Updated 5 months ago
- ☆79 · Updated 7 months ago
- ☆24 · Updated 2 years ago
- A novel approach to improve the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆63 · Updated 2 months ago
- ☆74 · Updated last year
- Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in NMI. ☆53 · Updated last year