yxwan123 / BiasAsker
☆35 · Updated 3 weeks ago
Alternatives and similar repositories for BiasAsker:
Users who are interested in BiasAsker are comparing it to the libraries listed below.
- ☆29 · Updated 3 weeks ago
- basically all the things I used for this article · ☆24 · Updated 3 weeks ago
- MTTM: Metamorphic Testing for Textual Content Moderation Software · ☆32 · Updated last year
- Multilingual safety benchmark for Large Language Models · ☆46 · Updated 4 months ago
- ☆28 · Updated 3 weeks ago
- Benchmarking LLMs' Gaming Ability in Multi-Agent Environments · ☆65 · Updated this week
- Benchmarking LLMs' Psychological Portrayal · ☆98 · Updated 3 weeks ago
- Benchmarking LLMs' Emotional Alignment with Humans · ☆90 · Updated 3 weeks ago
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. · ☆44 · Updated last year
- Code and results of the paper "Revisiting the Reliability of Psychological Scales on Large Language Models" · ☆29 · Updated 4 months ago
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning · ☆21 · Updated 10 months ago
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. · ☆58 · Updated this week
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" · ☆83 · Updated 4 months ago
- [FCS'24] LVLM Safety paper · ☆17 · Updated 3 weeks ago
- ☆28 · Updated 3 months ago
- Code and data for our paper "On the Resilience of Multi-Agent Systems with Malicious Agents" · ☆15 · Updated 2 weeks ago
- Towards safe LLMs with our simple yet highly effective Intention Analysis Prompting · ☆14 · Updated 10 months ago
- Official code and data for the ACL 2024 Findings paper "An Empirical Study on Parameter-Efficient Fine-Tuning for Multimodal Large Language Models" · ☆13 · Updated 2 months ago
- ☆24 · Updated last year
- The reinforcement learning code for the SPA-VL dataset · ☆27 · Updated 7 months ago
- ☆37 · Updated 7 months ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks · ☆22 · Updated 6 months ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" · ☆19 · Updated 3 months ago
- ☆38 · Updated last month
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety. · ☆76 · Updated 8 months ago
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" · ☆62 · Updated 11 months ago
- ☆20 · Updated 2 months ago
- ☆57 · Updated last month
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" · ☆65 · Updated 6 months ago