yxwan123 / BiasAsker
☆38 · Updated 6 months ago
Alternatives and similar repositories for BiasAsker
Users who are interested in BiasAsker are comparing it to the repositories listed below.
- MTTM: Metamorphic Testing for Textual Content Moderation Software ☆32 · Updated 2 years ago
- basically all the things I used for this article ☆24 · Updated 6 months ago
- Multilingual safety benchmark for Large Language Models ☆52 · Updated 10 months ago
- ☆30 · Updated 4 months ago
- ☆31 · Updated 4 months ago
- Benchmarking LLMs' Psychological Portrayal ☆121 · Updated 6 months ago
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆49 · Updated last year
- Benchmarking LLMs' Emotional Alignment with Humans ☆105 · Updated 5 months ago
- ☆50 · Updated last year
- Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in NMI ☆51 · Updated last year
- 【ACL 2024】 SALAD benchmark & MD-Judge ☆154 · Updated 4 months ago
- Code and Results of the Paper: On the Reliability of Psychological Scales on Large Language Models ☆30 · Updated 9 months ago
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆92 · Updated last month
- ☆92 · Updated 2 months ago
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state ☆61 · Updated last month
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 9 months ago
- Code for the paper "Defending against LLM Jailbreaking via Backtranslation" ☆29 · Updated 11 months ago
- Official repository for the ACL 2024 paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆137 · Updated 11 months ago
- ☆33 · Updated 9 months ago
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆104 · Updated 11 months ago
- Benchmarking LLMs' Gaming Ability in Multi-Agent Environments ☆83 · Updated 2 months ago
- Codes and datasets of the paper Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment ☆102 · Updated last year
- The latest papers on detection of LLM-generated text and code ☆275 · Updated 3 weeks ago
- ☆26 · Updated 9 months ago
- [FCS'24] LVLM Safety paper ☆18 · Updated 6 months ago
- Code and Results of the Paper: On the Resilience of Multi-Agent Systems with Malicious Agents ☆24 · Updated 5 months ago
- LLM hallucination paper list ☆319 · Updated last year
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆78 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆80 · Updated 2 months ago
- Mostly recording papers about models' trustworthy applications. Intending to include topics like model evaluation & analysis, security, c… ☆21 · Updated 2 years ago