sail-sg / Cheating-LLM-BenchmarksLinks
[ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral)
☆84Updated last year
Alternatives and similar repositories for Cheating-LLM-Benchmarks
Users that are interested in Cheating-LLM-Benchmarks are comparing it to the libraries listed below
Sorting:
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning☆99Updated last year
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024)☆65Updated 9 months ago
- ☆41Updated last year
- Codebase for decoding compressed trust.☆24Updated last year
- ☆58Updated 2 years ago
- Code for safety test in "Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates"☆20Updated last month
- ☆51Updated last year
- NeurIPS'24 - LLM Safety Landscape☆30Updated last week
- Does Refusal Training in LLMs Generalize to the Past Tense? [ICLR 2025]☆75Updated 9 months ago
- Code for "Reasoning to Learn from Latent Thoughts"☆121Updated 7 months ago
- ☆37Updated 10 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024)☆40Updated 11 months ago
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs"☆62Updated 4 months ago
- ☆32Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models"☆116Updated 8 months ago
- The official repository of 'Unnatural Language Are Not Bugs but Features for LLMs'☆23Updated 5 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives☆69Updated last year
- ☆33Updated 9 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods☆136Updated 4 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers"☆117Updated last year
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen,…☆52Updated 10 months ago
- Improving Alignment and Robustness with Circuit Breakers☆238Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs☆92Updated 11 months ago
- ☆66Updated last year
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks☆32Updated last year
- ☆25Updated 7 months ago
- An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025)☆33Updated 8 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models"☆60Updated last year
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models☆87Updated 5 months ago
- Code accompanying the paper "Massive Activations in Large Language Models"☆184Updated last year