[ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral)
☆84 · Oct 23, 2024 · Updated last year
Alternatives and similar repositories for Cheating-LLM-Benchmarks
Users interested in Cheating-LLM-Benchmarks are comparing it to the repositories listed below.
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) · ☆65 · Jan 11, 2025 · Updated last year
- [arXiv 2025] Denial-of-Service Poisoning Attacks on Large Language Models · ☆23 · Oct 22, 2024 · Updated last year
- Intriguing Properties of Data Attribution on Diffusion Models (ICLR 2024) · ☆37 · Jan 23, 2024 · Updated 2 years ago
- The official repository of "Unnatural Language Are Not Bugs but Features for LLMs" · ☆24 · May 20, 2025 · Updated 9 months ago
- ☆20 · Apr 16, 2025 · Updated 10 months ago
- [ICLR 2025] A Closer Look at Machine Unlearning for Large Language Models · ☆45 · Dec 4, 2024 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards · ☆47 · Apr 15, 2025 · Updated 10 months ago
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" · ☆22 · Apr 22, 2025 · Updated 10 months ago
- 🔱 Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs · ☆71 · Mar 21, 2025 · Updated 11 months ago
- ☆43 · May 23, 2023 · Updated 2 years ago
- ☆37 · Dec 19, 2024 · Updated last year
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" · ☆66 · Apr 24, 2024 · Updated last year
- The official implementation of the paper "Does Federated Learning Really Need Backpropagation?" · ☆23 · Feb 9, 2023 · Updated 3 years ago
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast · ☆118 · Mar 26, 2024 · Updated last year
- Forcing Diffuse Distributions out of Language Models · ☆18 · Sep 10, 2024 · Updated last year
- Code for the paper "Finetuning Text-to-Image Diffusion Models for Fairness" · ☆45 · Apr 26, 2024 · Updated last year
- ☆17 · Feb 4, 2025 · Updated last year
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition · ☆90 · May 19, 2024 · Updated last year
- A Self-Consistent Robust Error (ICML 2022) · ☆69 · Jun 25, 2023 · Updated 2 years ago
- The official repository for "SkyLadder: Better and Faster Pretraining via Context Window Scheduling" · ☆42 · Dec 29, 2025 · Updated 2 months ago
- Official code for "On Calibrating Diffusion Probabilistic Models" · ☆30 · Feb 22, 2023 · Updated 3 years ago
- Improved techniques for optimization-based jailbreaking on large language models (ICLR 2025) · ☆142 · Apr 7, 2025 · Updated 10 months ago
- AnchorAttention: Improved attention for long-context LLM training · ☆214 · Jan 15, 2025 · Updated last year
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" · ☆134 · Mar 21, 2025 · Updated 11 months ago
- [CCS 2024] Optimization-based Prompt Injection Attack to LLM-as-a-Judge · ☆39 · Sep 17, 2025 · Updated 5 months ago
- PAL: Proxy-Guided Black-Box Attack on Large Language Models · ☆57 · Aug 17, 2024 · Updated last year
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models · ☆60 · Apr 8, 2024 · Updated last year
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. · ☆633 · Jan 29, 2026 · Updated last month
- [NeurIPS 2023] Annual Conference on Neural Information Processing Systems · ☆227 · Dec 22, 2024 · Updated last year
- Graph Diffusion Policy Optimization · ☆42 · Mar 17, 2024 · Updated last year
- ☆196 · Nov 26, 2023 · Updated 2 years ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) · ☆89 · Sep 26, 2024 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) · ☆156 · Jul 8, 2025 · Updated 7 months ago
- ☆20 · Mar 14, 2022 · Updated 3 years ago
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" · ☆151 · Jul 19, 2024 · Updated last year
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" · ☆174 · Apr 23, 2025 · Updated 10 months ago
- [ICLR 2025] Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks · ☆377 · Jan 23, 2025 · Updated last year
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024 · ☆116 · Jun 13, 2024 · Updated last year
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization · ☆52 · Jul 15, 2025 · Updated 7 months ago