poloclub / llm-landscape
NeurIPS'24 - LLM Safety Landscape
☆22 · Updated 2 months ago
Alternatives and similar repositories for llm-landscape:
Users interested in llm-landscape are comparing it to the repositories listed below.
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆35 · Updated 5 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆29 · Updated 3 months ago
- ☆31 · Updated last year
- ☆42 · Updated 2 months ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆60 · Updated 3 months ago
- An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) ☆26 · Updated 2 months ago
- ☆54 · Updated 2 years ago
- ☆14 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆84 · Updated 5 months ago
- ☆42 · Updated last year
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆35 · Updated 2 months ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆77 · Updated 6 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆95 · Updated 2 months ago
- [ICLR 2024] Paper showing properties of safety tuning and exaggerated safety ☆80 · Updated 11 months ago
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆37 · Updated 2 months ago
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated 5 months ago
- Codebase for decoding compressed trust ☆23 · Updated 11 months ago
- The official repository of the paper "On the Exploitability of Instruction Tuning" ☆62 · Updated last year
- Exploration of automated dataset selection approaches at large scales ☆39 · Updated last month
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆68 · Updated last year
- Improving Your Model Ranking on Chatbot Arena by Vote Rigging ☆20 · Updated 2 months ago
- Weak-to-Strong Jailbreaking on Large Language Models ☆73 · Updated last year
- ☆14 · Updated last year
- ☆47 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated 9 months ago
- ☆33 · Updated 4 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆70 · Updated last month
- Code for safety test in "Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates" ☆18 · Updated last year
- ☆21 · Updated 6 months ago