decoding-comp-trust / comp-trust
Codebase for Decoding Compressed Trust.
☆25, updated last year
Alternatives and similar repositories for comp-trust
Users interested in comp-trust are comparing it to the repositories listed below.
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free (☆47, updated 8 months ago)
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning (☆98, updated last year)
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" (☆65, updated last year)
- [NeurIPS'24] LLM Safety Landscape (☆36, updated 2 months ago)
- The official repository of "Unnatural Language Are Not Bugs but Features for LLMs" (☆23, updated 7 months ago)
- [NeurIPS 2024] Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (☆65, updated 11 months ago)
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications (☆88, updated 8 months ago)
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" (☆57, updated 2 years ago)
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs (☆52, updated 7 months ago)
- [ICLR 2025 Oral] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (☆85, updated last year)
- Official repo for the EMNLP'24 paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning" (☆28, updated last year)
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue (☆38, updated 7 months ago)
- Code accompanying the paper "Massive Activations in Large Language Models" (☆187, updated last year)
- Restore safety in fine-tuned language models through task arithmetic (☆31, updated last year)
- Official code for "Guiding Language Model Math Reasoning with Planning Tokens" (☆18, updated last year)
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" (☆117, updated 10 months ago)
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" (☆61, updated 2 years ago)
- [EMNLP 2025 Main] ConceptVectors benchmark and code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" (☆39, updated 4 months ago)
- To mitigate position bias in LLMs, especially in long-context scenarios, we scale only one dimension of LLMs, reducing position bias and … (☆11, updated last year)
- [ICLR 2024] Paper showing properties of safety tuning and exaggerated safety (☆90, updated last year)
- [ICML 2024] In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (☆62, updated last year)