decoding-comp-trust / comp-trust
Codebase for decoding compressed trust.
☆24 · Updated last year
Alternatives and similar repositories for comp-trust
Users interested in comp-trust are comparing it to the repositories listed below.
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆94 · Updated last year
- ☆23 · Updated 4 months ago
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆39 · Updated last month
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 9 months ago
- ☆38 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆80 · Updated 3 months ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆79 · Updated 8 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆30 · Updated 3 months ago
- ☆15 · Updated 10 months ago
- [ICML 2024] Representation Surgery for Multi-Task Model Merging ☆45 · Updated 9 months ago
- ☆44 · Updated 2 years ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆96 · Updated last week
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated last month
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆62 · Updated 6 months ago
- ☆44 · Updated 5 months ago
- ☆60 · Updated last year
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆55 · Updated last year
- ☆35 · Updated 6 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆101 · Updated 4 months ago
- ConceptVectors benchmark and code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆36 · Updated 5 months ago
- ☆55 · Updated 2 years ago
- Official code for "Guiding Language Model Math Reasoning with Planning Tokens" ☆14 · Updated last year
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆54 · Updated last year
- A Sober Look at Language Model Reasoning ☆75 · Updated last month
- ☆41 · Updated 9 months ago
- ☆27 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- Test-time training on nearest neighbors for large language models ☆44 · Updated last year
- The official repository of "Unnatural Languages Are Not Bugs but Features for LLMs" ☆20 · Updated last month
- LLM Unlearning ☆171 · Updated last year