decoding-comp-trust / comp-trust
Codebase for decoding compressed trust.
☆24 · Updated last year
Alternatives and similar repositories for comp-trust
Users interested in comp-trust are comparing it to the libraries listed below.
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free (☆44, updated 6 months ago)
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning (☆99, updated last year)
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" (☆60, updated last year)
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) (☆65, updated 9 months ago)
- NeurIPS'24 - LLM Safety Landscape (☆30, updated 8 months ago)
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications (☆85, updated 6 months ago)
- [EMNLP 2025 Main] ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" (☆35, updated 2 months ago)
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue (☆37, updated 5 months ago)
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" (☆56, updated 2 years ago)
- [ACL 2025] SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities (☆20, updated 6 months ago)
- The official repository of "Unnatural Language Are Not Bugs but Features for LLMs" (☆23, updated 5 months ago)
- A Sober Look at Language Model Reasoning (☆85, updated 2 weeks ago)
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs (☆45, updated 5 months ago)
- [ICLR 2024] Showing properties of safety tuning and exaggerated safety (☆87, updated last year)
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" (☆157, updated 6 months ago)
- [ICLR 2025 Oral] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (☆84, updated last year)
- [ICML 2024] In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (☆61, updated last year)
- [ICML 2024] Representation Surgery for Multi-Task Model Merging (☆46, updated last year)
- Code accompanying the paper "Massive Activations in Large Language Models" (☆184, updated last year)
- [NeurIPS 2024] RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (☆83, updated last year)