AI45Lab / REEF
The official repository of the paper "REEF: Representation Encoding Fingerprints for Large Language Models," which aims to protect the intellectual property (IP) of open-source LLMs.
☆58 · Updated 6 months ago
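For context on what the fingerprinting involves: REEF-style approaches compare the internal representations that a victim model and a suspect model produce on the same inputs, and centered kernel alignment (CKA) is a standard similarity measure for such comparisons. The sketch below is a minimal illustration under those assumptions, not code from this repository; the `linear_cka` helper and the toy activation matrices are hypothetical.

```python
# Hypothetical sketch (NOT this repository's code): compare hidden states
# of a suspect model against a victim model on the same inputs via linear CKA.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear centered kernel alignment between two feature matrices.

    X, Y: (n_samples, hidden_dim) activations from the same inputs;
    the hidden dimensions of the two models may differ.
    """
    X = X - X.mean(axis=0, keepdims=True)  # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (norm_x * norm_y))

# Toy usage with random activations standing in for real hidden states.
rng = np.random.default_rng(0)
victim = rng.standard_normal((256, 768))
suspect = victim + victim @ rng.standard_normal((768, 768)) * 0.1  # correlated
print(f"CKA(victim, suspect) = {linear_cka(victim, suspect):.3f}")
print(f"CKA(victim, random)  = {linear_cka(victim, rng.standard_normal((256, 768))):.3f}")
```

A similarity near 1 on shared inputs would support the claim that the suspect model was derived from the victim, while unrelated models typically score much lower.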
Alternatives and similar repositories for REEF
Users who are interested in REEF are comparing it to the repositories listed below.
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆65 · Updated 5 months ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆29 · Updated last year
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 10 months ago
- ☆28 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆96 · Updated last year
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆143 · Updated 3 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆81 · Updated 5 months ago
- ☆47 · Updated 5 months ago
- ☆24 · Updated 5 months ago
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆111 · Updated last year
- [ICLR 2025] Official codebase for the paper "Multimodal Situational Safety" ☆21 · Updated last month
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆103 · Updated 3 weeks ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 10 months ago
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆98 · Updated 5 months ago
- ☆24 · Updated 5 months ago
- ☆34 · Updated 10 months ago
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆81 · Updated last year
- ☆33 · Updated 9 months ago
- Awesome Large Reasoning Model (LRM) Safety. This repository collects safety-related research on large reasoning models such as … ☆68 · Updated this week
- [ACL 2024] SALAD benchmark & MD-Judge ☆156 · Updated 4 months ago
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆63 · Updated 2 months ago
- ☆27 · Updated last year
- [TMLR 2025] A Survey on the Honesty of Large Language Models ☆58 · Updated 7 months ago
- ☆28 · Updated 4 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆81 · Updated 4 months ago
- [ICML 2024] In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation ☆61 · Updated last year
- ☆22 · Updated 2 months ago
- ☆155 · Updated 2 months ago
- [NeurIPS 2024] HonestLLM: Toward an Honest and Helpful Large Language Model ☆26 · Updated last month
- ☆15 · Updated 2 months ago