hanshen95 / SEAL
An implementation of SEAL: Safety-Enhanced Aligned LLM fine-tuning via bilevel data selection.
☆17 · Updated 4 months ago
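At a high level, SEAL frames safety-aware fine-tuning as bilevel data selection: an upper level learns per-example selection weights against a clean safety-reference objective, while a lower level fine-tunes the model on the reweighted training data. The sketch below illustrates that general bilevel pattern on a toy logistic-regression problem using the common one-step lookahead approximation. It is not the official SEAL implementation, and all names, data, and hyperparameters in it are hypothetical.

```python
import torch

torch.manual_seed(0)

# Toy stand-in for fine-tuning data: 2-D logistic regression, where a few
# points are "harmful" (mislabeled) and should be down-weighted.
X_train = torch.randn(64, 2)
y_train = (X_train[:, 0] > 0).float()
y_train[:8] = 1.0 - y_train[:8]              # harmful subset

X_safe = torch.randn(16, 2)                  # small clean safety-reference set
y_safe = (X_safe[:, 0] > 0).float()

theta = torch.zeros(2, requires_grad=True)   # model parameters (lower level)
w = torch.zeros(64, requires_grad=True)      # per-example weight logits (upper level)
lr_inner, lr_outer = 0.5, 0.5

def loss(params, X, y, weights=None):
    per_ex = torch.nn.functional.binary_cross_entropy_with_logits(
        X @ params, y, reduction="none")
    if weights is None:
        return per_ex.mean()
    return (weights * per_ex).sum() / weights.sum()

for step in range(200):
    # Lower level: a differentiable one-step lookahead update of theta
    # under the current (soft) selection weights.
    weights = torch.sigmoid(w)
    g, = torch.autograd.grad(
        loss(theta, X_train, y_train, weights), theta, create_graph=True)
    theta_lookahead = theta - lr_inner * g

    # Upper level: move the weights so the looked-ahead model does better
    # on the safe reference set (the gradient flows through the inner step).
    gw, = torch.autograd.grad(loss(theta_lookahead, X_safe, y_safe), w)
    with torch.no_grad():
        w -= lr_outer * gw

    # Commit the real inner update using the refreshed, detached weights.
    g2, = torch.autograd.grad(
        loss(theta, X_train, y_train, torch.sigmoid(w).detach()), theta)
    with torch.no_grad():
        theta -= lr_inner * g2

# Examples the upper level learned to down-weight (ideally the harmful ones).
print("lowest-weighted examples:", torch.sigmoid(w).argsort()[:8].tolist())
```

In SEAL itself the two levels operate on an LLM fine-tuning loss rather than logistic regression, and the bilevel problem is solved with a more careful optimizer; the one-step lookahead above is only the simplest standard approximation of that structure.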
Alternatives and similar repositories for SEAL
Users interested in SEAL are comparing it to the repositories listed below.
- This repo collects work on the safety topic, including attacks, defenses, and studies related to reasoning and RL ☆23 · Updated 2 weeks ago
- A survey on harmful fine-tuning attacks for large language models ☆193 · Updated 2 weeks ago
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆44 · Updated 8 months ago
- Official repository for "Safety in Large Reasoning Models: A Survey" - exploring safety risks, attacks, and defenses for Large Reasoning … ☆60 · Updated last month
- ☆31 · Updated last month
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba…" ☆29 · Updated 3 months ago
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆65 · Updated this week
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆68 · Updated 6 months ago
- ☆51 · Updated last month
- This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) ☆23 · Updated 10 months ago
- Awesome-Efficient-Inference-for-LRMs is a collection of state-of-the-art, novel, exciting, token-efficient methods for Large Reasoning Mo… ☆76 · Updated last month
- Accepted by ECCV 2024 ☆142 · Updated 9 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆142 · Updated 2 months ago
- ☆50 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆80 · Updated 3 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆77 · Updated 9 months ago
- ☆42 · Updated this week
- Official code for "SEAL: Steerable Reasoning Calibration of Large Language Models for Free" ☆30 · Updated 3 months ago
- ☆33 · Updated 9 months ago
- Toolkit for evaluating the trustworthiness of generative foundation models. ☆106 · Updated 3 weeks ago
- ☆21 · Updated 4 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… ☆235 · Updated last month
- ☆22 · Updated 4 months ago
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆52 · Updated 2 weeks ago
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆57 · Updated 4 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆134 · Updated this week
- ☆26 · Updated 10 months ago
- ☆44 · Updated last year
- [ACL 2025] Data and code for the paper "VLSBench: Unveiling Visual Leakage in Multimodal Safety" ☆48 · Updated 2 months ago
- ☆47 · Updated 7 months ago