Jihuai-wpy / InferAligner
☆34 · Updated 10 months ago
Alternatives and similar repositories for InferAligner
Users interested in InferAligner are comparing it to the repositories listed below.
- The reinforcement learning code for the SPA-VL dataset ☆36 · Updated last year
- ☆51 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆82 · Updated 4 months ago
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆94 · Updated 2 months ago
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… (see the logit-lens sketch after this list) ☆95 · Updated 5 months ago
- Implementation code for the ACL 2024 paper "Advancing Parameter Efficiency in Fine-tuning via Representation Editing" ☆14 · Updated last year
- ☆49 · Updated 3 weeks ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆77 · Updated 10 months ago
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆71 · Updated 7 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆156 · Updated 5 months ago
- ☆15 · Updated 2 months ago
- Accepted by ECCV 2024 ☆147 · Updated 9 months ago
- This repository contains a regularly updated paper list on LLM reasoning in latent space. ☆142 · Updated 2 weeks ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆143 · Updated 3 months ago
- [TMLR 2025] A Survey on the Honesty of Large Language Models ☆58 · Updated 8 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆77 · Updated 2 months ago
- Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆45 · Updated 8 months ago
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆68 · Updated last week
- [ICLR 2025] Released code for the paper "Spurious Forgetting in Continual Learning of Language Models" ☆50 · Updated 3 months ago
- ☆36 · Updated last month
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆81 · Updated 5 months ago
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ☆112 · Updated last week
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 10 months ago
- Official repository for "Safety in Large Reasoning Models: A Survey" - exploring safety risks, attacks, and defenses for Large Reasoning … ☆64 · Updated 2 months ago
- ☆60 · Updated last year
- [ECCV 2024] Official PyTorch implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆81 · Updated last year
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆65 · Updated 5 months ago
- Multilingual safety benchmark for Large Language Models ☆52 · Updated 11 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆29 · Updated 5 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆96 · Updated last year
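For context on the Logit Lens toolkit listed above: the logit-lens technique projects each layer's intermediate hidden state through the model's final norm and unembedding matrix to read off a per-layer next-token "guess". Below is a minimal sketch, assuming a Llama-style Hugging Face checkpoint that exposes its final RMSNorm as `model.model.norm` and its unembedding as `model.lm_head`; the model name and prompt are placeholders, and this is not the linked toolkit's own API.

```python
# Minimal logit-lens sketch (illustrative; not the linked toolkit's API).
# Assumes a Llama-style Hugging Face model exposing `model.model.norm`
# (final RMSNorm) and `model.lm_head` (unembedding).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.hidden_states holds the embedding output plus one tensor per layer.
# Logit lens: push each intermediate state through the final norm and the
# unembedding to see what the model would predict at that depth.
for layer_idx, hidden in enumerate(out.hidden_states):
    logits = model.lm_head(model.model.norm(hidden[:, -1]))  # last position
    print(f"layer {layer_idx:2d}: {tok.decode(logits.argmax(-1))!r}")
```

Typically the per-layer guesses converge toward the model's final prediction in later layers, which is the kind of depth-wise view such interpretability toolkits visualize.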