TrustedLLM / UnKE
☆19 · Updated 4 months ago
Alternatives and similar repositories for UnKE
Users that are interested in UnKE are comparing it to the libraries listed below
- code for EMNLP 2024 paper: Neuron-Level Knowledge Attribution in Large Language Models · ☆35 · Updated 7 months ago
- Implementation code for ACL 2024: Advancing Parameter Efficiency in Fine-tuning via Representation Editing · ☆14 · Updated last year
- Model merging is a highly efficient approach for long-to-short reasoning. · ☆65 · Updated 3 weeks ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free · ☆27 · Updated 2 months ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing · ☆36 · Updated 10 months ago
- Code for ACL 2024 accepted paper titled "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language … · ☆35 · Updated 5 months ago
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" · ☆64 · Updated 6 months ago
- A Survey on the Honesty of Large Language Models · ☆57 · Updated 6 months ago
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… · ☆25 · Updated 9 months ago
- ☆41 · Updated 8 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. NeurIPS 2024 · ☆76 · Updated 8 months ago
- ☆74 · Updated last year
- [ICLR 2025] Language Imbalance Driven Rewarding for Multilingual Self-improving · ☆19 · Updated 7 months ago
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… · ☆89 · Updated 4 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) · ☆112 · Updated last year
- ☆46 · Updated 7 months ago
- ☆65 · Updated 2 months ago
- BeHonest: Benchmarking Honesty in Large Language Models · ☆34 · Updated 10 months ago
- ☆17 · Updated last year
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style · ☆48 · Updated last month
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) · ☆59 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications · ☆79 · Updated 2 months ago
- CMD: a framework for Context-aware Model self-Detoxification (EMNLP 2024 Long Paper) · ☆16 · Updated 4 months ago
- ☆22 · Updated 3 months ago
- ☆44 · Updated 3 months ago
- Chain of Thoughts (CoT) is so hot! so long! We need short reasoning process! · ☆54 · Updated 2 months ago
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) · ☆44 · Updated 7 months ago
- The reinforcement learning codes for dataset SPA-VL · ☆34 · Updated last year
- FeatureAlignment = Alignment + Mechanistic Interpretability · ☆28 · Updated 3 months ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025 · ☆22 · Updated 4 months ago