jianghoucheng / AlphaEdit
☆70 · Updated last month
Alternatives and similar repositories for AlphaEdit:
Users interested in AlphaEdit are comparing it to the repositories listed below.
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆104 · Updated 10 months ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆39 · Updated 3 months ago
- [ICML 2024] In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation ☆50 · Updated 10 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆69 · Updated 4 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆103 · Updated 5 months ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆32 · Updated 6 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆110 · Updated 3 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆52 · Updated 4 months ago
- [ICLR 2025 Spotlight] When Attention Sink Emerges in Language Models: An Empirical View ☆49 · Updated 4 months ago
- A Survey on the Honesty of Large Language Models ☆53 · Updated 2 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆48 · Updated 2 months ago
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆20 · Updated 2 months ago
- [ICLR 2025 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆35 · Updated last week
- [NeurIPS 2024] RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models ☆67 · Updated 4 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆130 · Updated 5 months ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆45 · Updated 5 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆53 · Updated 10 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆60 · Updated last year
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated 3 months ago
- [ICLR 2024] Unveiling the Pitfalls of Knowledge Editing for Large Language Models ☆22 · Updated 8 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆68 · Updated last month
- LoFiT: Localized Fine-tuning on LLM Representations ☆33 · Updated last month
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆95 · Updated 4 months ago
- [EMNLP 2024] Code for the paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆28 · Updated 3 months ago