TrustedLLM / UnKE
☆23 · Updated 10 months ago
Alternatives and similar repositories for UnKE
Users interested in UnKE are comparing it to the repositories listed below.
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆36 · Updated last year
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆93 · Updated last year
- ☆57 · Updated 7 months ago
- Resources and paper list for 'Scaling Environments for Agents'. This repository accompanies our survey on how environments contribute to … ☆53 · Updated 2 weeks ago
- ☆24 · Updated 9 months ago
- This is the official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆72 · Updated 8 months ago
- ☆60 · Updated 5 months ago
- Official repository for the paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆96 · Updated 10 months ago
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆66 · Updated 2 months ago
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆161 · Updated 7 months ago
- [ICLR 2025] Language Imbalance Driven Rewarding for Multilingual Self-improving ☆24 · Updated 4 months ago
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆63 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆94 · Updated last year
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. NeurIPS 2024 ☆87 · Updated last year
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆197 · Updated last month
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆88 · Updated 10 months ago
- [arXiv: 2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents ☆46 · Updated 6 months ago
- The official repository of the NeurIPS'25 paper "Ada-R1: From Long-Cot to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization" ☆21 · Updated 2 months ago
- [ACL 2025 Findings] Official implementation of the paper "Unveiling the Key Factors for Distilling Chain-of-Thought Reasoning" ☆19 · Updated 10 months ago
- ☆24 · Updated 4 months ago
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆146 · Updated 2 months ago
- [NeurIPS 2024] The official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs ☆134 · Updated 9 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆73 · Updated 5 months ago
- [ACL 2025 Best Paper] Language Models Resist Alignment ☆40 · Updated 7 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆96 · Updated 2 months ago
- Source code for our paper: "ARIA: Training Language Agents with Intention-Driven Reward Aggregation" ☆25 · Updated 5 months ago
- ☆51 · Updated last year
- Official repository of the paper "Context-DPO: Aligning Language Models for Context-Faithfulness" ☆18 · Updated 10 months ago
- A Sober Look at Language Model Reasoning ☆92 · Updated last month
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆50 · Updated 6 months ago