TrustedLLM / UnKE
☆22, updated 9 months ago
Alternatives and similar repositories for UnKE
Users interested in UnKE are comparing it to the repositories listed below.
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" (☆86, updated 11 months ago)
- Project of ACL 2025 "UAlign: Leveraging Uncertainty Estimations for Factuality Alignment on Large Language Models" (☆14, updated 8 months ago)
- ☆56, updated 4 months ago
- ☆25, updated 7 months ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing (☆36, updated last year)
- [2025-TMLR] A Survey on the Honesty of Large Language Models (☆63, updated 11 months ago)
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. NeurIPS 2024 (☆86, updated last year)
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. (☆194, updated 2 weeks ago)
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization (☆93, updated last year)
- ☆56, updated 5 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style (☆70, updated 4 months ago)
- ☆10, updated 7 months ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models (☆59, updated last year)
- Model merging is a highly efficient approach for long-to-short reasoning. (☆91, updated last month)
- AnyEdit: Edit Any Knowledge Encoded in Language Models, ICML 2025 (☆33, updated 3 weeks ago)
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications (☆87, updated 8 months ago)
- Official repository for paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning (☆97, updated 9 months ago)
- This is the official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" (☆72, updated 7 months ago)
- Official code for our paper "Reasoning Models Hallucinate More: Factuality-Aware Reinforcement Learning for Large Reasoning Models" (☆19, updated last month)
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.…) (☆28, updated last year)
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free (☆45, updated 7 months ago)
- A Unified Framework for High-Performance and Extensible LLM Steering (☆131, updated last week)
- Source code for our paper: "ARIA: Training Language Agents with Intention-Driven Reward Aggregation". (☆25, updated 3 months ago)
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs (☆189, updated 5 months ago)
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning (☆87, updated 9 months ago)
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language …" (☆143, updated 6 months ago)
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning (☆48, updated 5 months ago)
- ☆67, updated 7 months ago
- The official GitHub repository of the paper "Recent advances in large language model benchmarks against data contamination: From static t…" (☆47, updated 2 months ago)
- [ICLR 2025] Language Imbalance Driven Rewarding for Multilingual Self-improving (☆24, updated 3 months ago)