TrustedLLM / UnKE
☆22 · Updated 10 months ago
Alternatives and similar repositories for UnKE
Users interested in UnKE are comparing it to the repositories listed below.
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆90 · Updated last year
- ☆57 · Updated 5 months ago
- ☆55 · Updated 6 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆235 · Updated last week
- Resources and paper list for 'Scaling Environments for Agents'. This repository accompanies our survey on how environments contribute to … ☆45 · Updated this week
- ☆24 · Updated 8 months ago
- ☆10 · Updated 7 months ago
- The official GitHub repository of the paper "Recent advances in large language model benchmarks against data contamination: From static t… ☆47 · Updated 3 months ago
- ☆42 · Updated 2 weeks ago
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆67 · Updated last month
- Project of ACL 2025 "UAlign: Leveraging Uncertainty Estimations for Factuality Alignment on Large Language Models" ☆14 · Updated 8 months ago
- [ICLR 2025] Language Imbalance Driven Rewarding for Multilingual Self-improving ☆24 · Updated 3 months ago
- [ACL' 25] The official code repository for PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models. ☆85 · Updated 10 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. NeurIPS 2024 ☆86 · Updated last year
- ☆24 · Updated 4 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆72 · Updated 5 months ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆36 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆93 · Updated last year
- ☆51 · Updated last year
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆49 · Updated 6 months ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆59 · Updated last year
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆196 · Updated 2 weeks ago
- Public code repo for COLING 2025 paper "Aligning LLMs with Individual Preferences via Interaction" ☆40 · Updated 8 months ago
- Official repository of paper "Context-DPO: Aligning Language Models for Context-Faithfulness" ☆18 · Updated 10 months ago
- ☆70 · Updated 8 months ago
- The implementation of the paper "ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability" ☆52 · Updated 6 months ago
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆156 · Updated 7 months ago
- This is the repository of DEER, a Dynamic Early Exit in Reasoning method for Large Reasoning Language Models. ☆177 · Updated 5 months ago
- Official repository for paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆98 · Updated 10 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆88 · Updated 10 months ago