yueliu1999 / Awesome-Efficient-Inference-for-LRMs
Awesome-Efficient-Inference-for-LRMs is a curated collection of state-of-the-art, token-efficient inference methods for Large Reasoning Models (LRMs). It contains papers, code, datasets, evaluations, and analyses. https://arxiv.org/pdf/2503.23077
☆79 · Updated last month
Alternatives and similar repositories for Awesome-Efficient-Inference-for-LRMs
Users interested in Awesome-Efficient-Inference-for-LRMs are comparing it to the libraries listed below.
- ☆252 · Updated last month
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆35 · Updated 2 weeks ago
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆71 · Updated 7 months ago
- ☆47 · Updated 3 weeks ago
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … ☆64 · Updated last month
- An implementation of SEAL: Safety-Enhanced Aligned LLM fine-tuning via bilevel data selection. ☆17 · Updated 5 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. NeurIPS 2024 ☆77 · Updated 10 months ago
- ☆52 · Updated last month
- Accepted LLM Papers in NeurIPS 2024 ☆37 · Updated 9 months ago
- AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models, ICLR 2025 (Outstanding Paper) ☆293 · Updated 3 weeks ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆75 · Updated last month
- ☆155 · Updated 2 months ago
- Awesome Low-Rank Adaptation ☆40 · Updated 2 weeks ago
- Official code and data for ACL 2024 Findings, "An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models" ☆20 · Updated 8 months ago
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ☆112 · Updated last week
- ☆102 · Updated 2 months ago
- Official implementation for "ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation" ☆19 · Updated this week
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect safety-related research on large reasoning models such as … ☆68 · Updated this week
- Toolkit for evaluating the trustworthiness of generative foundation models. ☆107 · Updated this week
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond ☆277 · Updated last month
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆66 · Updated 8 months ago
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆55 · Updated last month
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆142 · Updated 2 weeks ago
- PyTorch implementation of Tree Preference Optimization (TPO) (Accepted at ICLR'25) ☆20 · Updated 3 months ago
- ☆49 · Updated 8 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆88 · Updated 9 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆39 · Updated 4 months ago
- Chain of Thought (CoT) is so hot! So long! We need short reasoning processes! ☆68 · Updated 4 months ago
- A curated list of resources for activation engineering ☆99 · Updated 2 months ago
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆65 · Updated 5 months ago