xinzhel / LLM-Search
Survey on LLM Inference via Search (TMLR 2025)
☆14 · Updated 7 months ago
Alternatives and similar repositories for LLM-Search
Users interested in LLM-Search are comparing it to the repositories listed below.
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆86 · Updated 6 months ago
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆60 · Updated 2 months ago
- Repository for "What, How, Where, and How Well? A Survey on Test-Time Scaling in Large Language Models" ☆82 · Updated this week
- Code repo for "Harnessing Negative Signals: Reinforcement Distillation from Teacher Data for LLM Reasoning" ☆30 · Updated 4 months ago
- [ICLR 2025 Spotlight] Weak-to-Strong Preference Optimization: Stealing Reward from Weak Aligned Model ☆16 · Updated 9 months ago
- [AI4MATH@ICML2025] Do Not Let Low-Probability Tokens Over-Dominate in RL for LLMs ☆41 · Updated 7 months ago
- ☆55 · Updated 2 years ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆88 · Updated 10 months ago
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" ☆44 · Updated 4 months ago
- ☆34 · Updated 7 months ago
- Accepted LLM Papers in NeurIPS 2024 ☆37 · Updated last year
- ☆189 · Updated 7 months ago
- A Sober Look at Language Model Reasoning ☆89 · Updated last month
- Official implementation of the paper "Think-at-Hard: Selective Latent Iterations to Improve Reasoning Language Models" ☆52 · Updated last week
- Official PyTorch code for the ICLR 2025 paper "Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Models" ☆24 · Updated 9 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆150 · Updated 5 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆145 · Updated 5 months ago
- Must-read papers and blogs about parametric knowledge mechanisms in LLMs ☆34 · Updated 7 months ago
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆285 · Updated last month
- [NeurIPS 2024] The official implementation of "ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification" ☆31 · Updated 8 months ago
- Code for the paper "VTool-R1: VLMs Learn to Think with Images via Reinforcement Learning on Multimodal Tool Use" ☆144 · Updated 4 months ago
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" ☆22 · Updated 7 months ago
- [ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark" ☆120 · Updated 5 months ago
- One-shot Entropy Minimization ☆187 · Updated 6 months ago
- PyTorch implementation of our ICML 2024 paper "CaM: Cache Merging for Memory-Efficient LLMs Inference" ☆47 · Updated last year
- ☆70 · Updated 5 months ago
- The official repository of the NeurIPS '25 paper "Ada-R1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization" ☆20 · Updated last month
- A regularly updated paper list on LLM reasoning in latent space ☆235 · Updated last week
- ☆76 · Updated last year
- A lightweight inference engine built for block diffusion models ☆37 · Updated last week