Jikai0Wang / OPT-Tree
☆28 · Updated May 24, 2025
Alternatives and similar repositories for OPT-Tree
Users interested in OPT-Tree are comparing it to the repositories listed below.
- Continuous Pipelined Speculative Decoding · ☆16 · Updated Jan 4, 2026
- ☆14 · Updated Aug 19, 2024
- Multi-Candidate Speculative Decoding · ☆39 · Updated Apr 22, 2024
- ☆20 · Updated May 14, 2025
- ☆66 · Updated Nov 4, 2024
- ☆22 · Updated Mar 7, 2025
- [NeurIPS 2024] An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding · ☆21 · Updated Oct 10, 2024
- ☆32 · Updated Oct 13, 2025
- ☆64 · Updated Dec 3, 2024
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling · ☆49 · Updated Jul 15, 2025
- Minimal C implementation of speculative decoding based on llama2.c · ☆25 · Updated Jul 15, 2024
- Feedback-Driven Tool-Use Improvements in Large Language Models via Automated Build Environments · ☆48 · Updated Jan 8, 2026
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ☆142 · Updated Dec 4, 2024
- ☆14 · Updated Jan 24, 2025
- ☆46 · Updated Jun 11, 2025
- Codes for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) · ☆46 · Updated Dec 9, 2023
- MegaRAG: Multimodal Graph-based RAG · ☆32 · Updated Sep 16, 2025
- The implementation of RAGSynth: Synthetic Data for Robust and Faithful RAG Component Optimization · ☆21 · Updated May 26, 2025
- Official implementation of Self-Taught Agentic Long Context Understanding (ACL 2025) · ☆12 · Updated Sep 22, 2025
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation · ☆32 · Updated Nov 16, 2024
- The official implementation of the paper SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction · ☆52 · Updated Oct 18, 2024
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection · ☆55 · Updated Oct 29, 2024
- The codebase for pre-training, compressing, extending, and distilling LLMs with Megatron-LM · ☆12 · Updated Mar 11, 2024
- Repository for the COLM 2025 paper SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths · ☆15 · Updated Jul 10, 2025
- Codebase for Instruction Following without Instruction Tuning · ☆36 · Updated Sep 24, 2024
- ☆15 · Updated Apr 11, 2024
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism · ☆30 · Updated Jul 17, 2024
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification · ☆73 · Updated Jul 14, 2025
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; code base comes from open-instruct and LA… · ☆30 · Updated Nov 24, 2024
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training · ☆220 · Updated May 31, 2025
- ☆21 · Updated Jul 18, 2024
- ☆32 · Updated Aug 26, 2025
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation · ☆34 · Updated May 28, 2025
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning · ☆36 · Updated Nov 17, 2024
- Official Code Repository for [AutoScale📈: Scale-Aware Data Mixing for Pre-Training LLMs], published as a conference paper at COLM 2025… · ☆13 · Updated Aug 8, 2025
- OneEdit: A Neural-Symbolic Collaboratively Knowledge Editing System · ☆19 · Updated Oct 14, 2024
- A comprehensive and efficient long-context model evaluation framework · ☆30 · Updated this week
- Evaluating the Factuality of Large Language Models using Large-Scale Knowledge Graphs · ☆34 · Updated Sep 3, 2024
- This repository contains the code for the paper SirLLM: Streaming Infinite Retentive LLM · ☆60 · Updated May 28, 2024