apple / ml-recurrent-drafter
☆219 · Updated 11 months ago
Alternatives and similar repositories for ml-recurrent-drafter
Users interested in ml-recurrent-drafter are comparing it to the repositories listed below.
- Scalable and robust tree-based speculative decoding algorithm ☆365 · Updated 10 months ago
- Ship correct and fast LLM kernels to PyTorch ☆126 · Updated this week
- Fast low-bit matmul kernels in Triton ☆410 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆275 · Updated last month
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆89 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆220 · Updated this week
- Load compute kernels from the Hub ☆352 · Updated last week
- Applied AI experiments and examples for PyTorch ☆311 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 2 weeks ago
- KV cache compression for high-throughput LLM inference ☆148 · Updated 10 months ago
- Efficient LLM Inference over Long Sequences ☆394 · Updated 5 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆279 · Updated 2 years ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components ☆216 · Updated last week
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆146 · Updated last year
- Ring-attention experiments ☆160 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 10 months ago
- Triton-based implementation of Sparse Mixture of Experts ☆257 · Updated 2 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- TPU inference for vLLM, with unified JAX and PyTorch support ☆199 · Updated this week
- Official implementation for Training LLMs with MXFP4 ☆115 · Updated 7 months ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆135 · Updated last year
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆249 · Updated last year
- ☆610 · Updated last week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆261 · Updated this week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆349 · Updated 7 months ago
- 👷 Build compute kernels ☆195 · Updated this week
- PyTorch Distributed native training library for LLMs/VLMs with OOTB Hugging Face support ☆209 · Updated this week
- Reverse Engineering Gemma 3n: Google's New Edge-Optimized Language Model ☆254 · Updated 6 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆354 · Updated this week
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year