sail-sg / VocabularyParallelism
Vocabulary Parallelism
☆17 · Updated last month
Alternatives and similar repositories for VocabularyParallelism:
Users interested in VocabularyParallelism are comparing it to the libraries listed below.
- Odysseus: Playground of LLM Sequence Parallelism ☆68 · Updated 10 months ago
- Sequence-level 1F1B schedule for LLMs. ☆17 · Updated 10 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆101 · Updated 3 weeks ago
- Code for the paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆82 · Updated 2 weeks ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆113 · Updated 4 months ago
- The official implementation of the paper: SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction. ☆45 · Updated 6 months ago
- 16-fold memory access reduction with nearly no loss ☆89 · Updated 3 weeks ago
- ☆68 · Updated 4 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆72 · Updated 7 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆81 · Updated 5 months ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆33 · Updated 2 weeks ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆75 · Updated this week
- ☆82 · Updated 3 years ago
- Quantized Attention on GPU ☆45 · Updated 4 months ago
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exitin… ☆51 · Updated 9 months ago
- Transformers components but in Triton ☆32 · Updated last month
- The Official Implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference ☆71 · Updated 2 months ago
- ☆39 · Updated last week
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆207 · Updated 7 months ago
- ☆57 · Updated last week
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆46 · Updated 4 months ago
- ☆20 · Updated this week
- Efficient Triton implementation of Native Sparse Attention. ☆135 · Updated last week
- Estimate MFU for DeepSeekV3 ☆22 · Updated 3 months ago
- An innovative method expediting LLMs via streamlined semi-autoregressive generation and draft verification. ☆25 · Updated last year
- ☆21 · Updated 2 weeks ago
- ☆122 · Updated 2 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆56 · Updated last year
- Distributed IO-aware Attention algorithm ☆19 · Updated 7 months ago