feifeibear / Odysseus-Transformer
Odysseus: Playground of LLM Sequence Parallelism
☆53 · Updated 4 months ago
Related projects
Alternatives and complementary repositories for Odysseus-Transformer
- PyTorch bindings for CUTLASS grouped GEMM. ☆51 · Updated last week
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆56 · Updated 10 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆51 · Updated 2 months ago
- GPTQ inference TVM kernel ☆35 · Updated 6 months ago
- Official implementation of ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking". ☆40 · Updated 3 months ago
- Triton implementation of Flash Attention 2.0 ☆22 · Updated last year
- ☆41 · Updated 5 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆67 · Updated 3 months ago
- ☆79 · Updated 2 months ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆32 · Updated 3 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆34 · Updated 8 months ago
- ☆63 · Updated 3 months ago
- Sequence-level 1F1B schedule for LLMs. ☆17 · Updated 5 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆74 · Updated 3 weeks ago
- Official PyTorch implementation of FlatQuant: Flatness Matters for LLM Quantization ☆57 · Updated this week
- Quantized Attention on GPU ☆29 · Updated this week
- A sparse attention kernel supporting mixed sparse patterns ☆52 · Updated 3 weeks ago
- ☆55 · Updated 5 months ago
- ☆11 · Updated last year
- ☆88 · Updated 2 months ago
- Implementation of Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting ☆41 · Updated 4 months ago
- Transformers components but in Triton ☆23 · Updated this week
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction". ☆35 · Updated 3 weeks ago
- ☆29 · Updated 5 months ago
- ☆22 · Updated 10 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆195 · Updated last week
- ☆70 · Updated 2 years ago
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆196 · Updated 2 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆146 · Updated 3 months ago