liangyuwang / Tiny-Megatron
Tiny-Megatron, a minimalistic re-implementation of the Megatron library
☆21 · Updated 4 months ago
Alternatives and similar repositories for Tiny-Megatron
Users interested in Tiny-Megatron are comparing it to the libraries listed below.
- Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library ☆49 · Updated 4 months ago
- qwen-nsa ☆87 · Updated 3 months ago
- A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention ☆270 · Updated last month
- Analyzing problems in AI with math and code ☆27 · Updated 5 months ago
- ☆216 · Updated last month
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆283 · Updated 2 months ago
- Efficient Mixture of Experts for LLM Paper List ☆154 · Updated 3 months ago
- DeepSeek Native Sparse Attention PyTorch implementation ☆111 · Updated 3 weeks ago
- The Official Implementation of Ada-KV [NeurIPS 2025] ☆123 · Updated last month
- Official implementation of ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking". ☆47 · Updated last year
- A lightweight inference engine built for block diffusion models ☆39 · Updated last month
- Pipeline-Parallel Lecture: Simplest DualPipe Implementation. ☆31 · Updated 4 months ago
- [ASPLOS'26] Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter ☆126 · Updated last month
- APRIL: Active Partial Rollouts in Reinforcement Learning to Tame Long-tail Generation. A system-level optimization for scalable LLM tra… ☆45 · Updated 3 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆193 · Updated last month
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆145 · Updated 3 weeks ago
- ☆150 · Updated 6 months ago
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… ☆98 · Updated 4 months ago
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆160 · Updated 3 months ago
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… ☆53 · Updated 2 months ago
- [Archived] For the latest updates and community contributions, please visit: https://gitcode.com/Ascend/TransferQueue ☆12 · Updated last week
- 16-fold memory access reduction with nearly no loss ☆109 · Updated 9 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆263 · Updated 6 months ago
- [NeurIPS 2024] The official implementation of ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification ☆32 · Updated 9 months ago
- ☆444 · Updated 5 months ago
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆184 · Updated 3 months ago
- ☆45 · Updated last year
- 青稞Talk ☆184 · Updated last week
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆83 · Updated last month
- ☆41 · Updated 10 months ago