Anonymous1252022 / Megatron-DeepSpeed
☆12 · Updated 10 months ago
Alternatives and similar repositories for Megatron-DeepSpeed
Users interested in Megatron-DeepSpeed are comparing it to the libraries listed below.
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆48 · Updated 9 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆108 · Updated last month
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆33 · Updated last week
- The evaluation framework for training-free sparse attention in LLMs ☆86 · Updated last month
- ☆123 · Updated 2 months ago
- ☆83 · Updated 6 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆92 · Updated last week
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆34 · Updated 4 months ago
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- Kinetics: Rethinking Test-Time Scaling Laws ☆70 · Updated 3 weeks ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆61 · Updated 9 months ago
- ☆81 · Updated last week
- ☆19 · Updated 7 months ago
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆50 · Updated 8 months ago
- The official implementation of "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free" ☆46 · Updated 2 months ago
- Work in progress. ☆70 · Updated last month
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, … ☆47 · Updated 3 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆49 · Updated 9 months ago
- Long Context Extension and Generalization in LLMs ☆58 · Updated 10 months ago
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆29 · Updated 2 months ago
- ☆54 · Updated 3 weeks ago
- ☆51 · Updated last month
- ☆23 · Updated last week
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆23 · Updated 8 months ago
- [NeurIPS 2024] Low-rank, memory-efficient optimizer without SVD ☆30 · Updated last month
- ☆15 · Updated 8 months ago
- This repo contains the source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆38 · Updated 11 months ago
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs"☆18Updated last month
- Beyond KV Caching: Shared Attention for Efficient LLMs☆19Updated last year
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification☆61Updated 3 weeks ago