ISEEKYAN / mbridge
Bridge Megatron-Core to Hugging Face/Reinforcement Learning
☆103 · Updated this week
Alternatives and similar repositories for mbridge
Users interested in mbridge are comparing it to the libraries listed below.
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆155 · Updated last week
- Best/better practices for Megatron on veRL and a tuning guide ☆82 · Updated 3 weeks ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆108 · Updated 4 months ago
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆64 · Updated this week
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… ☆61 · Updated last week
- Estimate MFU for DeepSeekV3 ☆24 · Updated 7 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆214 · Updated last year
- ☆41 · Updated 3 months ago
- A simple calculation for LLM MFU (a minimal MFU sketch follows this list). ☆44 · Updated 5 months ago
- Async pipelined version of Verl ☆117 · Updated 4 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆109 · Updated 5 months ago
- ☆276 · Updated last month
- 16-fold memory access reduction with nearly no loss ☆104 · Updated 5 months ago
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ☆201 · Updated 6 months ago
- Allow torch tensor memory to be released and resumed later ☆115 · Updated 2 weeks ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆238 · Updated last month
- Utility scripts for PyTorch (e.g. a memory profiler that understands more low-level allocations such as NCCL) ☆49 · Updated 3 weeks ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆327 · Updated last month
- ☆42 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆138 · Updated this week
- ☆78 · Updated 4 months ago
- ☆117 · Updated 2 months ago
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆136 · Updated 3 months ago
- Official implementation for Yuan & Liu & Zhong et al., KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark o… ☆82 · Updated 6 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆110 · Updated 3 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆165 · Updated last year
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆147 · Updated 3 weeks ago
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 ☆207 · Updated 8 months ago
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exitin… ☆59 · Updated last year
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆58 · Updated 9 months ago
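
Two of the entries above concern MFU (Model FLOPs Utilization) estimation. As a rough, library-independent reference, the sketch below computes MFU from the common ~6 · N FLOPs-per-token approximation for a dense transformer; the function name, parameters, and example numbers are illustrative assumptions, not code from any repository listed here (an MoE model such as DeepSeekV3 would count activated parameters per token instead of total parameters).

```python
# Minimal MFU estimate: achieved training FLOPs/s divided by hardware peak FLOPs/s.
# Assumption: ~6 * N FLOPs per token (forward + backward) for a dense transformer,
# ignoring attention FLOPs. All names and numbers below are illustrative.

def estimate_mfu(
    num_params: float,             # trainable parameters N (activated params for MoE)
    tokens_per_second: float,      # measured end-to-end training throughput
    peak_flops_per_second: float,  # hardware peak, e.g. per-GPU peak * number of GPUs
) -> float:
    achieved_flops_per_second = 6.0 * num_params * tokens_per_second
    return achieved_flops_per_second / peak_flops_per_second


if __name__ == "__main__":
    # Hypothetical example: a 7B dense model at 4,000 tokens/s on one GPU
    # with ~312 TFLOP/s of BF16 peak.
    mfu = estimate_mfu(num_params=7e9, tokens_per_second=4_000, peak_flops_per_second=312e12)
    print(f"MFU ≈ {mfu:.1%}")  # ≈ 53.8% under these assumed numbers
```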