radixark / miles
Miles is an enterprise-facing reinforcement learning framework for LLM and VLM post-training, forked from and co-evolving with slime.
☆830 · Updated last week
Alternatives and similar repositories for miles
Users interested in miles are comparing it to the libraries listed below.
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆902 · Updated last week
- ☆961 · Updated 3 months ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆269 · Updated last week
- Accelerating MoE with IO and Tile-aware Optimizations ☆569 · Updated 3 weeks ago
- PyTorch-native post-training at scale ☆613 · Updated this week
- Training library for Megatron-based models with bidirectional Hugging Face conversion capability ☆419 · Updated this week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark + Toolkit with Torch -> CUDA (+ more DSLs) ☆792 · Updated 3 weeks ago
- Scalable toolkit for efficient model reinforcement ☆1,293 · Updated last week
- LLM KV cache compression made easy ☆876 · Updated 2 weeks ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆676 · Updated last week
- JAX backend for SGL ☆234 · Updated this week
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning ☆191 · Updated last week
- Block Diffusion for Ultra-Fast Speculative Decoding ☆459 · Updated last week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆391 · Updated this week
- [ICLR2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆248 · Updated last year
- An early research stage expert-parallel load balancer for MoE models based on linear programming. ☆495 · Updated 2 months ago
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆289 · Updated 3 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,547 · Updated this week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆263 · Updated this week
- kernels, of the mega variety ☆665 · Updated last week
- Perplexity GPU Kernels ☆560 · Updated 3 months ago
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆334 · Updated 3 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆361 · Updated this week
- PyTorch Distributed-native training library for LLMs/VLMs with OOTB Hugging Face support ☆288 · Updated this week
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆825 · Updated 2 weeks ago
- dInfer: An Efficient Inference Framework for Diffusion Language Models ☆410 · Updated last month
- Materials for learning SGLang ☆738 · Updated last month
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆273 · Updated last week
- Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling ☆468 · Updated 8 months ago
- OpenTinker is an RL-as-a-Service infrastructure for foundation models ☆625 · Updated 2 weeks ago