radixark / miles
Miles is an enterprise-facing reinforcement learning framework for large-scale MoE post-training and production workloads, forked from and co-evolving with slime.
☆744 Updated last week
Alternatives and similar repositories for miles
Users interested in miles are comparing it to the libraries listed below
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆891 Updated last week
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆264 Updated last month
- ☆952 Updated 2 months ago
- Training library for Megatron-based models with bidirectional Hugging Face conversion capability ☆386 Updated this week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark + Toolkit with Torch -> CUDA (+ more DSLs) ☆771 Updated last week
- PyTorch-native post-training at scale ☆600 Updated this week
- Accelerating MoE with IO and Tile-aware Optimizations ☆553 Updated last week
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆639 Updated this week
- LLM KV cache compression made easy ☆858 Updated this week
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆287 Updated 2 months ago
- Scalable toolkit for efficient model reinforcement ☆1,252 Updated this week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆272 Updated last week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,486 Updated this week
- JAX backend for SGL ☆227 Updated this week
- An early research-stage expert-parallel load balancer for MoE models based on linear programming. ☆491 Updated 2 months ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆252 Updated this week
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆351 Updated this week
- PyTorch Distributed-native training library for LLMs/VLMs with OOTB Hugging Face support ☆250 Updated last week
- Block Diffusion for Ultra-Fast Speculative Decoding ☆349 Updated 3 weeks ago
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning ☆185 Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆379 Updated this week
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆333 Updated 2 months ago
- [ICLR 2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆247 Updated last year
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆467 Updated 8 months ago
- OpenTinker is an RL-as-a-Service infrastructure for foundation models ☆599 Updated this week
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆218 Updated 7 months ago
- Perplexity GPU Kernels ☆554 Updated 2 months ago
- kernels, of the mega variety ☆652 Updated 3 months ago
- Materials for learning SGLang ☆725 Updated 3 weeks ago
- Async RL Training at Scale ☆1,020 Updated this week