zhisbug / ray-scalable-ml-design
Some microbenchmarks and design docs before commencement
☆12 · Updated 4 years ago
Alternatives and similar repositories for ray-scalable-ml-design
Users interested in ray-scalable-ml-design are comparing it to the libraries listed below.
- WIP. Veloce is a low-code Ray-based parallelization library that makes machine learning computation novel, efficient, and heterogeneous. ☆17 · Updated 3 years ago
- A collection of reproducible inference engine benchmarks ☆38 · Updated 9 months ago
- Benchmark for machine learning model online serving (LLM, embedding, Stable Diffusion, Whisper) ☆28 · Updated 2 years ago
- Distributed ML Optimizer ☆35 · Updated 4 years ago
- Make Triton easier ☆50 · Updated last year
- Deadline-based hyperparameter tuning on Ray Tune ☆32 · Updated 6 years ago
- A lightweight, user-friendly data plane for LLM training ☆38 · Updated 4 months ago
- A Ray-based data loader with per-epoch shuffling and configurable pipelining, for shuffling and loading training data for distributed tra… ☆18 · Updated 3 years ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆69 · Updated last year
- An experimental implementation of compiler-driven automatic sharding of models across a given device mesh ☆51 · Updated this week
- ☆10 · Updated 2 years ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- This repository contains statistics about AI infrastructure products ☆17 · Updated 11 months ago
- Hyperparameter tuning via uncertainty modeling ☆49 · Updated last year
- Tutorial on distributed DRL with Ray and TensorFlow ☆10 · Updated 6 years ago
- ☆43 · Updated 4 months ago
- Code for the paper "Accessing higher dimensions for unsupervised word translation" ☆22 · Updated 2 years ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆70 · Updated 2 weeks ago
- ☆31 · Updated 9 months ago
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated last year
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆32 · Updated 7 months ago
- [NeurIPS 2022] DreamShard: Generalizable Embedding Table Placement for Recommender Systems ☆29 · Updated 2 years ago
- sensAI: ConvNets Decomposition via Class Parallelism for Fast Inference on Live Data ☆66 · Updated last year
- FlexAttention with FlashAttention-3 support ☆27 · Updated last year
- PyTorch-centric eager-mode debugger ☆48 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆164 · Updated 2 weeks ago
- Using FlexAttention to compute attention with different masking patterns (a minimal usage sketch follows this list) ☆47 · Updated last year
- [NeurIPS 2025] Scaling Speculative Decoding with Lookahead Reasoning ☆61 · Updated 2 months ago
- Depicts the GPU memory footprint during PyTorch DNN training ☆11 · Updated 3 years ago
- Benchmarking some transformer deployments ☆26 · Updated last month
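The FlexAttention masking entry above refers to PyTorch's stock `flex_attention` API (torch >= 2.5), which expresses masking patterns as small Python predicate functions. The following is a minimal sketch of a causal mask against that API, not code taken from the listed repository; the tensor shapes and the causal pattern are assumptions chosen for illustration.

```python
# Minimal sketch: masked attention via PyTorch's FlexAttention API (torch >= 2.5).
# Illustrative only -- not code from the repository listed above; shapes and the
# causal mask are assumptions. A CUDA device is assumed for best support.
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

device = "cuda" if torch.cuda.is_available() else "cpu"
B, H, S, D = 2, 4, 256, 64  # batch, heads, sequence length, head dim

def causal(b, h, q_idx, kv_idx):
    # A mask_mod returns True where attention is allowed: here each query
    # position may attend only to keys at the same or earlier positions.
    return q_idx >= kv_idx

# Precompute a block-sparse mask once; B=None / H=None broadcast the mask
# over batch and heads, and it can be reused across layers.
block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=S, KV_LEN=S, device=device)

q, k, v = (torch.randn(B, H, S, D, device=device) for _ in range(3))
out = flex_attention(q, k, v, block_mask=block_mask)
print(out.shape)  # torch.Size([2, 4, 256, 64])
```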