JiahaoYao / awesome-ray
Ray - A curated list of resources: https://github.com/ray-project/ray
☆57 · Updated 3 months ago
Alternatives and similar repositories for awesome-ray:
Users interested in awesome-ray are comparing it to the libraries listed below.
- Notebooks for the O'Reilly book "Learning Ray" ☆295 · Updated last year
- Tracking Ray Enhancement Proposals ☆53 · Updated last month
- Efficiently tune any LLM from Hugging Face using distributed training (multiple GPUs) and DeepSpeed. Uses Ray AIR to orchestrate the … ☆56 · Updated last year
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆126 · Updated last week
- A suite of hands-on training materials showing how to scale CV, NLP, and time-series forecasting workloads with Ray. ☆392 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆157 · Updated 5 months ago
- Introduction to Ray Core Design Patterns and APIs. ☆68 · Updated last year
- Pygloo provides Python bindings for Gloo. ☆22 · Updated 2 months ago
- Distributed XGBoost on Ray ☆148 · Updated 10 months ago
- Ray-based Apache Beam runner ☆42 · Updated last year
- FIL backend for the Triton Inference Server ☆77 · Updated last week
- Serverless Python with Ray ☆55 · Updated 2 years ago
- ☆117 · Updated last year
- RayDP provides simple APIs for running Spark on Ray and integrating Spark with AI libraries. ☆332 · Updated 2 weeks ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆65 · Updated last year
- An introductory tutorial about leveraging Ray Core features for distributed patterns. ☆78 · Updated last year
- ☆23 · Updated this week
- Home for OctoML PyTorch Profiler ☆113 · Updated 2 years ago
- MLflow Deployment Plugin for Ray Serve ☆44 · Updated 3 years ago
- ☆50 · Updated 5 months ago
- Fine-tuning LLMs on Flyte and Union Cloud ☆28 · Updated last year
- ☆27 · Updated last year
- Distributed ML Optimizer ☆32 · Updated 3 years ago
- ☆45 · Updated this week
- The Triton backend for PyTorch TorchScript models. ☆148 · Updated this week
- Fine-tune an LLM to perform batch inference and online serving. ☆110 · Updated this week
- LLM Serving Performance Evaluation Harness ☆77 · Updated 2 months ago
- WIP. Veloce is a low-code Ray-based parallelization library that makes machine learning computation novel, efficient, and heterogeneous. ☆18 · Updated 2 years ago
- Deadline-based hyperparameter tuning on Ray Tune. ☆31 · Updated 5 years ago
- ☆250 · Updated last week