Pretrain, finetune and serve LLMs on Intel platforms with Ray
☆130 · Updated Sep 23, 2025
Alternatives and similar repositories for llm-on-ray
Users interested in llm-on-ray are comparing it to the libraries listed below.
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆14 · Updated Jan 8, 2026
- ☆15 · Updated Mar 3, 2025
- RayDP provides simple APIs for running Spark on Ray and integrating Spark with AI libraries. ☆371 · Updated Apr 10, 2026
- Optimized Spark package to accelerate machine learning algorithms in Apache Spark MLlib. ☆22 · Updated Mar 24, 2026
- RayLLM - LLMs on Ray (archived; read the README for more info). ☆1,267 · Updated Mar 13, 2025
- Spark* shuffle plugin to support shuffling data through a remote Hadoop-compatible file system, as opposed to vanilla Spark's local-dis… ☆21 · Updated Mar 15, 2024
- Document Automation Reference Kit ☆16 · Updated Jun 27, 2024
- A modular acceleration toolkit for big data analytic engines ☆66 · Updated May 6, 2024
- oneCCL Bindings for PyTorch* (deprecated) ☆104 · Updated Dec 31, 2025
- GLake: optimizing GPU memory management and IO transmission. ☆501 · Updated Mar 24, 2025
- ☆47 · Updated Jun 27, 2024
- ☆128 · Updated Dec 24, 2024
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆214 · Updated Sep 21, 2024
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Updated Feb 22, 2024
- Serving multiple LoRA-finetuned LLMs as one ☆1,156 · Updated May 8, 2024
- Yet another coding assistant powered by an LLM. ☆16 · Updated Sep 11, 2024
- Large language model fine-tuning capabilities based on cloud-native and distributed computing. ☆92 · Updated Feb 22, 2024
- HeliosArtifact ☆22 · Updated Sep 27, 2022
- A toolkit to run Ray applications on Kubernetes ☆2,476 · Updated this week
- Machine Learning Inference Graph Spec ☆21 · Updated Jul 27, 2019
- ☆13 · Updated Jan 7, 2025
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆78 · Updated Apr 6, 2024
- On Efficient Language and Vision Assistants for Visually-Situated Natural Language Understanding: What Matters in Reading and Reasoning, … ☆20 · Updated Mar 13, 2026
- ⚡ Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel Pl… ☆2,178 · Updated Oct 8, 2024
- Python tools ☆14 · Updated Oct 22, 2023
- Resources regarding evML (edge-verified machine learning) ☆23 · Updated Jan 4, 2025
- ☆64 · Updated Apr 9, 2024
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆119 · Updated Mar 13, 2024
- Paper and code for "New Directions in Cloud Programming", CIDR 2021 ☆11 · Updated Feb 17, 2021
- Easy, fast, and cheap pretraining, finetuning, and serving for everyone ☆314 · Updated Jul 16, 2025
- Perplexity GPU Kernels ☆570 · Updated Nov 7, 2025
- wirefisher: eBPF-powered traffic monitoring and control with precise per-process, IP-, and port-level filtering, plus built-in rate limiti… ☆38 · Updated Dec 26, 2025
- FlashInfer: Kernel Library for LLM Serving ☆5,544 · Updated this week
- Testing various methods of moving Arrow data between processes ☆16 · Updated Mar 29, 2023
- Mini-Engine Demonstration of Combining XeSS with VRS Tier 2. ☆14 · Updated Jan 26, 2026
- Mirror of Plan 9 4th Edition from p9f ☆14 · Updated Mar 23, 2021
- Efficient and easy multi-instance LLM serving ☆547 · Updated Mar 12, 2026
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated Jan 17, 2024
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆666 · Updated Jan 15, 2026