RayLLM - LLMs on Ray (Archived). Read the README for more info.
☆1,266 · last updated Mar 13, 2025
Alternatives and similar repositories for ray-llm
Users interested in ray-llm are comparing it to the libraries listed below.
- Large Language Model Text Generation Inference ☆10,812 · last updated Jan 8, 2026
- Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, Slurm, 20+ cl… ☆9,664 · updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,101 · last updated Jun 30, 2025
- A toolkit to run Ray applications on Kubernetes ☆2,388 · updated this week
- LLMPerf is a library for validating and benchmarking LLMs ☆1,095 · last updated Dec 9, 2024
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆74,135 · updated this week
- Run any open-source LLMs, such as DeepSeek and Llama, as an OpenAI-compatible API endpoint in the cloud. ☆12,174 · last updated Mar 16, 2026
- A comprehensive guide to building RAG-based LLM applications for production. ☆1,856 · last updated Aug 2, 2024
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,201 · last updated Jul 11, 2024
- Serving multiple LoRA-finetuned LLMs as one ☆1,148 · last updated May 8, 2024
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM ☆1,479 · last updated May 1, 2025
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆24,829 · updated this week
- Training and serving large-scale neural networks with auto parallelization. ☆3,187 · last updated Dec 9, 2023
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,169 · updated this week
- Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads. ☆41,799 · updated this week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆130 · last updated Sep 23, 2025
- Examples on how to use LangChain and Ray ☆232 · last updated Jun 14, 2023
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,958 · updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,469 · last updated Jul 17, 2025
- Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls) ☆12,774 · last updated Mar 11, 2026
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,478 · last updated Jun 7, 2025
- Structured Outputs ☆13,588 · updated this week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,903 · last updated Jan 21, 2024
- Scale LLM Engine public repository ☆822 · updated this week
- A guidance language for controlling large language models. ☆21,356 · updated this week
- Efficiently tune any LLM from HuggingFace using distributed training (multiple GPUs) and DeepSpeed. Uses Ray AIR to orchestrate the … ☆60 · last updated Jun 20, 2023
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,711 · updated this week
- Semantic cache for LLMs. Fully integrated with LangChain and llama_index. ☆7,964 · last updated Jul 11, 2025
- Tracking Ray Enhancement Proposals ☆69 · last updated Dec 17, 2025
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,739 · last updated May 21, 2025
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,864 · updated this week
- The Official Python Client for Lamini's API ☆2,543 · last updated Apr 7, 2025
- DSPy: The framework for programming—not prompting—language models ☆33,038 · updated this week
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,912 · last updated Sep 30, 2023
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,445 · last updated Jun 2, 2025
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,858 · last updated Jun 10, 2024
- Universal LLM Deployment Engine with ML Compilation ☆22,246 · last updated Mar 18, 2026
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,722 · last updated Jun 25, 2024
- A blazing fast inference solution for text embeddings models ☆4,600 · last updated Mar 13, 2026
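A practical note on the serving engines above: several of them (vLLM, OpenLLM, SGLang, and RayLLM itself) expose an OpenAI-compatible `/v1/chat/completions` HTTP API, so one client works across all of them. A minimal sketch of the request and response shape that API uses, with no network calls; the model name here is an illustrative assumption, not tied to any specific engine:

```python
import json

def build_chat_request(model: str, prompt: str, max_tokens: int = 64) -> str:
    """Serialize a chat-completions request body in the OpenAI-compatible
    shape accepted by vLLM, OpenLLM, SGLang, and similar servers."""
    body = {
        "model": model,  # illustrative; each server defines its own model IDs
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(body)

def extract_reply(response_json: str) -> str:
    """Pull the assistant text out of a chat-completions response."""
    data = json.loads(response_json)
    return data["choices"][0]["message"]["content"]

# A response in the standard shape, truncated to the relevant fields:
sample_response = json.dumps(
    {"choices": [{"message": {"role": "assistant", "content": "Hello!"}}]}
)
print(extract_reply(sample_response))  # prints: Hello!
```

Because the wire format is shared, switching between the engines listed above is often just a change of base URL, which is one reason this comparison list is dominated by OpenAI-compatible servers.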