intel / llm-on-ray
Pretrain, finetune and serve LLMs on Intel platforms with Ray
☆126 · Updated this week
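As a rough illustration of the serving side of what llm-on-ray automates, the sketch below wraps a Hugging Face text-generation pipeline in a Ray Serve deployment on CPU. The model name, request schema, and deployment layout are illustrative assumptions, not llm-on-ray's actual API or configuration.

```python
# Minimal sketch (not llm-on-ray's actual API): a Hugging Face text-generation
# pipeline served with Ray Serve on CPU, the kind of pattern the project automates.
from ray import serve
from starlette.requests import Request
from transformers import pipeline


@serve.deployment(num_replicas=1)
class TextGenerator:
    def __init__(self):
        # Model choice is an illustrative assumption; device=-1 keeps the
        # pipeline on CPU (e.g. an Intel Xeon host).
        self.pipe = pipeline("text-generation", model="gpt2", device=-1)

    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        result = self.pipe(payload["prompt"], max_new_tokens=64)
        return {"generated_text": result[0]["generated_text"]}


# Deploys an HTTP endpoint (http://127.0.0.1:8000/ by default); keep the
# driver process alive for the endpoint to stay up.
serve.run(TextGenerator.bind())
```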
Alternatives and similar repositories for llm-on-ray:
Users interested in llm-on-ray are comparing it to the libraries listed below.
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving (see the sketch after this list). ☆65 · Updated last year
- Efficient and easy multi-instance LLM serving ☆398 · Updated this week
- A low-latency & high-throughput serving engine for LLMs ☆351 · Updated 2 weeks ago
- LLM Serving Performance Evaluation Harness ☆77 · Updated 2 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆804 · Updated this week
- Perplexity GPU Kernels ☆272 · Updated this week
- Materials for learning SGLang ☆396 · Updated last week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list). ☆99 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆67 · Updated this week
- GLake: optimizing GPU memory management and IO transmission. ☆456 · Updated last month
- Fast and memory-efficient exact attention ☆68 · Updated last week
- Large Language Model Text Generation Inference on Habana Gaudi ☆33 · Updated last month
- A large-scale simulation framework for LLM inference ☆371 · Updated 5 months ago
- Modular and structured prompt caching for low-latency LLM inference ☆92 · Updated 5 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆360 · Updated 2 weeks ago
- KV cache store for distributed LLM inference ☆165 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆575 · Updated 3 weeks ago
- NVIDIA NCCL Tests for Distributed Training ☆88 · Updated last week
- Serverless LLM Serving for Everyone. ☆463 · Updated last week
- The driver for LMCache core to run in vLLM ☆38 · Updated 3 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆296 · Updated 2 weeks ago
- CUDA checkpoint and restore utility ☆330 · Updated 3 months ago
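For the vLLM-with-Ray-Serve entry above, the integration pattern can be sketched roughly as follows. This is a generic, hedged illustration (single-GPU replica, the blocking offline vLLM engine, an assumed model name), not the linked repository's actual code.

```python
# Hedged sketch of pairing vLLM with Ray Serve (not the linked repo's code):
# one deployment owns a vLLM engine, Ray Serve handles HTTP routing and replicas.
from ray import serve
from starlette.requests import Request
from vllm import LLM, SamplingParams


@serve.deployment(ray_actor_options={"num_gpus": 1})
class VLLMDeployment:
    def __init__(self):
        # Model name is an illustrative assumption.
        self.llm = LLM(model="facebook/opt-125m")
        self.params = SamplingParams(temperature=0.8, max_tokens=128)

    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        outputs = self.llm.generate([payload["prompt"]], self.params)
        return {"text": outputs[0].outputs[0].text}


serve.run(VLLMDeployment.bind())
```

Production integrations typically use vLLM's asynchronous engine so requests can be batched continuously rather than blocking on the offline `LLM` class; the sketch trades that for brevity.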
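For the Roofline-Model comparison entry above, the underlying formula is simply attainable throughput = min(peak compute, arithmetic intensity × memory bandwidth). The toy sketch below evaluates it with assumed hardware numbers to show why low-intensity LLM decode is bandwidth-bound.

```python
# Roofline model sketch with assumed hardware numbers (illustrative, not measured):
# attainable_tflops = min(peak_tflops, arithmetic_intensity * memory_bandwidth)

def roofline(peak_tflops: float, bandwidth_tbps: float, intensity_flops_per_byte: float) -> float:
    """Attainable TFLOP/s for a kernel with the given FLOPs-per-byte intensity."""
    return min(peak_tflops, intensity_flops_per_byte * bandwidth_tbps)

# Decode-phase LLM inference at batch size 1 is memory-bound: with fp16 weights it
# performs roughly 1 FLOP per byte of weights read, so bandwidth sets the ceiling.
peak_tflops = 300.0    # assumed accelerator peak compute (TFLOP/s)
bandwidth_tbps = 2.0   # assumed HBM bandwidth (TB/s)
for intensity in (1, 10, 100, 1000):  # FLOPs per byte
    print(f"intensity={intensity:>4} FLOP/B -> "
          f"{roofline(peak_tflops, bandwidth_tbps, intensity):.0f} TFLOP/s attainable")
```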