intel / llm-on-ray
Pretrain, finetune and serve LLMs on Intel platforms with Ray
☆110 · Updated this week
Alternatives and similar repositories for llm-on-ray:
Users interested in llm-on-ray are comparing it to the libraries listed below.
- The driver for LMCache core to run in vLLM ☆26 · Updated last week
- Efficient and easy multi-instance LLM serving ☆288 · Updated this week
- Benchmark suite for LLMs from Fireworks.ai ☆65 · Updated this week
- LLM Serving Performance Evaluation Harness ☆66 · Updated 5 months ago
- A low-latency & high-throughput serving engine for LLMs ☆308 · Updated 2 weeks ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving; a minimal sketch of this pattern follows the list. ☆61 · Updated 10 months ago
- Materials for learning SGLang ☆253 · Updated last week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks; a worked example follows the list. ☆93 · Updated 11 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆56 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆285 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆88 · Updated this week
- Efficiently tune any LLM from HuggingFace using distributed training (multiple GPUs) and DeepSpeed. Uses Ray AIR to orchestrate the … ☆55 · Updated last year
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆107 · Updated 2 months ago
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆388 · Updated 3 months ago
- A large-scale simulation framework for LLM inference ☆321 · Updated 2 months ago
- Modular and structured prompt caching for low-latency LLM inference ☆87 · Updated 3 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆293 · Updated 7 months ago
- A high-performance inference system for large language models, designed for production environments. ☆411 · Updated this week
- Making Long-Context LLM Inference 10x Faster and 10x Cheaper ☆459 · Updated this week
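
Several entries above pair vLLM with Ray Serve, the same stack llm-on-ray builds on. The sketch below shows the general shape of that integration; it is a minimal, hypothetical example rather than code from any listed repository, and the model name and sampling parameters are placeholder choices.

```python
# Minimal sketch: serving a vLLM engine behind a Ray Serve HTTP deployment.
# Illustrative only; not taken from any repository listed above.
from ray import serve
from vllm import LLM, SamplingParams


@serve.deployment(ray_actor_options={"num_gpus": 1})
class VLLMDeployment:
    def __init__(self, model: str = "facebook/opt-125m"):  # placeholder model
        # Load the model once per replica; vLLM manages KV-cache memory itself.
        self.llm = LLM(model=model)

    async def __call__(self, request):
        # Ray Serve passes a Starlette Request; expect {"prompt": "..."} JSON.
        prompt = (await request.json())["prompt"]
        params = SamplingParams(temperature=0.8, max_tokens=64)
        outputs = self.llm.generate([prompt], params)
        return {"text": outputs[0].outputs[0].text}


app = VLLMDeployment.bind()
# serve.run(app)  # exposes the deployment at http://127.0.0.1:8000/
```

What the real projects add on top of this skeleton is the hard part: continuous batching across requests, streaming responses, and replica autoscaling.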
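
The Roofline Model entry above reduces to one formula: time per step is bounded below by max(FLOPs / peak FLOP/s, bytes moved / peak bandwidth). A back-of-envelope sketch, using nominal (not measured) A100 datasheet numbers and an arbitrary example model size:

```python
# Hedged roofline estimate for one decode step; hardware numbers are nominal
# datasheet values and the model size is an arbitrary example.
def roofline_time(flops: float, bytes_moved: float,
                  peak_flops: float, peak_bw: float) -> float:
    """Lower-bound step time: the slower of compute time and memory time."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

params = 7e9                 # 7B-parameter model, batch size 1
weight_bytes = params * 2    # fp16 weights, read once per generated token
flops = 2 * params           # ~2 FLOPs (multiply-add) per parameter per token
a100_peak_flops = 312e12     # A100 fp16 tensor-core peak
a100_peak_bw = 2.0e12        # A100 HBM bandwidth, ~2 TB/s

t = roofline_time(flops, weight_bytes, a100_peak_flops, a100_peak_bw)
print(f"~{t * 1e3:.1f} ms/token at {flops / weight_bytes:.1f} FLOPs/byte")
# ~7.0 ms/token at 1.0 FLOPs/byte
```

At ~1 FLOP/byte, single-stream decoding sits far below the A100's ~156 FLOPs/byte ridge point, so it is memory-bandwidth-bound; that is the regime most of the serving engines listed here optimize.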