intel / llm-on-ray
Pretrain, finetune and serve LLMs on Intel platforms with Ray
☆121, updated 3 weeks ago
Alternatives and similar repositories for llm-on-ray:
Users interested in llm-on-ray are comparing it to the libraries listed below.
- Efficient and easy multi-instance LLM serving (☆332, updated this week)
- A low-latency & high-throughput serving engine for LLMs (☆319, updated last month)
- Materials for learning SGLang (☆335, updated 3 weeks ago)
- The driver for LMCache core to run in vLLM (☆34, updated last month)
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving (☆64, updated 11 months ago)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆58, updated this week)
- Benchmark suite for LLMs from Fireworks.ai (☆69, updated last month)
- LLM Serving Performance Evaluation Harness (☆70, updated 3 weeks ago)
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (☆93, updated last year)
- Dynamic Memory Management for Serving LLMs without PagedAttention (☆310, updated this week)
- Latency and Memory Analysis of Transformer Models for Training and Inference (☆400, updated 2 weeks ago)
- GLake: optimizing GPU memory management and IO transmission (☆436, updated 3 months ago)
- Disaggregated serving system for Large Language Models (LLMs) (☆495, updated 7 months ago)
- A large-scale simulation framework for LLM inference (☆346, updated 4 months ago)
- A throughput-oriented high-performance serving framework for LLMs (☆766, updated 5 months ago)
- A collection of all available inference solutions for LLMs (☆81, updated 2 weeks ago)
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… (☆289, updated last month)
- Efficiently tune any LLM from HuggingFace using distributed training (multiple GPUs) and DeepSpeed. Uses Ray AIR to orchestrate the … (☆55, updated last year)
- NVIDIA NCCL Tests for Distributed Training (☆84, updated this week)
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable (☆149, updated 5 months ago)
- Modular and structured prompt caching for low-latency LLM inference (☆89, updated 4 months ago)