ray-project / ray-llm
RayLLM - LLMs on Ray
☆1,257 · Updated 8 months ago
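For context on what "LLMs on Ray" means in practice, here is a minimal sketch of serving a text-generation model with Ray Serve, the serving layer RayLLM builds on. The class name, model choice, and deployment settings below are illustrative assumptions, not RayLLM's actual API:

```python
# Illustrative sketch only: a tiny text-generation deployment on Ray Serve.
# The class name, model choice, and settings are assumptions, not RayLLM's API.
from ray import serve
from transformers import pipeline


@serve.deployment(num_replicas=1)
class TextGenerator:
    def __init__(self):
        # gpt2 is used here only so the sketch runs on CPU.
        self._pipe = pipeline("text-generation", model="gpt2")

    async def __call__(self, request):
        prompt = (await request.json())["prompt"]
        return self._pipe(prompt, max_new_tokens=32)[0]["generated_text"]


# Starts an HTTP endpoint (by default at http://127.0.0.1:8000/).
serve.run(TextGenerator.bind())
```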
Alternatives and similar repositories for ray-llm:
Users interested in ray-llm are comparing it to the libraries listed below.
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,790 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆1,965 · Updated last week
- LLMPerf is a library for validating and benchmarking LLMs ☆749 · Updated 2 months ago
- Serving multiple LoRA finetuned LLMs as one ☆1,028 · Updated 9 months ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆2,362 · Updated last week
- Scale LLM Engine public repository ☆791 · Updated this week
- Python bindings for the Transformer models implemented in C/C++ using GGML library. ☆1,842 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆687 · Updated 10 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,194 · Updated 4 months ago
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆913 · Updated 3 months ago
- A tiny library for coding with large language models. ☆1,223 · Updated 7 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,426 · Updated 7 months ago
- Customizable implementation of the self-instruct paper. ☆1,038 · Updated 11 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,421 · Updated 10 months ago
- Efficient Retrieval Augmentation and Generation Framework ☆1,458 · Updated last month
- Enforce the output format (JSON Schema, Regex etc) of a language model ☆1,704 · Updated this week
- A high-performance inference system for large language models, designed for production environments. ☆413 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆1,946 · Updated last month
- Minimalistic large language model 3D-parallelism training ☆1,483 · Updated this week
- batched loras ☆338 · Updated last year
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,456 · Updated 6 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,823 · Updated last year
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆1,915 · Updated last month
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling ☆1,609 · Updated 7 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆717 · Updated 8 months ago
- A blazing fast inference solution for text embedding models ☆3,175 · Updated 3 weeks ago