ray-project / ray-llm
RayLLM - LLMs on Ray (archived). Read the README for more info.
☆1,261 · Updated 3 months ago
Alternatives and similar repositories for ray-llm
Users interested in ray-llm are comparing it to the libraries listed below.
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,835 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,020 · Updated 2 months ago
- Serving multiple LoRA-finetuned LLMs as one ☆1,066 · Updated last year
- LLMPerf is a library for validating and benchmarking LLMs ☆940 · Updated 6 months ago
- Scale LLM Engine public repository ☆803 · Updated last week
- ☆462 · Updated last year
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. ☆1,867 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,497 · Updated last year
- Extend existing LLMs well beyond their original training length with constant memory usage, without retraining ☆698 · Updated last year
- A tiny library for coding with large language models. ☆1,232 · Updated 11 months ago
- ⚡ Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel Pl… ☆2,169 · Updated 8 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,258 · Updated 3 months ago
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆946 · Updated 8 months ago
- A comprehensive guide to building RAG-based LLM applications for production. ☆1,798 · Updated 10 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,549 · Updated 11 months ago
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling ☆1,698 · Updated 11 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆724 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,044 · Updated last year
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,193 · Updated last month
- ☆543 · Updated 6 months ago
- Enforce the output format (JSON Schema, regex, etc.) of a language model ☆1,820 · Updated 3 months ago
- Batched LoRAs ☆343 · Updated last year
- Chat language model that can use tools and interpret the results ☆1,563 · Updated this week
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆1,966 · Updated 5 months ago
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆2,479 · Updated 10 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,882 · Updated last year
- A high-performance inference system for large language models, designed for production environments. ☆447 · Updated this week
- ☆455 · Updated last year
- LOMO: LOw-Memory Optimization ☆987 · Updated 11 months ago
- A language for constraint-guided and efficient LLM programming. ☆3,968 · Updated last month