ray-project / ray-llm
RayLLM - LLMs on Ray (Archived). Read the README for more info.
☆1,260 · Updated 4 months ago
Alternatives and similar repositories for ray-llm
Users interested in ray-llm are comparing it to the libraries listed below.
- LLMPerf is a library for validating and benchmarking LLMs ☆970 · Updated 7 months ago
- Scale LLM Engine public repository ☆808 · Updated 2 weeks ago
- Serving multiple LoRA finetuned LLMs as one ☆1,078 · Updated last year
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,844 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,042 · Updated last month
- ☆464 · Updated last year
- A tiny library for coding with large language models. ☆1,235 · Updated last year
- A tool for evaluating LLMs ☆423 · Updated last year
- Distribute and run AI workloads magically in Python, like PyTorch for ML infra. ☆1,041 · Updated 2 weeks ago
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling ☆1,721 · Updated last year
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. ☆1,872 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆702 · Updated last year
- Examples of how to use LangChain and Ray ☆230 · Updated 2 years ago
- Chat language model that can use tools and interpret the results ☆1,574 · Updated this week
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,168 · Updated 9 months ago
- A high-performance inference system for large language models, designed for production environments. ☆457 · Updated last week
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆952 · Updated 9 months ago
- ☆461 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,048 · Updated last year
- Enforce the output format (JSON Schema, Regex, etc.) of a language model ☆1,858 · Updated 5 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆727 · Updated last year
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM ☆1,477 · Updated 3 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,891 · Updated last year
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,583 · Updated last year
- Efficient Retrieval Augmentation and Generation Framework ☆1,599 · Updated 6 months ago
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆591 · Updated last year
- Tune any FALCON in 4-bit ☆466 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,262 · Updated 4 months ago
- Guide for fine-tuning Llama/Mistral/CodeLlama models and more ☆613 · Updated 2 months ago
- A high-performance ML model serving framework that offers dynamic batching and CPU/GPU pipelines to fully exploit your compute machine ☆849 · Updated this week