ray-project / ray-llm
RayLLM - LLMs on Ray
☆1,262 · Updated 9 months ago
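RayLLM deployments expose an OpenAI-compatible HTTP API, so a served model can be queried with the standard `openai` client. The sketch below is a minimal example under assumed settings: a local deployment listening on `http://localhost:8000/v1` and a placeholder model id; substitute whatever your Serve config actually deploys.

```python
# Minimal sketch: querying a RayLLM deployment through its OpenAI-compatible endpoint.
# The base_url and model id are assumptions; use the address and model your config serves.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="meta-llama/Llama-2-7b-chat-hf",  # placeholder model id
    messages=[{"role": "user", "content": "What is Ray Serve?"}],
    temperature=0.2,
)
print(response.choices[0].message.content)
```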
Alternatives and similar repositories for ray-llm:
Users interested in ray-llm are comparing it to the libraries listed below.
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆1,982 · Updated 2 weeks ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,798 · Updated last year
- Serving multiple LoRA finetuned LLM as one ☆1,036 · Updated 10 months ago
- LLMPerf is a library for validating and benchmarking LLMs ☆809 · Updated 3 months ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆2,391 · Updated last week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,210 · Updated last week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,455 · Updated 8 months ago
- Scale LLM Engine public repository ☆792 · Updated this week
- Customizable implementation of the self-instruct paper. ☆1,039 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆690 · Updated 11 months ago
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings (see the sketch after this list) ☆1,920 · Updated last month
- A tiny library for coding with large language models. ☆1,224 · Updated 8 months ago
- Python bindings for Transformer models implemented in C/C++ using the GGML library (see the sketch after this list). ☆1,850 · Updated last year
- Official repository for LongChat and LongEval ☆516 · Updated 9 months ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization, with a 2x speedup during inference (see the sketch after this list). ☆2,005 · Updated last week
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method. ☆1,450 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,443 · Updated 10 months ago
- Examples on how to use LangChain and Ray ☆226 · Updated last year
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the diverse strengths of multiple open-source large language models. ☆920 · Updated 4 months ago
- Minimalistic large language model 3D-parallelism training ☆1,675 · Updated this week
- batched loras ☆338 · Updated last year
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,053 · Updated 11 months ago
- Efficient Retrieval Augmentation and Generation Framework ☆1,480 · Updated 2 months ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆680 · Updated 7 months ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Flax. ☆2,460 · Updated 7 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,681 · Updated 2 months ago
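For the instruction-finetuned embeddings entry above ([ACL 2023] One Embedder, Any Task), a minimal sketch assuming the `InstructorEmbedding` package and the `hkunlp/instructor-large` checkpoint; each input pairs a task instruction with the text to embed, and the instructions shown here are illustrative.

```python
# Minimal sketch of INSTRUCTOR embeddings (pip install InstructorEmbedding sentence-transformers).
# The checkpoint name and instructions are placeholders; any instructor-* model works.
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR("hkunlp/instructor-large")

# Each item is [task instruction, text]; the instruction conditions the embedding space.
pairs = [
    ["Represent the science title for retrieval:", "Action of caffeine on adenosine receptors"],
    ["Represent the finance statement for retrieval:", "Quarterly revenue grew 12% year over year"],
]
embeddings = model.encode(pairs)
print(embeddings.shape)  # e.g. (2, 768) for instructor-large
```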
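For the GGML Python-bindings entry (ctransformers), a minimal sketch assuming a local GGML checkpoint; the file path and `model_type` are placeholders to replace with your own model.

```python
# Minimal sketch of ctransformers: load a GGML model and generate text.
# "/path/to/model.ggml.bin" and model_type="llama" are placeholders for whatever
# GGML checkpoint you actually have.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained("/path/to/model.ggml.bin", model_type="llama")
print(llm("AI is going to", max_new_tokens=64))
```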
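For the AutoAWQ entry, a minimal sketch of its 4-bit quantization flow, following the pattern documented in its README; the source model id, output path, and quantization settings are assumptions to adapt to your own model.

```python
# Minimal sketch of AWQ 4-bit quantization with AutoAWQ; model and path names are placeholders.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-Instruct-v0.2"   # placeholder source model
quant_path = "mistral-7b-instruct-awq"              # placeholder output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Calibrate and quantize, then save the 4-bit weights alongside the tokenizer.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```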