Yard1 / Ray-DeepSpeed-Inference
☆17 · Updated 2 years ago
Alternatives and similar repositories for Ray-DeepSpeed-Inference
Users interested in Ray-DeepSpeed-Inference are comparing it to the libraries listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated last year
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆251 · Updated last year
- Official repository for LongChat and LongEval ☆534 · Updated last year
- Comparison of Language Model Inference Engines ☆239 · Updated last year
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving (see the sketch after this list) ☆78 · Updated last year
- OpenAI compatible API for TensorRT LLM triton backend ☆220 · Updated last year
- LLM Inference benchmark ☆433 · Updated last year
- A text generation method that returns a generator, streaming out each token in real time during inference, based on Huggingface/… ☆96 · Updated last year
- batched loras ☆349 · Updated 2 years ago
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 ☆215 · Updated 4 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆89 · Updated this week
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆255 · Updated last year
- Implementation of the LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens Paper ☆150 · Updated last year
- Easy and Efficient Quantization for Transformers ☆204 · Updated last week
- Open Source WizardCoder Dataset ☆163 · Updated 2 years ago
- ☆206 · Updated 9 months ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆224 · Updated 2 years ago
- ☆29 · Updated last year
- A high-performance inference system for large language models, designed for production environments. ☆491 · Updated last month
- ☆56 · Updated last year
- Light local website for displaying performances from different chat models. ☆87 · Updated 2 years ago
- Codes for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆370 · Updated last year
- ☆125 · Updated last year
- Efficient AI Inference & Serving ☆479 · Updated 2 years ago
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆318 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- vLLM Router ☆54 · Updated last year
- Imitate OpenAI with Local Models ☆90 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆140 · Updated last year
- Dynamic batching library for Deep Learning inference. Tutorials for LLM, GPT scenarios. ☆106 · Updated last year
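Several entries above combine Ray with an inference engine, the same pattern Ray-DeepSpeed-Inference applies to DeepSpeed. The snippet below is a minimal, hypothetical sketch of that pattern for the vLLM + Ray Serve entry: a Ray Serve deployment wrapping a vLLM engine. The model name, request schema, and deployment options are illustrative assumptions, not the API of any repository listed here.

```python
# Hypothetical sketch: serve an LLM by wrapping a vLLM engine in a Ray Serve
# deployment. Model name, route, and JSON schema are illustrative only.
from ray import serve
from starlette.requests import Request
from vllm import LLM, SamplingParams


@serve.deployment(num_replicas=1, ray_actor_options={"num_gpus": 1})
class VLLMDeployment:
    def __init__(self, model_name: str = "facebook/opt-125m"):
        # Load the model once per replica; vLLM manages GPU/KV-cache memory internally.
        self.llm = LLM(model=model_name)

    async def __call__(self, request: Request) -> dict:
        body = await request.json()
        params = SamplingParams(
            max_tokens=body.get("max_tokens", 128),
            temperature=body.get("temperature", 0.7),
        )
        # Simplified blocking call; a production service would use vLLM's async engine.
        outputs = self.llm.generate([body["prompt"]], params)
        return {"text": outputs[0].outputs[0].text}


app = VLLMDeployment.bind()

if __name__ == "__main__":
    # Deploys the application locally; Ray Serve exposes it over HTTP on port 8000.
    serve.run(app)
```

Once running, the deployment could be queried with a plain HTTP POST (e.g. `curl -X POST localhost:8000 -d '{"prompt": "Hello"}'`); replica count and GPU allocation are scaled through the `serve.deployment` options rather than application code.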