Yard1 / Ray-DeepSpeed-Inference
☆18 · Updated last year
Related projects
Alternatives and complementary repositories for Ray-DeepSpeed-Inference
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆124 · Updated 3 months ago
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆73 · Updated 9 months ago
- Implementation of speculative sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind (see the accept/reject sketch after this list) ☆79 · Updated 8 months ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving (see the serving sketch after this list) ☆53 · Updated 7 months ago
- Experiments on speculative sampling with Llama models ☆117 · Updated last year
- Unofficial implementation of AlpaGasus ☆84 · Updated last year
- ☆33 · Updated 6 months ago
- AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark ☆105 · Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆129 · Updated 4 months ago
- ☆199 · Updated this week
- Retrieves parquet files from Hugging Face, then identifies and quantifies junky data, duplication, contamination, and biased content in datasets ☆50 · Updated last year
- A text generation method that returns a generator, streaming out each token in real time during inference; based on Huggingface/… ☆96 · Updated 8 months ago
- ☆156 · Updated last month
- Benchmark suite for LLMs from Fireworks.ai ☆58 · Updated this week
- Evaluation for AI apps and agents ☆35 · Updated 9 months ago
- A pipeline for LLM knowledge distillation ☆77 · Updated 3 months ago
- REST: Retrieval-Based Speculative Decoding (NAACL 2024) ☆175 · Updated last month
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆200 · Updated 5 months ago
- ☆283 · Updated last month
- Expert-Specialized Fine-Tuning ☆144 · Updated last month
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆358 · Updated 4 months ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs) ☆236 · Updated 7 months ago
- ☆190 · Updated this week
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models ☆217 · Updated 2 weeks ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆278 · Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆127 · Updated 5 months ago
- Data preparation code for the Amber 7B LLM ☆82 · Updated 6 months ago
- Reformatted Alignment ☆112 · Updated last month
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆76 · Updated last month
- ☆73 · Updated 10 months ago
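
Several entries above (the DeepMind implementation, the Llama experiments, REST, and Ouroboros) revolve around the same core idea: a cheap draft model proposes tokens and the target model verifies them. Below is a minimal PyTorch sketch of the accept/reject rule from the DeepMind paper; the function name and shapes are illustrative assumptions, not code from any repo listed here.

```python
# Sketch of the speculative-sampling accept/reject rule from
# "Accelerating Large Language Model Decoding with Speculative Sampling"
# (DeepMind). Illustrative only; not any listed repo's implementation.
import torch


def accept_or_resample(p: torch.Tensor, q: torch.Tensor, x: int) -> tuple[bool, int]:
    """Accept draft token x with probability min(1, p[x]/q[x]); else resample.

    p: target-model next-token distribution over the vocabulary (sums to 1).
    q: draft-model next-token distribution over the vocabulary (sums to 1).
    x: token id proposed by sampling from q.
    """
    if torch.rand(()) < torch.clamp(p[x] / q[x], max=1.0):
        return True, x
    # On rejection, sample from the residual distribution max(0, p - q),
    # renormalized. This correction keeps the overall output distributed
    # exactly according to the target model p.
    residual = torch.clamp(p - q, min=0.0)
    residual = residual / residual.sum()
    return False, int(torch.multinomial(residual, 1))
```

Accepting with probability min(1, p[x]/q[x]) and resampling rejections from the normalized residual is what guarantees the combined scheme samples exactly from the target distribution, so the decoding speedup comes with no change in output quality.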
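
For the vLLM + Ray Serve entry, the general integration pattern is to wrap a vLLM engine in a Ray Serve deployment so replicas can be scaled across GPUs behind one HTTP endpoint. A minimal sketch of that pattern follows; the model name, resource settings, and request schema are assumptions for illustration, not the repo's actual code.

```python
# Hypothetical sketch of serving vLLM behind Ray Serve.
# Model name, replica count, and request schema are illustrative assumptions.
from ray import serve
from vllm import LLM, SamplingParams


@serve.deployment(num_replicas=1, ray_actor_options={"num_gpus": 1})
class VLLMDeployment:
    def __init__(self, model: str = "facebook/opt-125m"):
        # vLLM handles batching and paged KV-cache memory within the replica.
        self.llm = LLM(model=model)
        self.params = SamplingParams(temperature=0.8, max_tokens=128)

    async def __call__(self, request) -> str:
        prompt = (await request.json())["prompt"]
        # generate() takes a list of prompts and returns one RequestOutput each.
        outputs = self.llm.generate([prompt], self.params)
        return outputs[0].outputs[0].text


app = VLLMDeployment.bind()
# serve.run(app)  # then POST {"prompt": "..."} to the Serve HTTP endpoint
```

With serve.run(app) active, Ray handles replica placement and HTTP routing, while each replica's vLLM engine handles request batching, which is the division of labor the listed repo's description points at.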