mistralai / vllm-release
A high-throughput and memory-efficient inference and serving engine for LLMs
☆52 · Updated last year
Alternatives and similar repositories for vllm-release
Users interested in vllm-release are comparing it to the libraries listed below.
- ☆22 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 9 months ago
- ☆38 · Updated last year
- ☆66 · Updated 11 months ago
- Merge Transformers language models by use of gradient parameters. ☆208 · Updated 9 months ago
- ☆73 · Updated last year
- LLM finetuning ☆42 · Updated last year
- An all-new language model that processes ultra-long sequences of 100,000+ tokens, ultra-fast ☆148 · Updated 8 months ago
- ☆30 · Updated 10 months ago
- ☆113 · Updated 4 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 11 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆52 · Updated 3 months ago
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- ☆48 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆104 · Updated 5 months ago
- The implementation of "Leeroo Orchestrator: Elevating LLMs Performance Through Model Integration" ☆55 · Updated last year
- Deploy your GGML models to HuggingFace Spaces with Docker and Gradio ☆36 · Updated last year
- ☆199 · Updated last year
- ☆84 · Updated last year
- Pre-training code for CrystalCoder 7B LLM ☆54 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆90 · Updated 3 months ago
- Data preparation code for Amber 7B LLM ☆89 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆100 · Updated last year
- Zero-trust AI APIs for easy and private consumption of open-source LLMs ☆40 · Updated 9 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆86 · Updated this week
- Our own implementation of "Layer Selective Rank Reduction" ☆238 · Updated 11 months ago
- Ongoing research training transformer models at scale ☆37 · Updated last year
- Function Calling Benchmark & Testing ☆87 · Updated 10 months ago
- High-level library for batched embeddings generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆66 · Updated 6 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆168 · Updated last year