mistralai / vllm-release
A high-throughput and memory-efficient inference and serving engine for LLMs
☆52 · Updated last year
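Since vllm-release tracks the vLLM engine, here is a minimal usage sketch, assuming the standard vLLM Python API (`LLM`, `SamplingParams`); the model identifier is a placeholder, not one pinned by this release:

```python
# Minimal sketch assuming the standard vLLM Python API.
# The model identifier below is a hypothetical choice, not confirmed by this release.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # placeholder model
params = SamplingParams(temperature=0.7, max_tokens=128)

# Batch generation: vLLM schedules the prompts for high-throughput inference.
outputs = llm.generate(["Summarize paged attention in one sentence."], params)
for output in outputs:
    print(output.outputs[0].text)
```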
Alternatives and similar repositories for vllm-release:
Users interested in vllm-release are comparing it to the libraries listed below.
- ☆66 · Updated 10 months ago
- Chat Markup Language conversation library ☆55 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆103 · Updated 4 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 8 months ago
- ☆112 · Updated 3 months ago
- ☆48 · Updated last year
- Data preparation code for Amber 7B LLM ☆88 · Updated 11 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 10 months ago
- This is our own implementation of "Layer Selective Rank Reduction" ☆234 · Updated 10 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆95 · Updated last month
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated 7 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first approach ☆167 · Updated last year
- An all-new language model that processes ultra-long sequences of 100,000+ tokens, ultra-fast ☆147 · Updated 7 months ago
- Inference code for mixtral-8x7b-32kseqlen ☆99 · Updated last year
- Ongoing research training transformer models at scale ☆36 · Updated last year
- The Next Generation Multi-Modality Superintelligence ☆71 · Updated 7 months ago
- Public reports detailing responses to sets of prompts by Large Language Models ☆30 · Updated 3 months ago
- The implementation of "Leeroo Orchestrator: Elevating LLMs Performance Through Model Integration" ☆56 · Updated 11 months ago
- Pre-training code for CrystalCoder 7B LLM ☆54 · Updated 11 months ago
- GRDN.AI app for garden optimization ☆70 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆90 · Updated 2 months ago
- ☆54 · Updated last year
- ☆73 · Updated last year
- ☆60 · Updated last year
- ☆38 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆86 · Updated 3 weeks ago
- Machine Learning Serving focused on GenAI with simplicity as the top priority ☆58 · Updated last week
- 1.58-bit LLaMa model ☆81 · Updated last year