chu-tianxiang / vllm-gptq
A high-throughput and memory-efficient inference and serving engine for LLMs
☆130 · Updated 6 months ago
Alternatives and similar repositories for vllm-gptq:
Users interested in vllm-gptq are comparing it to the libraries listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆127 · Updated last month
- The official code for "Aurora: Activating Chinese Chat Capability for Mixtral-8x7B Sparse Mixture-of-Experts through Instruction-Tuning" ☆259 · Updated 8 months ago
- Imitate OpenAI with local models ☆85 · Updated 4 months ago
- A text generation method that returns a generator, streaming out each token in real time during inference, based on Huggingface/… ☆96 · Updated 10 months ago
- Official repository for LongChat and LongEval ☆518 · Updated 7 months ago
- Aims to provide an intuitive, concrete, and standardized evaluation of current mainstream LLMs ☆95 · Updated last year
- Open-source text embedding models with an OpenAI-compatible API ☆139 · Updated 6 months ago
- A self-alignment method and benchmark for role-play. Resources for "Large Language Models are Superpositions of All Characters… ☆178 · Updated 7 months ago
- Mixture-of-Experts (MoE) language model ☆184 · Updated 4 months ago
- Lightweight local website for displaying the performance of different chat models ☆85 · Updated last year
- ☆206 · Updated 8 months ago
- Train LLaMA on a single A100 80G node using 🤗 transformers and 🚀 DeepSpeed pipeline parallelism ☆212 · Updated last year
- SUS-Chat: Instruction tuning done right ☆48 · Updated last year
- LongQLoRA: Extend the context length of LLMs efficiently ☆163 · Updated last year
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆125 · Updated 5 months ago
- ☆122 · Updated last year
- ☆173 · Updated last year
- Deep learning ☆150 · Updated 6 months ago
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆71 · Updated last year
- Generate multi-round conversational role-play data based on self-instruct and evol-instruct ☆119 · Updated last week
- Demonstrates the remarkable effect of vLLM on Chinese large language models ☆31 · Updated last year
- ☆305 · Updated 6 months ago
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights to work like Stanford Alpaca ☆50 · Updated last year
- ☆105 · Updated last year
- An instruction-tuning tool for large language models (supports FlashAttention) ☆168 · Updated last year
- ☆159 · Updated last year
- SuperCLUE-Agent: a benchmark for evaluating core agent capabilities on native Chinese tasks ☆79 · Updated last year
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs) ☆236 · Updated 10 months ago
- ☆161 · Updated last year
- ☆267 · Updated last year