chu-tianxiang / vllm-gptq
A high-throughput and memory-efficient inference and serving engine for LLMs
☆131 · Updated 10 months ago
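The tagline above matches upstream vLLM; this fork adds GPTQ-quantized model support. Below is a minimal sketch of how such a model might be served, assuming the fork keeps upstream vLLM's `LLM`/`SamplingParams` Python API and a `quantization="gptq"` option; the model name is only an illustrative placeholder, not something confirmed by this repository.

```python
# Minimal sketch: serve a GPTQ-quantized model via a vLLM-style Python API.
# Assumptions: the fork mirrors upstream vLLM's interface; the checkpoint name
# and the `quantization` argument are illustrative, not taken from this repo.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Llama-2-7B-Chat-GPTQ",  # assumed example GPTQ checkpoint
    quantization="gptq",                    # assumed flag (exists in upstream vLLM)
)

sampling_params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain GPTQ quantization in one sentence."], sampling_params)
for out in outputs:
    print(out.outputs[0].text)
```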
Alternatives and similar repositories for vllm-gptq:
Users interested in vllm-gptq are comparing it to the libraries listed below
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆135 · Updated 5 months ago
- Imitate OpenAI with Local Models ☆88 · Updated 8 months ago
- A text generation method that returns a generator, streaming out each token in real time during inference, based on Huggingface/… ☆95 · Updated last year
- deep learning ☆149 · Updated last month
- Mixture-of-Experts (MoE) Language Model ☆186 · Updated 7 months ago
- The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆261 · Updated 11 months ago
- ☆105 · Updated last year
- Aims to provide an intuitive, concrete, and standardized evaluation of current mainstream LLMs ☆94 · Updated last year
- SUS-Chat: Instruction tuning done right ☆48 · Updated last year
- Implement OpenAI APIs and plugin-enabled ChatGPT with open source LLMs and other models ☆120 · Updated 10 months ago
- zero: LLM training and parameter tuning from scratch ☆31 · Updated last year
- Fine-tune Chinese large language models with QLoRA, including ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆86 · Updated last year
- Generate multi-round conversation roleplay data based on self-instruct and evol-instruct ☆126 · Updated 3 months ago
- Demonstrates the remarkable performance of vLLM on Chinese large language models ☆31 · Updated last year
- Open Source Text Embedding Models with OpenAI Compatible API ☆153 · Updated 9 months ago
- An instruction-tuning toolkit for large language models (FlashAttention supported) ☆172 · Updated last year
- LLaMA inference for TencentPretrain ☆98 · Updated last year
- Open efforts to implement ChatGPT-like models and beyond ☆107 · Updated 9 months ago
- Another ChatGLM2 implementation for GPTQ quantization ☆54 · Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆164 · Updated last year
- ☆124 · Updated last year
- A lightweight local website for displaying the performance of different chat models ☆86 · Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆140 · Updated last year
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆219 · Updated last year
- ChatGLM2-6B fine-tuning with SFT/LoRA instruction tuning ☆107 · Updated last year
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights so the model works like Stanford Alpaca ☆51 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated last year
- Official repository for LongChat and LongEval ☆519 · Updated 11 months ago
- A Chinese-native benchmark for evaluating retrieval-augmented generation ☆115 · Updated last year
- ☆82 · Updated 11 months ago