Tlntin / ChatGLM2-6B-TensorRT
☆90, updated 2 years ago
Alternatives and similar repositories for ChatGLM2-6B-TensorRT
Users interested in ChatGLM2-6B-TensorRT are comparing it to the libraries listed below.
- Run ChatGLM2-6B on the BM1684X (☆50, updated last year)
- Export LLaMA to ONNX (☆136, updated 10 months ago)
- ☢️ TensorRT Hackathon 2023 finals: TensorRT-LLM-based inference acceleration for the Llama model (☆50, updated 2 years ago)
- Another ChatGLM2 implementation for GPTQ quantization (☆53, updated 2 years ago)
- Compare multiple optimization methods on Triton to improve model-serving performance (☆52, updated last year)
- ☆27, updated last year
- Train a Chinese vocabulary with SentencePiece BPE and use it in transformers (☆120, updated 2 years ago)
- Pure C++ cross-platform LLM acceleration library, callable from Python; supports Baichuan, GLM, LLaMA, and MOSS base models; runs ChatGLM-6B-class models smoothly on mobile and reaches 10000+ tokens/s on a single GPU (☆45, updated 2 years ago)
- LLaMA inference for TencentPretrain (☆99, updated 2 years ago)
- llm-export can export LLM models to ONNX (☆324, updated 3 weeks ago)
- High-performance text tokenizer library (☆31, updated last year)
- A more efficient GLM implementation! (☆54, updated 2 years ago)
- ☆51, updated 2 years ago
- Simplify ONNX models larger than 2 GB (☆65, updated 11 months ago)
- Transformer-related optimizations, including BERT and GPT (☆17, updated 2 years ago)
- ☆84, updated 2 years ago
- ☆72, updated 2 years ago
- Qwen model fine-tuning (☆105, updated 8 months ago)
- Tianchi NVIDIA TensorRT Hackathon 2023 generative-AI model optimization competition: third-place solution in the preliminary round (☆50, updated 2 years ago)
- NVIDIA TensorRT Hackathon 2023 finals topic: building and optimizing Tongyi Qianwen Qwen-7B with TensorRT-LLM (☆43, updated 2 years ago)
- Baichuan and Baichuan2 fine-tuning, plus Alpaca fine-tuning (☆33, updated 8 months ago)
- Kanchil (the mouse-deer) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences (☆113, updated 2 years ago)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆138, updated 11 months ago)
- Focused on Chinese-domain large language models, applied to a specific industry or field to become an industry-level, company-level, or sector-level domain model (☆126, updated 8 months ago)
- ChatGLM2-6B fine-tuning: SFT/LoRA instruction fine-tuning (☆110, updated 2 years ago)
- 📔 Usage instructions and core-code annotations for Chinese-LLaMA-Alpaca (☆50, updated 2 years ago)
- ☆26, updated 2 years ago
- ChatGLM multi-GPU with DeepSpeed and … (☆412, updated last year)
- ☆51, updated last year
- ☆313, updated 2 years ago