ArtificialZeng / baichuan-speedup
A pure C++ LLM acceleration library for all platforms, callable from Python; supports Baichuan, GLM, LLaMA, and MOSS base models; runs ChatGLM-6B-class models smoothly on mobile, and reaches 10,000+ tokens/s on a single GPU.
☆45 Updated 2 years ago
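Since the library is a C++ backend exposed through Python bindings, driving it from Python presumably looks something like the sketch below. This is an illustration only: the `baichuan_speedup` module name, the `Model` class, and its methods are hypothetical stand-ins, not the repository's documented API.

```python
# Hypothetical usage sketch of a C++ LLM runtime with Python bindings.
# Module, class, and method names are illustrative, not the real API.
from baichuan_speedup import Model  # hypothetical binding module

# Load a converted/quantized model file (the file format is an assumption).
model = Model("chatglm-6b-int4.bin")

# Stream generated tokens from the C++ inference core.
for token in model.generate("你好,请介绍一下你自己。", max_tokens=128):
    print(token, end="", flush=True)
```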
Alternatives and similar repositories for baichuan-speedup
Users interested in baichuan-speedup are comparing it to the libraries listed below.
- LLaMA inference for TencentPretrain ☆99 Updated 2 years ago
- (1) A rotary position embedding encoder with elastic-interval normalization plus PEFT LoRA quantized training, improving support for contexts of tens of thousands of tokens. (2) Evidence-theory-based interpretable learning to strengthen complex logical reasoning. (3) Compatible with the Alpaca data format. ☆45 Updated 2 years ago
- Qwen models fine-tuning ☆105 Updated 8 months ago
- GTS Engine: a powerful NLU training system. GTS-Engine is an out-of-the-box, high-performance natural language understanding engine focused on few-shot tasks, able to automatically produce NLP models from only a small number of samples. ☆93 Updated 2 years ago
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆110 Updated 2 years ago
- Another ChatGLM2 implementation for GPTQ quantization ☆53 Updated 2 years ago
- ☆90 Updated 2 years ago
- General layout analysis | Chinese document parsing | Document Layout Analysis | layout parser ☆47 Updated last year
- Focused on Chinese-domain large language models, deployed into a specific industry or field to become an industry-scale, company-level, or sector-level domain model. ☆126 Updated 8 months ago
- A more efficient GLM implementation! ☆54 Updated 2 years ago
- A line-by-line annotated walkthrough of the Baichuan2 code, suitable for beginners ☆214 Updated 2 years ago
- Large language model training in three stages, plus deployment ☆49 Updated 2 years ago
- An open-source multimodal large language model based on baichuan-7b ☆72 Updated last year
- chatglm-6b fine-tuning/LoRA/PPO/inference; training samples are auto-generated integer/decimal addition, subtraction, multiplication, and division; runs on GPU or CPU ☆165 Updated 2 years ago
- QLoRA fine-tuning of Chinese large language models, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE; see the QLoRA sketch after this list ☆90 Updated 2 years ago
- An instruction-tuning toolkit for large language models (FlashAttention supported) ☆178 Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆138 Updated 11 months ago
- Baichuan and Baichuan2 fine-tuning, plus Alpaca fine-tuning ☆33 Updated 8 months ago
- Luotuo QA (骆驼QA), a Chinese large language model for reading comprehension. ☆75 Updated 2 years ago
- A survey of large language model training and serving ☆36 Updated 2 years ago
- Kanchil (鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B parameters) can also be aligned with human preferences. ☆113 Updated 2 years ago
- Training a Chinese mini large language model from scratch that can hold basic conversations, with the model size chosen to fit the hardware at hand ☆63 Updated last year
- Aims to provide an intuitive, concrete, and standardized evaluation of today's mainstream LLMs ☆95 Updated 2 years ago
- ChatGLM-6B fine-tuning. ☆136 Updated 2 years ago
- Implements a cross-model scheme combining multi-LoRA weight ensembling/switching with zero-finetune enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The scheme is simple and efficient, aiming to let such language models be deployed widely at low energy cost, and … ☆116 Updated 2 years ago
- A code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer texts without any fine-tuning ☆49 Updated 2 years ago
- Demonstrates the striking effect of vLLM on Chinese large language models; see the vLLM sketch after this list ☆31 Updated 2 years ago
- Code for the piccolo embedding model from SenseTime ☆143 Updated last year
- Implements Baichuan-Chat fine-tuning with various methods such as LoRA and QLoRA, runnable with one click. ☆71 Updated 2 years ago
- Shared data: prompt data and pretraining data ☆36 Updated last year
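Several entries above revolve around QLoRA fine-tuning of Chinese models (ChatGLM, Chinese-LLaMA-Alpaca, BELLE). As a point of reference, a minimal QLoRA setup with Hugging Face `transformers` and `peft` looks roughly like this; the base model name and LoRA hyperparameters are assumptions, not values taken from any of the repositories listed:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4 precision (QLoRA's quantization scheme).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/chatglm-6b",  # assumed base model, for illustration
    quantization_config=bnb_config,
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; only these small matrices are trained.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # ChatGLM's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```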
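The two vLLM-related entries (the serving engine itself and the Chinese-LLM demo) boil down to batched offline generation along these lines; the model name is only an example:

```python
from vllm import LLM, SamplingParams

# Any Hugging Face-format chat model can be used; this name is illustrative.
llm = LLM(model="baichuan-inc/Baichuan2-7B-Chat", trust_remote_code=True)
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)

# vLLM schedules prompts with continuous batching and PagedAttention,
# which is where its throughput and memory efficiency come from.
outputs = llm.generate(["请用一句话介绍大语言模型。"], params)
print(outputs[0].outputs[0].text)
```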