ssbuild / t5_finetuning
clue chatyuan finetuning
☆17 Updated 3 weeks ago
Alternatives and similar repositories for t5_finetuning:
Users that are interested in t5_finetuning are comparing it to the libraries listed below
- ChatGLM-6B fine-tuning ☆135 Updated last year
- chatglm_rlhf_finetuning ☆28 Updated last year
- Baichuan implementation of Dynamic NTK-ALiBi: inference on longer texts without fine-tuning ☆47 Updated last year
- llama inference for tencentpretrain ☆98 Updated last year
- moss chat finetuning ☆50 Updated 11 months ago
- chatglm-6b fine-tuning/LoRA/PPO/inference; samples are auto-generated integer/decimal add/subtract/multiply/divide problems; runs on GPU or CPU ☆164 Updated last year
- Luotuo QA (骆驼QA), a Chinese large language model for reading comprehension ☆75 Updated last year
- ChatGLM2-6B fine-tuning, SFT/LoRA, instruction fine-tuning ☆106 Updated last year
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆85 Updated last year
- ☆102 Updated 4 years ago
- Kanchil (鼷鹿, the chevrotain) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences ☆113 Updated 2 years ago
- Implements a cross-model scheme combining multi-LoRA weight ensemble switching with Zero-Finetune enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the Chatglm6B base model and LLM-X is a LLaMA enhancement model. The scheme is simple and efficient, aiming to let such language models be widely deployed at low energy cost, and… ☆116 Updated last year
- baichuan LLM supervised fine-tuning with LoRA ☆63 Updated last year
- Tool for time expression extraction, parsing, and normalization ☆51 Updated 2 years ago
- Demonstrates the remarkable effect of vLLM on Chinese large language models ☆31 Updated last year
- NLU & NLG (zero-shot) based on the mengzi-t5-base-mt pretrained model ☆75 Updated 2 years ago
- Parameter-efficient fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2 ☆55 Updated last year
- A pretraining-based sentence embedding generation tool ☆136 Updated 2 years ago
- Hands-on information extraction with llama ☆98 Updated last year
- A bot that converts text to vectors, built on sentence-transformers ☆45 Updated 2 years ago
- LoRA fine-tuning of BLOOMZ, following BELLE ☆25 Updated 2 years ago
- ☆23 Updated last year
- Chinese UniLM pretrained model ☆83 Updated 4 years ago
- use chatGLM to perform text embedding ☆45 Updated last year
- Making NLP something everyone can do; open source isn't easy, remember to star ☆101 Updated last year
- deep training task ☆29 Updated last year
- Survey of large language model training and serving ☆37 Updated last year
- Zero-shot learning evaluation benchmark, Chinese edition ☆56 Updated 3 years ago
- CamelBell (驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open-sourced Chinese-… ☆173 Updated last year
- Supports ChatGLM2 LoRA fine-tuning ☆39 Updated last year