A-baoYang / alpaca-7b-chinese
Finetune LLaMA-7B with Chinese instruction datasets
☆136 · Updated 2 years ago
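For context, the sketch below shows what LoRA-based instruction fine-tuning of LLaMA-7B on a Chinese Alpaca-style dataset typically looks like with the Hugging Face transformers and peft libraries. It is a minimal illustration, not this repository's actual training script: the checkpoint path, dataset file, prompt template, and hyperparameters are placeholder assumptions.

```python
# Minimal LoRA instruction fine-tuning sketch (placeholder paths and settings,
# not the alpaca-7b-chinese training script).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base_model = "path/to/llama-7b-hf"  # placeholder: local or Hub path to a LLaMA-7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto")

# Wrap the frozen base model with small trainable low-rank adapters.
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# Assumes an Alpaca-format JSON file of {"instruction", "input", "output"} records.
data = load_dataset("json", data_files="chinese_instructions.json")["train"]

def to_features(example):
    # Assumed Alpaca-style prompt template.
    text = (f"### Instruction:\n{example['instruction']}\n"
            f"### Input:\n{example.get('input', '')}\n"
            f"### Response:\n{example['output']}")
    return tokenizer(text, truncation=True, max_length=512)

data = data.map(to_features, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(
        output_dir="llama-7b-chinese-lora",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama-7b-chinese-lora")  # writes only the adapter weights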
Alternatives and similar repositories for alpaca-7b-chinese
Users interested in alpaca-7b-chinese are comparing it to the repositories listed below.
- Collect and maintain high-quality instruction fine-tuning datasets across different domains and languages. ☆19 · Updated 2 years ago
- A Traditional-Chinese instruction-following model with datasets based on Alpaca. ☆137 · Updated 2 years ago
- Methods and examples for fine-tuning LLMs. ☆75 · Updated 11 months ago
- ☆124 · Updated last year
- Fine-tune Chinese large language models with QLoRA; covers ChatGLM, Chinese-LLaMA-Alpaca, and BELLE. ☆87 · Updated 2 years ago
- Deep learning. ☆148 · Updated last month
- Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and more. ☆97 · Updated last year
- Chinese instruction fine-tuning dataset for Alpaca. ☆391 · Updated 2 years ago
- A full pipeline to fine-tune the ChatGLM LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback). ☆136 · Updated 2 years ago
- 🤖 A collection of examples for running many large models on Colab | LLMs is all you need. ☆129 · Updated last year
- Aims to provide an intuitive, concrete, and standardized evaluation of current mainstream LLMs. ☆94 · Updated 2 years ago
- ☆307 · Updated 2 years ago
- ☆162 · Updated 2 years ago
- MOSS chat fine-tuning. ☆50 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆66 · Updated 2 years ago
- An instruction-tuning toolkit for large language models (supports FlashAttention). ☆173 · Updated last year
- A document search tool built on sentence-transformers and ChatGLM. ☆156 · Updated 2 years ago
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning. ☆108 · Updated last year
- Fine-tune LLaMA-2 with a Traditional Chinese dataset. ☆38 · Updated last year
- CamelBell (驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open-sourced Chinese-… ☆174 · Updated last year
- ☆172 · Updated 2 years ago
- Luotuo QA (骆驼QA), a Chinese large language model for reading comprehension. ☆74 · Updated 2 years ago
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer. ☆126 · Updated 2 years ago
- Implements a cross-model scheme combining multi-LoRA weight ensemble switching with zero-finetune enhancement (LLM-Base + LLM-X + Alpaca); in the initial stage, LLM-Base is the ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The scheme is simple and efficient, aiming to let such language models be widely deployed at low energy cost, and … ☆115 · Updated last year
- Official GitHub repo for TMMLU+, a large-scale Traditional Chinese massive multitask language understanding benchmark. ☆45 · Updated 11 months ago
- Open efforts to implement ChatGPT-like models and beyond. ☆107 · Updated 11 months ago
- ChatGLM-6B fine-tuning / LoRA / PPO / inference; samples are auto-generated integer and decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU. ☆164 · Updated last year
- MEASURING MASSIVE MULTITASK CHINESE UNDERSTANDING. ☆87 · Updated last year
- A full pipeline to fine-tune the Vicuna LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback). ☆217 · Updated last year
- Simple implementation using LoRA from the PEFT library to fine-tune ChatGLM-6B. ☆83 · Updated 2 years ago