A-baoYang / alpaca-7b-chinese
Finetune LLaMA-7B with Chinese instruction datasets
☆137 · Updated last year
Alternatives and similar repositories for alpaca-7b-chinese:
Users interested in alpaca-7b-chinese are comparing it to the libraries listed below
- Collect and maintain high-quality instruction fine-tuning datasets across different domains and languages.☆19 · Updated last year
- A Traditional-Chinese instruction-following model with datasets based on Alpaca.☆136 · Updated last year
- Fine-tune Chinese large language models with QLoRA, including ChatGLM, Chinese-LLaMA-Alpaca, and BELLE☆85 · Updated last year
- ☆121 · Updated last year
- deep learning☆150 · Updated 7 months ago
- Methods and examples for fine-tuning LLMs☆70 · Updated 7 months ago
- Open efforts to implement ChatGPT-like models and beyond.☆107 · Updated 6 months ago
- Simple implementation of using LoRA from the peft library to fine-tune chatglm-6b☆85 · Updated last year
- MOSS chat fine-tuning☆50 · Updated 9 months ago
- Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and more☆96 · Updated 9 months ago
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning☆105 · Updated last year
- Document search tool built on sentence-transformers and ChatGLM☆154 · Updated last year
- A full pipeline to fine-tune the ChatGLM LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback)…☆134 · Updated last year
- Collection of ChatGPT alternatives & LLM tuning methods☆12 · Updated last year
- A Multi-Turn Dialogue Corpus based on Alpaca Instructions☆166 · Updated last year
- Fine-tune LLaMA-2 with a Traditional Chinese dataset☆38 · Updated last year
- Official GitHub repo for TMMLU+, a large-scale Traditional Chinese massive multitask language understanding benchmark☆45 · Updated 6 months ago
- ☆173 · Updated last year
- Implements a cross-model technique combining multi-LoRA weight integration and switching with Zero-Finetune enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The approach is simple and efficient, aiming to let such language models be widely deployed at low energy cost, and …☆117 · Updated last year
- Alpaca-style Chinese instruction fine-tuning dataset☆392 · Updated last year
- ChatGLM-6B fine-tuning / LoRA / PPO / inference; samples are auto-generated integer and decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU☆164 · Updated last year
- Instruction-tuning toolkit for large language models (supports FlashAttention)☆169 · Updated last year
- Complete training code for an open-source, high-performance Llama model, covering the full process from pre-training to RLHF.☆64 · Updated last year
- ☆159 · Updated last year
- ☆304 · Updated last year
- Lightweight local website for displaying the performance of different chat models.☆85 · Updated last year
- Fine-tuning LLaMA with RLHF (Reinforcement Learning with Human Feedback) based on DeepSpeed Chat☆112 · Updated last year
- BiLLa: A Bilingual LLaMA with Enhanced Reasoning Ability☆421 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs☆130 · Updated 7 months ago
- LLaMA inference for TencentPretrain☆97 · Updated last year