git-cloner / llama2-lora-fine-tuning
Llama 2 fine-tuning with DeepSpeed and LoRA
☆174Updated last year
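For context on what this repository covers, here is a minimal sketch of LoRA fine-tuning of Llama 2 with DeepSpeed via the Hugging Face stack. It is an illustration under assumptions, not the repository's actual code: the checkpoint name, `ds_config.json` path, hyperparameters, and toy dataset below are all placeholders.

```python
# Minimal LoRA + DeepSpeed fine-tuning sketch (assumed stack: transformers + peft + deepspeed).
# Everything here (checkpoint, ds_config.json, hyperparameters, toy texts) is a placeholder,
# not code taken from git-cloner/llama2-lora-fine-tuning.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Attach low-rank adapters to the attention projections; only these small matrices are trained.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Toy dataset: tokenized texts; the collator pads each batch and derives labels from input_ids.
texts = ["Hello, world!", "LoRA keeps the base weights frozen."]
train_dataset = [tokenizer(t, truncation=True, max_length=64) for t in texts]
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

# DeepSpeed (ZeRO sharding, offload, etc.) is configured through an external JSON file.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=4,
    num_train_epochs=1,
    learning_rate=2e-4,
    fp16=True,
    deepspeed="ds_config.json",  # assumed DeepSpeed config path
)

Trainer(model=model, args=args, train_dataset=train_dataset,
        data_collator=collator).train()
```

With DeepSpeed enabled, such a script is normally launched through the DeepSpeed launcher (e.g. `deepspeed train_lora.py`) so that each GPU process picks up the ZeRO configuration.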
Alternatives and similar repositories for llama2-lora-fine-tuning
Users interested in llama2-lora-fine-tuning are comparing it to the repositories listed below
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat☆115Updated last year
- ☆280Updated last year
- llama fine-tuning with lora☆139Updated last year
- Large language model fine-tuning for bloom, opt, gpt, gpt2, llama, llama-2, cpmant, and more☆97Updated last year
- Adds an RLHF implementation to ChatGLM-6B, with line-by-line explanations of parts of the core code; the example task is short news-headline generation, plus an RLHF implementation for recommendations with a specified context☆84Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo…☆368Updated 8 months ago
- How to train an LLM tokenizer☆149Updated last year
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer☆125Updated 2 years ago
- Instruction-tuning toolkit for large language models (supports FlashAttention)☆173Updated last year
- A line-by-line annotated walkthrough of the Baichuan2 code, aimed at beginners☆214Updated last year
- Firefly Chinese LLaMA-2 large model, supporting continued pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models☆410Updated last year
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models☆206Updated last year
- Implementation of Chinese ChatGPT☆287Updated last year
- Chinese large-model fine-tuning (LLM-SFT), math instruction dataset MWP-Instruct, supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B), supported methods (LoRA, QLoRA, DeepSpeed, UI, TensorboardX), supports (fine…☆201Updated last year
- ☆162Updated 2 years ago
- A tool for manually annotating and ranking response data in the RLHF stage.☆251Updated last year
- A HuggingFace-based tool for training and evaluating large language models. Supports web UI and terminal inference for each model, low-parameter and full-parameter training (pre-training, SFT, RM, PPO, DPO), as well as model merging and quantization.☆217Updated last year
- Paper List for In-context Learning 🌷☆183Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning☆261Updated last year
- Naive Bayes-based Context Extension☆325Updated 5 months ago
- Focused on Chinese domain-specific large language models: adapting them to a particular industry or field to become an industry-level or company-level domain model.☆118Updated 2 months ago
- Full-parameter, LoRA, and QLoRA fine-tuning of llama3.☆198Updated 7 months ago
- ☆97Updated last year
- Fine-tuning Chinese-LLaMA-Alpaca with LoRA.☆34Updated 2 years ago
- Training an LLM from scratch on a single 24 GB GPU☆54Updated last week
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀. Dedicated to in-depth understanding, discussion, and implementation of the techniques, principles, and applications of large models.☆325Updated 10 months ago
- Using RLHF directly on ChatGLM to raise or lower the probability of target outputs | Modify ChatGLM output with only RLHF☆194Updated 2 years ago
- ☆63Updated 2 years ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models"☆128Updated 11 months ago
- train llama on a single A100 80G node using 🤗 transformers and 🚀 Deepspeed Pipeline Parallelism☆220Updated last year