git-cloner / llama2-lora-fine-tuning
Llama 2 fine-tuning with DeepSpeed and LoRA
☆171 · Updated last year
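Since the repository's topic is LoRA fine-tuning of Llama 2 with DeepSpeed, here is a minimal sketch of what such a setup typically looks like when built on Hugging Face Transformers and PEFT. The checkpoint name, the `train.txt` data file, and the `ds_config.json` DeepSpeed config path are placeholder assumptions for illustration, not taken from this repository's scripts.

```python
# Minimal sketch: LoRA fine-tuning with DeepSpeed via Hugging Face Transformers + PEFT.
# Model name, data file, and ds_config.json are placeholders, not this repo's actual config.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Attach LoRA adapters to the attention projections; only these small matrices are trained.
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Toy dataset: tokenize a plain-text file, truncating to a fixed length.
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)
dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="out-llama2-lora",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    learning_rate=2e-4,
    fp16=True,
    logging_steps=10,
    deepspeed="ds_config.json",  # ZeRO config; path is an assumption
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("out-llama2-lora")  # saves only the LoRA adapter weights
```

A script like this would normally be launched with `deepspeed train.py` (or `torchrun`) so that the ZeRO settings in `ds_config.json` take effect across GPUs.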
Alternatives and similar repositories for llama2-lora-fine-tuning:
Users that are interested in llama2-lora-fine-tuning are comparing it to the libraries listed below
- How to train an LLM tokenizer ☆137 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other models… ☆331 · Updated 4 months ago
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer ☆121 · Updated last year
- llama fine-tuning with lora ☆140 · Updated 8 months ago
- ☆128 · Updated 9 months ago
- Welcome to the "LLM-travel" repository! Explore the inner workings of large language models (LLMs) 🚀. Dedicated to understanding, discussing, and implementing the techniques, principles, and applications of large models. ☆284 · Updated 5 months ago
- An instruction-tuning toolkit for large language models (supports FlashAttention) ☆168 · Updated last year
- A training and evaluation toolkit for large language models built on HuggingFace. Supports web UI and terminal inference for each model, parameter-efficient and full-parameter training (pre-training, SFT, RM, PPO, DPO), plus model merging and quantization. ☆205 · Updated last year
- Large language model fine-tuning for bloom, opt, gpt, gpt2, llama, llama-2, cpmant, and so on ☆96 · Updated 8 months ago
- Finetuning LLaMA with RLHF (Reinforcement Learning with Human Feedback) based on DeepSpeed Chat ☆109 · Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆163 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆236 · Updated last year
- Train a Chinese vocabulary with BPE from sentencepiece and use it in transformers. ☆113 · Updated last year
- ☆159 · Updated last year
- ☆161 · Updated last year
- Fine-tuning of llama, chatglm, and other models ☆85 · Updated 6 months ago
- Code for "Lion: Adversarial Distillation of Proprietary Large Language Models (EMNLP 2023)" ☆204 · Updated 11 months ago
- An easy-to-follow LLaMA fine-tuning guide. ☆385 · Updated last year
- Train an LLM from scratch on a single 24 GB GPU ☆50 · Updated 2 months ago
- Firefly Chinese LLaMA-2 large model, supporting continued pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆403 · Updated last year
- ☆93 · Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆100 · Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆73 · Updated 2 months ago
- ☆276 · Updated 8 months ago
- ☆62 · Updated last year
- Chinese large language model fine-tuning (LLM-SFT), math instruction dataset MWP-Instruct, supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B), supports (LoRA, QLoRA, DeepSpeed, UI, TensorboardX), supports (… ☆183 · Updated 8 months ago
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆354 · Updated 5 months ago
- A full pipeline to fine-tune ChatGLM LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback)… ☆133 · Updated last year
- Naive Bayes-based Context Extension ☆320 · Updated last month
- ☆64 · Updated last year