ssbuild / llm_finetuning
Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and more
☆97Updated last year
Alternatives and similar repositories for llm_finetuning
Users that are interested in llm_finetuning are comparing it to the libraries listed below
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer☆125Updated 2 years ago
- make LLM easier to use☆59Updated last year
- MEASURING MASSIVE MULTITASK CHINESE UNDERSTANDING☆87Updated last year
- deep learning☆148Updated 3 weeks ago
- How to train an LLM tokenizer☆148Updated last year
- LLaMA-2 fine-tuning with DeepSpeed and LoRA☆174Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning☆108Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark☆101Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.☆65Updated 2 years ago
- Chinese large language model evaluation, round one☆109Updated last year
- A full pipeline to finetune ChatGLM LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Huma…☆135Updated 2 years ago
- A Multi-Turn Dialogue Corpus based on Alpaca Instructions☆171Updated 2 years ago
- Instruction-tuning toolkit for large language models (supports FlashAttention)☆173Updated last year
- MOSS chat fine-tuning☆50Updated last year
- Finetuning LLaMA with RLHF (Reinforcement Learning with Human Feedback) based on DeepSpeed Chat☆115Updated last year
- ☆97Updated last year
- ☆162Updated 2 years ago
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE☆86Updated last year
- ☆69Updated last year
- Chinese instruction datasets for fine-tuning LLMs☆26Updated 2 years ago
- Parameter-efficient fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2☆55Updated 2 years ago
- Focused on Chinese domain-specific large language models: adapting an LLM to a particular industry or field to build an industry-level, company-level, or sector-level domain model.☆118Updated 2 months ago
- Fine-tuning of LLaMA, ChatGLM, and other models☆89Updated 10 months ago
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models☆206Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models"☆128Updated 11 months ago
- Fine-tuning ChatGLM2-6B, including full-parameter, parameter-efficient, and quantization-aware training; supports instruction fine-tuning and multi-turn dialogue fine-tuning.☆25Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation☆79Updated 6 months ago
- Implementation of Dynamic NTK-ALiBi for Baichuan: longer-context inference without fine-tuning☆47Updated last year
- ☆172Updated 2 years ago
- ☆308Updated 2 years ago
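Many of the repositories above center on LoRA-style parameter-efficient fine-tuning. As a rough illustration of the idea they share, here is a minimal NumPy sketch of a LoRA update: the pretrained weight `W` stays frozen while only two small low-rank factors `A` and `B` are trained. All names and dimensions here are illustrative, not taken from any listed repository.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Forward pass through a frozen weight W plus a low-rank update B @ A.

    W: (d_out, d_in) frozen pretrained weight
    A: (r, d_in), B: (d_out, r) -- the only trainable parameters
    """
    scale = alpha / r  # standard LoRA scaling factor
    return x @ (W + scale * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r = 1024, 1024, 8
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # small random init for A
B = np.zeros((d_out, r))                # B starts at zero, so the update
                                        # is a no-op before training

x = rng.normal(size=(4, d_in))
y = lora_forward(x, W, A, B)

full_params = W.size                    # 1024 * 1024 parameters
lora_params = A.size + B.size           # 2 * 8 * 1024 parameters
print(y.shape, lora_params / full_params)  # trainable fraction ~1.6%
```

With rank 8 on a 1024x1024 layer, the trainable parameters are about 1.6% of the full weight, which is why the repos above can fine-tune multi-billion-parameter models on consumer hardware; libraries such as `peft` implement the same idea on top of PyTorch modules.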