taishan1994 / Llama3.1-Finetuning
Full-parameter, LoRA, and QLoRA fine-tuning of Llama 3.
☆146 · Updated last month
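The LoRA technique named above trains two small low-rank matrices alongside a frozen pretrained weight instead of updating the full matrix. A minimal numerical sketch of the idea (illustrative only, not this repo's code; dimensions and init choices are assumptions):

```python
import numpy as np

# LoRA sketch: instead of updating the full weight W (d_out x d_in),
# train two small matrices B (d_out x r) and A (r x d_in) with rank r << d.
d_out, d_in, r, alpha = 8, 8, 2, 4
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))               # trainable, zero init: no change at start

def lora_forward(x):
    # Effective weight is W + (alpha/r) * B @ A; only A and B are trained.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.normal(size=(1, d_in))
# With B initialized to zero, the adapted model matches the frozen base model.
assert np.allclose(lora_forward(x), x @ W.T)
```

The trainable parameter count is `r * (d_in + d_out)` instead of `d_in * d_out`, which is why LoRA (and its quantized variant QLoRA) makes fine-tuning large models cheap.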
Related projects
Alternatives and complementary repositories for Llama3.1-Finetuning
- Evaluation experiments on recall techniques and algorithm effectiveness for the Retrieve stage of RAG, built primarily on LlamaIndex. ☆171 · Updated 2 months ago
- Notes on reproducing LLM components from scratch. ☆134 · Updated last month
- ☆87 · Updated 4 months ago
- Reading notes on top-conference papers relevant to LLM algorithm engineers (multimodality, PEFT, few-shot QA, RAG, LMM interpretability, Agents, CoT). ☆259 · Updated 7 months ago
- A personal repository for experimenting with and reproducing the pre-training process of LLMs. ☆353 · Updated 6 months ago
- Large language model applications: RAG, NL2SQL, chatbots, pre-training, MoE mixture-of-experts models, fine-tuning, reinforcement learning, Tianchi data competitions. ☆49 · Updated 4 months ago
- Train an LLM from scratch on a single 24 GB GPU. ☆49 · Updated 2 weeks ago
- Welcome to the "LLM-travel" repository! Explore the inner workings of large language models (LLMs) 🚀, with in-depth explanations, discussions, and implementations of LLM-related techniques, principles, and applications. ☆266 · Updated 3 months ago
- An RLHF implementation added to ChatGLM-6B, with line-by-line explanations of some core code; the examples cover short news-headline generation and RLHF for context-conditioned recommendation. ☆78 · Updated last year
- LAiW: A Chinese Legal Large Language Models Benchmark ☆71 · Updated 4 months ago
- [ACL 2024] IEPile: A Large-Scale Information Extraction Corpus ☆168 · Updated this week
- A retrieval-augmented generation (RAG) example based on BM25 and BGE. ☆96 · Updated last week
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆305 · Updated 2 months ago
- A quick-start guide to RAG and private deployment. ☆128 · Updated 6 months ago
- llama2 finetuning with deepspeed and lora ☆166 · Updated last year
- A curated collection of open-source SFT datasets, continuously updated. ☆440 · Updated last year
- Walk through the ChatGPT technical pipeline from scratch. ☆144 · Updated 2 months ago
- How to train an LLM tokenizer. ☆129 · Updated last year
- Chinese LLM fine-tuning (LLM-SFT) with the math instruction dataset MWP-Instruct; supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B), supported methods (LoRA, QLoRA, DeepSpeed, UI, TensorboardX), also supports (… ☆166 · Updated 5 months ago
- Alpaca Chinese Dataset -- a Chinese instruction fine-tuning dataset (continuously updated by humans + GPT-4o). ☆184 · Updated last month
- ☆191 · Updated this week
- YAYI information extraction LLM: instruction fine-tuned on millions of manually constructed, high-quality information extraction samples, developed by the Zhongke Wenge (中科闻歌) algorithm team. (Repo for YAYI Unified Information Extraction Model) ☆269 · Updated 3 months ago
- personal chatgpt ☆315 · Updated last week
- A line-by-line walkthrough of the Baichuan2 code, suitable for beginners. ☆209 · Updated last year
- A personal project to train a Chinese LLM from scratch. ☆197 · Updated last week
- ☆213 · Updated 5 months ago
- A Chinese medical ChatGPT based on LLaMa, trained on a large-scale pre-training corpus and a multi-turn dialogue dataset. ☆311 · Updated 10 months ago
- Alibaba Tongyi Qianwen (Qwen-7B-Chat/Qwen-7B): fine-tuning / LoRA / inference. ☆68 · Updated 5 months ago
- ☆118 · Updated 6 months ago
- A very, very small RAG system. ☆62 · Updated 2 months ago