Paul33333 / SFT-and-DPO
A detailed code demo showing how to run full-parameter Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO).
☆17 · Updated 9 months ago
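For orientation, the DPO objective the repo demonstrates scores each preference pair by comparing policy-vs-reference log-ratios of the chosen and rejected responses. The sketch below is a minimal, hypothetical illustration of that loss (the function name and toy log-probabilities are assumptions, not code from the repository):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair: -log(sigmoid(beta * margin)),
    where the margin is the difference of policy-vs-reference
    log-ratios for the chosen and rejected responses."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # Numerically plain sigmoid; real trainers use logsigmoid for stability.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy numbers (hypothetical): the policy already leans toward the chosen answer,
# so the margin is positive and the loss is below log(2).
loss = dpo_loss(-10.0, -14.0, -11.0, -13.0)
```

Increasing the gap between the chosen and rejected log-ratios drives the loss toward zero, which is the preference signal DPO optimizes directly without a separate reward model.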
Alternatives and similar repositories for SFT-and-DPO
Users who are interested in SFT-and-DPO are comparing it to the repositories listed below.
- Quick-start guide to RAG and private deployment ☆209 · Updated last year
- DeepSpeed Tutorial ☆102 · Updated last year
- LLM Tokenizer with BPE algorithm ☆43 · Updated last year
- From-scratch reproduction notes on LLM topics ☆226 · Updated 5 months ago
- WWW2025 Multimodal Intent Recognition for Dialogue Systems Challenge ☆125 · Updated 11 months ago
- Interview questions and interview experience for programmer roles at major tech companies ☆187 · Updated 5 months ago
- ☆119 · Updated last year
- TinyRAG ☆353 · Updated 3 months ago
- A repository for experimenting with and reproducing the LLM pre-training process ☆473 · Updated 5 months ago
- Qwen3 Fine-tuning: Medical R1 Style Chat ☆211 · Updated 4 months ago
- Kaggle 2024 Eedi 10th-place gold-medal solution ☆42 · Updated 9 months ago
- ☆110 · Updated last year
- An ecosystem of large language models and multimodal models, mainly covering cross-modal search, speculative decoding, QAT quantization, multimodal quantization, chatbots, and OCR ☆191 · Updated 2 months ago
- Full-parameter, LoRA, and QLoRA fine-tuning of Llama 3 ☆210 · Updated last year
- Welcome to the "LLM-travel" repository! Exploring the inner workings of large language models (LLMs) 🚀, dedicated to understanding, discussing, and implementing LLM-related techniques, principles, and applications ☆347 · Updated last year
- ☆113 · Updated 4 months ago
- Training a LLaVA model with better Chinese support, with the training code and data open-sourced ☆74 · Updated last year
- Alibaba Tianchi: 2023 Global Intelligent Automotive AI Challenge — Track 1: LLM retrieval-based QA, baseline score 80+ ☆114 · Updated last year
- personal chatgpt ☆387 · Updated 10 months ago