linjh1118 / Llama3-Chinese-ORPO
A Chinese version of Llama3, obtained from Llama3 through further CPT, SFT, and ORPO
☆17 · Updated last year
Alternatives and similar repositories for Llama3-Chinese-ORPO
Users that are interested in Llama3-Chinese-ORPO are comparing it to the libraries listed below
- Pre-training Llama3 on Chinese data ☆13 · Updated last year
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT samples, 200,000 English multi-turn SFT samples, and … ☆18 · Updated last year
- In this fast-paced world, we all need a little something to spice up life. Whether you need a glass of sweet talk to lift your spirits or… ☆60 · Updated 7 months ago
- The simplest reproduction of R1-style results on a small model, laying out the most important essence shared by O1-like models and DeepSeek R1: "Think is all you need." Experiments support that, for strong reasoning ability, the explicit thinking-process content is the core of AGI/ASI. ☆45 · Updated 11 months ago
- Copy the MLP of Llama3 8 times as 8 experts, create a router with random initialization, and add a load-balancing loss to construct an 8x8b Mo… ☆27 · Updated last year (a minimal sketch of this upcycling recipe appears after this list)
- ☆16 · Updated last year
- The newest version of Llama3, with the source code explained line by line in Chinese ☆22 · Updated last year
- First-place (top-1) solution to the Tianchi algorithm competition "BetterMixture - Large Model Data Mixing Challenge" ☆33 · Updated last year
- A fluent, scalable, and easy-to-use LLM data processing framework. ☆26 · Updated this week
- Collection of model-centric MCP servers ☆24 · Updated 7 months ago
- ☆28 · Updated last year
- ☆28 · Updated last year
- (Writing in progress.) This repository is tutorial-oriented: using "model Sinicization" (adapting a model to Chinese) as a typical model-training scenario, it guides readers through hands-on secondary fine-tuning of LLMs. ☆36 · Updated last year
- ☆95 · Updated last year
- 🔥Your Daily Dose of AI Research from Hugging Face 🔥 Stay updated with the latest AI breakthroughs! This bot automatically collects and… ☆56 · Updated last week
- 🤗 HF Downloader (Hugging Face Downloader) 📦 A user-friendly GUI tool for downloading Hugging Face resources with enhanced connectivity… ☆13 · Updated last year
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024 accepted paper) ☆32 · Updated last year
- Qwen1.5-SFT (Alibaba): fine-tuning (transformers) / LoRA (peft) / inference for Qwen_Qwen1.5-2B-Chat and Qwen_Qwen1.5-7B-Chat ☆69 · Updated last year
- Come raise an AI cat with its own dedicated memory! ☆10 · Updated last year
- ☆46 · Updated 8 months ago
- Dataset synthesis, model training, and evaluation for LLM mathematical problem-solving ability, with accompanying write-ups. ☆98 · Updated last year
- Adapt an LLM into a Mixture-of-Experts model using parameter-efficient fine-tuning (LoRA), injecting the LoRA adapters into the FFN. ☆76 · Updated 2 months ago (see the LoRA-MoE sketch after this list)
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆59 · Updated last year
- GLM Series Edge Models ☆156 · Updated 6 months ago
- A highly capable, lightweight 2.4B LLM using only 1T of pre-training data, with all details. ☆222 · Updated 5 months ago
- A survey of large language model training and serving ☆37 · Updated 2 years ago
- open-o1: Using GPT-4o with CoT to Create o1-like Reasoning Chains ☆116 · Updated last year
- This repository provides an implementation of "A Simple yet Effective Training-free Prompt-free Approach to Chinese Spelling Correction B… ☆85 · Updated 6 months ago
- Generate multi-round conversation roleplay data based on self-instruct and evol-instruct. ☆136 · Updated last year
- DPO training for Tongyi Qianwen (Qwen) ☆61 · Updated last year
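
For the "copy the Llama3 MLP 8 times as 8 experts" entry above, here is a minimal, hedged sketch of that MoE upcycling recipe: duplicate a pretrained dense FFN into experts, add a randomly initialized router, and attach a load-balancing auxiliary loss. All class, module, and variable names below are illustrative assumptions, not that repository's actual code.

```python
# Sketch only: upcycle a pretrained dense FFN into a top-k MoE layer.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpcycledMoE(nn.Module):
    def __init__(self, dense_ffn: nn.Module, hidden_size: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Each expert starts as an exact copy of the pretrained dense FFN.
        self.experts = nn.ModuleList([copy.deepcopy(dense_ffn) for _ in range(num_experts)])
        # The router is newly created with random initialization.
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.num_experts = num_experts
        self.top_k = top_k

    def forward(self, x):                              # x: (tokens, hidden)
        probs = self.router(x).softmax(dim=-1)         # (tokens, num_experts)
        weights, idx = probs.topk(self.top_k, dim=-1)  # route each token to top-k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for e in range(self.num_experts):
            mask = (idx == e)                          # which tokens picked expert e
            if mask.any():
                token_ids, slot = mask.nonzero(as_tuple=True)
                out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * self.experts[e](x[token_ids])
        # Switch-Transformer-style load-balancing loss: push expert usage toward uniform.
        frac_tokens = F.one_hot(idx[:, 0], self.num_experts).float().mean(dim=0)
        frac_probs = probs.mean(dim=0)
        aux_loss = self.num_experts * (frac_tokens * frac_probs).sum()
        return out, aux_loss
```

The key design point of this recipe is that, at initialization, the MoE layer behaves almost like the original dense model (all experts are identical), so continued pre-training can specialize the experts gradually while the auxiliary loss keeps the router from collapsing onto a single expert.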
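
For the "adapt an LLM to a Mixture-of-Experts via LoRA in the FFN" entry, here is a similarly hedged sketch under the assumption that the base FFN projection stays frozen and each "expert" is just a low-rank LoRA delta mixed per token by a small router. Names such as `LoRAExpertLinear`, `rank`, and `num_experts` are assumptions for illustration, not that repository's API.

```python
# Sketch only: a frozen base Linear plus several LoRA "experts" combined by a router.
import torch
import torch.nn as nn

class LoRAExpertLinear(nn.Module):
    def __init__(self, base: nn.Linear, num_experts: int = 4, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():       # base weights stay frozen (parameter-efficient)
            p.requires_grad_(False)
        in_f, out_f = base.in_features, base.out_features
        # One low-rank (A, B) pair per expert; B starts at zero so the delta is zero at init.
        self.lora_A = nn.Parameter(torch.randn(num_experts, rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(num_experts, out_f, rank))
        self.router = nn.Linear(in_f, num_experts, bias=False)

    def forward(self, x):                       # x: (tokens, in_features)
        gate = self.router(x).softmax(dim=-1)   # (tokens, num_experts)
        # Per-expert low-rank update B_e @ (A_e @ x), mixed by the router weights.
        low = torch.einsum("eri,ti->ter", self.lora_A, x)        # (tokens, experts, rank)
        delta = torch.einsum("eor,ter->teo", self.lora_B, low)   # (tokens, experts, out)
        delta = (gate.unsqueeze(-1) * delta).sum(dim=1)          # (tokens, out)
        return self.base(x) + delta
```

Because only the LoRA matrices and the router are trainable, this keeps the number of new parameters small while still giving the FFN a mixture-of-experts structure.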