YunwenTechnology / Chinese-Data-Distill-From-R1
A Chinese dataset distilled from the full-strength DeepSeek-R1
☆62 · Updated 8 months ago
Alternatives and similar repositories for Chinese-Data-Distill-From-R1
Users interested in Chinese-Data-Distill-From-R1 are comparing it to the repositories listed below
- ☆234 · Updated last year
- How to train an LLM tokenizer ☆154 · Updated 2 years ago
- Alpaca Chinese Dataset -- a Chinese instruction fine-tuning dataset ☆217 · Updated last year
- An instruction-tuning tool for large language models (supports FlashAttention) ☆178 · Updated last year
- A project for synthesizing datasets for large-model math problem solving, with model training, evaluation, and accompanying write-ups ☆97 · Updated last year
- Focused on Chinese-domain large language models: adapting them to a specific industry or field to build industry-level or company-level domain models ☆126 · Updated 8 months ago
- ☆147 · Updated last year
- Text deduplication ☆76 · Updated last year
- A toolkit for knowledge distillation of large language models ☆195 · Updated last week
- Official repository for the SIGIR 2024 demo paper "An Integrated Data Processing Framework for Pretraining Foundation Models" ☆84 · Updated last year
- Qwen1.5-SFT (Alibaba): fine-tuning of Qwen_Qwen1.5-2B-Chat/Qwen_Qwen1.5-7B-Chat with transformers, LoRA (peft), and inference ☆68 · Updated last year
- A native-Chinese benchmark for evaluating retrieval-augmented generation ☆123 · Updated last year
- Imitate OpenAI with Local Models ☆89 · Updated last year
- SuperCLUE-Agent: a benchmark for core agent capabilities on native Chinese tasks ☆94 · Updated 2 years ago
- Code for the piccolo embedding model from SenseTime ☆143 · Updated last year
- ☆166 · Updated last year
- An LLM training and testing tool built on HuggingFace. Supports a web UI and terminal inference for each model, low-parameter and full-parameter training (pre-training, SFT, RM, PPO, DPO), model merging, and quantization ☆220 · Updated last year
- A multi-dimensional Chinese alignment benchmark for large models (ACL 2024) ☆418 · Updated 2 weeks ago
- ☆312 · Updated 2 years ago
- Chinese LLM fine-tuning (LLM-SFT); math instruction dataset MWP-Instruct; supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B); supports (LoRA, QLoRA, DeepSpeed, UI, TensorboardX); supports (… ☆212 · Updated last year
- ☆164 · Updated 2 years ago
- ☆115 · Updated last year
- [ACL 2024] IEPile: A Large-Scale Information Extraction Corpus ☆207 · Updated 10 months ago
- Chinese instruction-tuning datasets ☆140 · Updated last year
- Qwen models fine-tuning ☆105 · Updated 8 months ago
- Huozi (活字), a general-purpose large language model ☆391 · Updated last year
- LLaMA Factory Document ☆151 · Updated last week
- ☆180 · Updated 2 years ago
- The official code for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆265 · Updated last year
- ☆330 · Updated last year