heyblackC / BetterMixture-Top1-Solution
First-place (top 1) solution to the Tianchi algorithm competition "BetterMixture - Large Model Data Mixing Challenge"
☆32 · Updated last year
Alternatives and similar repositories for BetterMixture-Top1-Solution
Users interested in BetterMixture-Top1-Solution are comparing it to the repositories listed below
- LLM applications: RAG, NL2SQL, chatbots, pretraining, MoE (mixture-of-experts) models, fine-tuning, reinforcement learning, and Tianchi data competitions ☆68 · Updated 7 months ago
- DPO training for Qwen (Tongyi Qianwen) ☆55 · Updated 11 months ago
- A simple general-purpose utility project ☆20 · Updated 11 months ago
- A simple text-classification implementation using Qwen2ForSequenceClassification (see the sketch after this list) ☆82 · Updated last year
- Training an LLM from scratch on a single 24 GB GPU ☆55 · Updated 2 months ago
- How to train an LLM tokenizer ☆151 · Updated 2 years ago
- 2023 Global Intelligent Automotive AI Challenge, Track 1: LLM retrieval-based QA, 75+ baseline ☆60 · Updated last year
- ☆117 · Updated last year
- Qwen1.5 SFT (Alibaba): fine-tuning of Qwen_Qwen1.5-2B-Chat / Qwen_Qwen1.5-7B-Chat with transformers, LoRA via peft, and inference ☆67 · Updated last year
- ☆13 · Updated 6 months ago
- Fine-tuning LLMs with the DPO algorithm; simple and easy to get started ☆45 · Updated last year
- Solution write-up for the 2024 CCF International AIOps Challenge, Track 2 (GLM-4): a retrieval-augmented QA challenge on operations knowledge ☆11 · Updated last year
- An RLHF implementation added to ChatGLM-6B, with line-by-line walkthroughs of key code; the examples cover short news-headline generation and RLHF for context-conditioned recommendation ☆88 · Updated 2 years ago
- LLM+RAG for QA ☆23 · Updated last year
- ☆62 · Updated last month
- Alibaba Tianchi: 2023 Global Intelligent Automotive AI Challenge, Track 1: LLM retrieval-based QA, 80+ baseline ☆111 · Updated last year
- Demo for the AIOps 2024 challenge ☆64 · Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆88 · Updated 10 months ago
- ☆114 · Updated 10 months ago
- Data synthesis, model training, and evaluation for LLM math problem solving, with accompanying write-ups ☆95 · Updated last year
- 1st Solution For Conversational Multi-Doc QA Workshop & International Challenge @ WSDM'24 - Xiaohongshu.Inc ☆161 · Updated last month
- Full-parameter, LoRA, and QLoRA fine-tuning of Llama 3 ☆208 · Updated 11 months ago
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continued pretraining to improve … ☆34 · Updated 3 months ago
- An instruction-tuning toolkit for LLMs (FlashAttention supported) ☆178 · Updated last year
- Parameter-efficient fine-tuning of ChatGLM-6B with LoRA and P-Tuning v2 ☆55 · Updated 2 years ago
- A survey of LLM training and serving ☆36 · Updated 2 years ago
- Kaggle 2024 Eedi competition: 10th-place gold-medal solution ☆41 · Updated 8 months ago
- Custom reward development on top of verl ☆114 · Updated 3 months ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year
- Code for a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models ☆65 · Updated 7 months ago