heyblackC / BetterMixture-Top1-Solution
First-place (top 1) solution to the Tianchi algorithm competition "BetterMixture – LLM Data Mixing Challenge"
☆32 · Updated last year
Alternatives and similar repositories for BetterMixture-Top1-Solution
Users that are interested in BetterMixture-Top1-Solution are comparing it to the libraries listed below
- Simple general-purpose utilities ☆20 · Updated last year
- 2023 Global Intelligent Automotive AI Challenge, Track 1: retrieval-based QA with LLMs, 75+ baseline ☆60 · Updated last year
- LLM applications: RAG, NL2SQL, chatbots, pre-training, MoE (mixture-of-experts) models, fine-tuning, reinforcement learning, Tianchi data competitions ☆71 · Updated 8 months ago
- ☆119 · Updated last year
- Fine-tuning large language models with the DPO algorithm; simple and easy to get started ☆45 · Updated last year
- A minimal text-classification example using Qwen2ForSequenceClassification ☆83 · Updated last year
- Dataset synthesis, model training, and evaluation for LLM mathematical problem solving, with accompanying write-ups ☆95 · Updated last year
- Training an LLM from scratch on a single 24 GB GPU ☆56 · Updated 3 months ago
- How to train an LLM tokenizer ☆153 · Updated 2 years ago
- ☆13 · Updated 7 months ago
- Qwen1.5-SFT (Alibaba): fine-tuning Qwen_Qwen1.5-2B-Chat/Qwen_Qwen1.5-7B-Chat with transformers / LoRA (peft) / inference ☆68 · Updated last year
- Alibaba Tianchi: 2023 Global Intelligent Automotive AI Challenge, Track 1: retrieval-based QA with LLMs, 80+ baseline ☆114 · Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆89 · Updated 11 months ago
- LLM instruction-tuning toolkit (supports FlashAttention) ☆178 · Updated last year
- 1st Solution For Conversational Multi-Doc QA Workshop & International Challenge @ WSDM'24 - Xiaohongshu.Inc ☆161 · Updated 3 months ago
- ☆115 · Updated 11 months ago
- ChatGLM-6B with an added RLHF implementation and line-by-line commentary on key code; the examples cover short news-headline generation and RLHF for context-conditioned recommendation ☆88 · Updated 2 years ago
- LLM+RAG for QA ☆23 · Updated last year
- ☆145 · Updated last year
- 10th-place (gold medal) solution to the Kaggle 2024 Eedi competition ☆43 · Updated 10 months ago
- Demo for the AIOPS24 challenge ☆64 · Updated last year
- A practical guide to large language models: applications and real-world deployment ☆80 · Updated last year
- ☆70 · Updated 3 months ago
- Qwen models fine-tuning ☆105 · Updated 7 months ago
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | continual pre-training to improve … ☆34 · Updated 4 months ago
- The complete training code of an open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF ☆67 · Updated 2 years ago
- A survey of LLM training and serving ☆36 · Updated 2 years ago
- Code for a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models ☆65 · Updated 8 months ago
- Parameter-efficient fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2 ☆55 · Updated 2 years ago
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer ☆129 · Updated 2 years ago