crazycth / WizardLearner
Pretrain, decay, and SFT a CodeLLM from scratch 🧙‍♂️
☆36 · Updated 9 months ago
Alternatives and similar repositories for WizardLearner:
Users interested in WizardLearner are comparing it to the repositories listed below.
- ☆59 · Updated 3 months ago
- ☆96 · Updated 11 months ago
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems ☆62 · Updated 2 months ago
- A personal reimplementation of Google's Infini-Transformer using a small 2B model. The project includes both model and train… ☆56 · Updated 10 months ago
- Train an LLM from scratch using a single 24 GB GPU ☆50 · Updated 4 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆128 · Updated 9 months ago
- ☆44 · Updated 9 months ago
- ☆141 · Updated 8 months ago
- Code LLM pretraining, fine-tuning & DPO data processing: SOTA industry pipeline ☆33 · Updated 7 months ago
- Chinese large language model evaluation, round 3 ☆24 · Updated 9 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆115 · Updated 4 months ago
- ☆81 · Updated 10 months ago
- A highly capable 2.4B lightweight LLM trained on only 1T of pre-training data, with all details released ☆161 · Updated this week
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continual pre-training improves … ☆33 · Updated 2 months ago
- Feeling confused about superalignment? Here is a reading list ☆42 · Updated last year
- [SIGIR'24] The official implementation code of MOELoRA ☆151 · Updated 7 months ago
- Hammer: Robust Function-Calling for On-Device Language Models via Function Masking ☆63 · Updated 3 weeks ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024 accepted paper) ☆29 · Updated 9 months ago
- How to train an LLM tokenizer ☆142 · Updated last year
- Related works and background techniques for OpenAI o1 ☆216 · Updated 2 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- NaturalCodeBench (Findings of ACL 2024) ☆62 · Updated 5 months ago
- A visualization tool for deeper understanding and easier debugging of RLHF training ☆164 · Updated 3 weeks ago
- Dataset synthesis, model training, and evaluation for LLM mathematical problem solving, with accompanying write-ups ☆79 · Updated 6 months ago
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆70 · Updated last year
- ☆80 · Updated last year
- Unleashing the Power of Cognitive Dynamics on Large Language Models ☆60 · Updated 5 months ago