XU-YIJIE / hobo-llm-from-scratch
From Llama to DeepSeek, with GRPO and MTP implemented; pretraining, SFT, LoRA, and QLoRA included
☆30 · Updated 8 months ago
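The description mentions GRPO among the implemented techniques. As an illustration of the core idea, here is a minimal sketch of GRPO's group-relative advantage computation in PyTorch; the function name and tensor shapes are assumptions for illustration, not code from this repository:

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages: normalize each completion's reward
    against the mean/std of its own sampling group (illustrative sketch).

    rewards: shape (num_prompts, group_size), one row per prompt,
    one column per sampled completion.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled completions per prompt.
rewards = torch.tensor([[1.0, 0.0, 0.5, 0.5],
                        [0.0, 0.0, 1.0, 0.0]])
print(grpo_advantages(rewards))  # above-group-average completions get positive advantage
```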
Alternatives and similar repositories for hobo-llm-from-scratch
Users interested in hobo-llm-from-scratch are comparing it to the libraries listed below.
- Train your own GRPO with zero dataset and low resources; 8-bit/4-bit/LoRA/QLoRA supported, multi-GPU supported (a minimal LoRA layer sketch appears after this list) ... ☆79 · Updated 7 months ago
- ☆115 · Updated last year
- How to train an LLM tokenizer (see the tokenizer sketch after this list) ☆154 · Updated 2 years ago
- ☆125 · Updated last year
- ☆65 · Updated last year
- RLHF experiments on a single A100 40 GB GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction. ☆76 · Updated 10 months ago
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems ☆93 · Updated last month
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆116 · Updated 2 years ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models ☆253 · Updated last year
- ☆146 · Updated last year
- Counting-Stars (★) ☆83 · Updated 3 weeks ago
- Train an LLM from scratch using a single 24 GB GPU ☆55 · Updated 5 months ago
- Full-stack LLM (pre-training/finetuning, PPO (RLHF), inference, quantization, etc.) ☆30 · Updated 10 months ago
- Code for A New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models ☆66 · Updated 10 months ago
- Reinforcement learning training for LLMs such as GPT-2, LLaMA, and BLOOM ☆26 · Updated 2 years ago
- ☆96 · Updated last year
- A visualization tool for deeper understanding and easier debugging of RLHF training ☆274 · Updated 10 months ago
- A highly capable, lightweight 2.4B LLM using only 1T of pre-training data, with all details ☆222 · Updated 4 months ago
- ☆84 · Updated 2 years ago
- ☆40 · Updated last year
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆269 · Updated last year
- a-m-team's exploration in large language modeling ☆195 · Updated 6 months ago
- Scripts for LLM pre-training and fine-tuning (with/without LoRA, DeepSpeed) ☆86 · Updated last year
- ☆39 · Updated 9 months ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2B model. The project includes both model and train… ☆58 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆257 · Updated last year
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆264 · Updated 5 months ago
- ☆105 · Updated last year
- Pretrain, decay, and SFT a CodeLLM from scratch 🧙‍♂️ ☆39 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆284 · Updated 2 years ago
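Several entries above advertise LoRA/QLoRA support. As an illustration of the core idea, here is a minimal sketch of a LoRA-wrapped linear layer in PyTorch; the class name, rank, and scaling factor are assumptions for illustration, not code from any listed repository:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a trainable low-rank update (illustrative sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the adapter matrices are trained
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # update starts at zero, so initial output is unchanged
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.lora_b(self.lora_a(x)) * self.scaling

layer = LoRALinear(nn.Linear(512, 512), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 512 * 8 = 8192 adapter parameters vs. 262656 frozen base parameters
```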
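For the tokenizer-training entry above, here is a minimal sketch of training a byte-level BPE tokenizer with the Hugging Face `tokenizers` library; the corpus path, vocabulary size, and special tokens are placeholder assumptions, not details from that repository:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Byte-level BPE: every byte is representable, so no true OOV tokens occur.
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)

trainer = trainers.BpeTrainer(
    vocab_size=32000,                            # placeholder size
    special_tokens=["<pad>", "<bos>", "<eos>"],  # placeholder special tokens
)
tokenizer.train(files=["corpus.txt"], trainer=trainer)  # placeholder corpus path
tokenizer.save("tokenizer.json")

print(tokenizer.encode("hello world").tokens)
```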