XU-YIJIE / hobo-llm-from-scratch
From Llama to DeepSeek, with GRPO and MTP implemented, plus PT/SFT/LoRA/QLoRA included.
☆26 · Updated 2 months ago
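Since GRPO is the headline feature, here is a minimal sketch of GRPO's group-relative advantage step, assuming the standard DeepSeekMath-style formulation; the function name and tensor shapes are illustrative, not this repo's actual code.

```python
# Sketch of GRPO's group-relative advantage (illustrative, not repo code).
import torch

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """rewards: (num_prompts, group_size) scalar rewards, one row per
    prompt, one column per sampled completion of that prompt."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    # Each completion is scored against its own group's statistics,
    # which is what lets GRPO drop PPO's learned value model.
    return (rewards - mean) / (std + 1e-8)

# Example: 2 prompts, 4 sampled completions each.
adv = grpo_advantages(torch.tensor([[1.0, 0.0, 0.0, 1.0],
                                    [0.5, 0.5, 1.0, 0.0]]))
```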
Alternatives and similar repositories for hobo-llm-from-scratch
Users interested in hobo-llm-from-scratch are comparing it to the libraries listed below.
- Train your GRPO with zero dataset and low resources; 8-bit/4-bit/LoRA/QLoRA supported, multi-GPU supported … ☆74 · Updated 2 months ago
- ☆111 · Updated 8 months ago
- RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction. ☆66 · Updated 4 months ago
- ☆64 · Updated 7 months ago
- Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library ☆14 · Updated 3 weeks ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆136 · Updated this week
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems ☆85 · Updated 3 months ago
- A flexible and efficient training framework for large-scale alignment tasks ☆388 · Updated this week
- ☆83 · Updated last year
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆228 · Updated 4 months ago
- How to train an LLM tokenizer ☆151 · Updated 2 years ago
- Train an LLM from scratch on a single 24 GB GPU ☆56 · Updated last week
- Code for a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models ☆64 · Updated 4 months ago
- ☆124 · Updated last year
- Fine-tune large language models with the DPO algorithm; simple and easy to get started (a minimal DPO loss sketch follows this list). ☆40 · Updated last year
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆265 · Updated 11 months ago
- A personal reimplementation of Google's Infini-transformer using a small 2B model. The project includes both model and train… ☆58 · Updated last year
- Counting-Stars (★) ☆83 · Updated last month
- ☆106 · Updated last year
- a-m-team's exploration in large language modeling ☆173 · Updated last month
- slime is an LLM post-training framework aimed at RL scaling. ☆596 · Updated this week
- A highly capable, lightweight 2.4B LLM using only 1T tokens of pre-training data, with all details released. ☆195 · Updated last week
- Related works and background techniques for OpenAI o1 ☆223 · Updated 6 months ago
- ☆95 · Updated 7 months ago
- ☆140 · Updated this week
- ☆20 · Updated 3 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆250 · Updated 7 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆244 · Updated 8 months ago
- LLM & RL ☆158 · Updated this week
- Max's awesome datasets ☆31 · Updated 2 months ago
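For the DPO fine-tuning entry above, a minimal sketch of the standard DPO objective (Rafailov et al., 2023), which such a repo likely implements; the function name and signature here are illustrative assumptions, not that repo's actual API.

```python
# Minimal DPO loss sketch (standard formulation, not any repo's code).
# Inputs are per-example sequence log-probabilities of the chosen and
# rejected responses under the trainable policy and a frozen reference.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Implicit reward margin: policy log-ratio minus reference log-ratio.
    logits = (policy_chosen_logps - policy_rejected_logps) - \
             (ref_chosen_logps - ref_rejected_logps)
    # Maximize the probability that chosen beats rejected.
    return -F.logsigmoid(beta * logits).mean()
```

Here beta trades off fitting the preference data against staying close to the reference model.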