XU-YIJIE / hobo-llm-from-scratch
From Llama to DeepSeek, with GRPO and MTP implemented; PT/SFT/LoRA/QLoRA included
☆30 · Updated 7 months ago
Alternatives and similar repositories for hobo-llm-from-scratch
Users interested in hobo-llm-from-scratch are comparing it to the repositories listed below
- Train your GRPO with zero dataset and low resources; 8-bit/4-bit/LoRA/QLoRA and multi-GPU supported …☆79 · Updated 7 months ago
- Train an LLM from scratch on a single 24 GB GPU☆55 · Updated 5 months ago
- RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction.☆75 · Updated 9 months ago
- How to train an LLM tokenizer☆154 · Updated 2 years ago
- ☆115 · Updated last year
- ☆65 · Updated last year
- ☆84 · Updated 2 years ago
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems☆93 · Updated last month
- OpenSeek aims to unite the global open-source community to drive collaborative innovation in algorithms, data, and systems to develop next…☆241 · Updated last week
- A visualization tool for deeper understanding and easier debugging of RLHF training.☆272 · Updated 9 months ago
- An implementation of Transformer, BERT, GPT, and diffusion models for learning purposes☆159 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models.☆253 · Updated last year
- ☆125 · Updated last year
- PyTorch distributed training☆72 · Updated 2 years ago
- ☆40 · Updated last year
- LLaMA Factory Document☆159 · Updated last week
- ☆105 · Updated last year
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continued pre-training to improve …☆36 · Updated 6 months ago
- Counting-Stars (★)☆83 · Updated 2 weeks ago
- A highly capable 2.4B lightweight LLM trained on only 1T tokens of pre-training data, with all details released.☆221 · Updated 4 months ago
- ☆146 · Updated last year
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks☆269 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.☆67 · Updated 2 years ago
- A flexible and efficient training framework for large-scale alignment tasks☆442 · Updated last month
- ☆33 · Updated 6 months ago
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat☆116 · Updated 2 years ago
- Related works and background techniques behind OpenAI o1☆221 · Updated 11 months ago
- Code LLM pre-training, fine-tuning, and DPO data processing; industry SOTA processing pipeline☆46 · Updated last year
- a-m-team's exploration in large language modeling☆194 · Updated 6 months ago
- ☆147 · Updated last year