schinger / FullLLM
Full-stack LLM (pre-training/fine-tuning, PPO (RLHF), inference, quantization, etc.)
☆27 · Updated 6 months ago
Alternatives and similar repositories for FullLLM
Users that are interested in FullLLM are comparing it to the libraries listed below
- Reinforcement Learning in LLM and NLP. ☆51 · Updated 2 weeks ago
- Custom reward development on top of verl. ☆107 · Updated 3 months ago
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems. ☆87 · Updated 5 months ago
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat. ☆114 · Updated 2 years ago
- Related works and background techniques behind OpenAI o1. ☆224 · Updated 7 months ago
- Train your GRPO with zero dataset and low resources; 8-bit/4-bit/LoRA/QLoRA supported, multi-GPU supported ... ☆75 · Updated 4 months ago
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆246 · Updated 6 months ago
- Fine-tuning large language models with the DPO algorithm; simple and easy to get started with. ☆43 · Updated last year
- ☆145 · Updated last year
- a-m-team's exploration in large language modeling. ☆186 · Updated 3 months ago
- LLM & RL. ☆198 · Updated last week
- Code for a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models. ☆65 · Updated 6 months ago
- RLHF experiments on a single A100 40 GB GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction. ☆69 · Updated 6 months ago
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning". ☆134 · Updated last month
- How to train an LLM tokenizer. ☆152 · Updated 2 years ago
- LeetCode Training and Evaluation Dataset. ☆30 · Updated 4 months ago
- Training an LLM from scratch on a single 24 GB GPU. ☆55 · Updated last month
- ☆129 · Updated last year
- Dataset synthesis, model training, and evaluation for the mathematical problem-solving ability of large models, with accompanying write-ups. ☆93 · Updated 11 months ago
- ☆146 · Updated last year
- Collection of papers on scalable automated alignment. ☆93 · Updated 10 months ago
- A curated list of awesome works in the Routing LLMs paradigm (👉 contributions to this repository are welcome). ☆52 · Updated last month
- ☆52 · Updated last week
- Fantastic Data Engineering for Large Language Models. ☆90 · Updated 8 months ago
- Trinity-RFT is a general-purpose, flexible, and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆298 · Updated this week
- ☆30 · Updated 5 months ago
- Notes on learning reinforcement learning through animations. ☆57 · Updated 6 months ago
- ☆85 · Updated 6 months ago
- LLaMA-TRL: Fine-tuning LLaMA with PPO and LoRA. ☆226 · Updated 2 weeks ago
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continued pre-training improves … ☆34 · Updated 3 months ago
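Several of the repositories above center on preference-based fine-tuning methods such as DPO. As a minimal orientation (a self-contained sketch, not code from any repository listed here), the DPO loss for a single preference pair can be computed from the summed token log-probabilities of each response under the trainable policy and a frozen reference model:

```python
import math

def dpo_loss(policy_logp_chosen: float, policy_logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one (chosen, rejected) preference pair.

    Inputs are summed token log-probabilities of each response under
    the trainable policy and a frozen reference model.
    """
    # Implicit reward of each response: beta * log(pi / pi_ref)
    chosen_reward = beta * (policy_logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (policy_logp_rejected - ref_logp_rejected)
    # Loss = -log sigmoid(reward margin); shrinks as the policy
    # prefers the chosen response more strongly than the reference does
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Policy already favors the chosen response relative to the reference,
# so the loss is below log(2) (the value at a zero margin)
print(dpo_loss(-10.0, -20.0, -12.0, -18.0))
```

In practice the full-repo implementations compute this batched over token log-probabilities with a library such as PyTorch; the scalar version above only illustrates the objective.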