Ledzy / BAdam
[NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models
☆272 · Updated 7 months ago
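For context, BAdam's core idea is block-coordinate optimization: parameters are split into blocks, and only the currently active block carries Adam optimizer states and receives updates. Below is a minimal, hypothetical PyTorch sketch of that idea. The per-module block partition, round-robin schedule, inner-step count, and the toy model and data are illustrative assumptions, not the repository's actual API or the paper's exact recipe.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))

# Illustrative block partition: one block per top-level module;
# parameter-free modules (e.g. ReLU) yield empty blocks and are dropped.
blocks = [list(m.parameters()) for m in model]
blocks = [b for b in blocks if b]

def train_block(block, batch, inner_steps=3):
    # Freeze everything, then unfreeze only the active block, so Adam's
    # first/second-moment buffers are allocated for that block alone.
    for p in model.parameters():
        p.requires_grad_(False)
    for p in block:
        p.requires_grad_(True)
    opt = torch.optim.Adam(block, lr=1e-3)  # fresh states per activation (an assumption)
    x, y = batch
    loss = None
    for _ in range(inner_steps):
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

x, y = torch.randn(8, 16), torch.randn(8, 16)
for step in range(6):  # cycle through blocks round-robin
    loss = train_block(blocks[step % len(blocks)], (x, y))
    print(f"step {step}: active block {step % len(blocks)}, loss {loss:.4f}")
```

In a real LLM setting the blocks would typically be transformer layers, and the memory saving comes from holding optimizer moments (and, with gradient checkpointing, most activations) only for the active block rather than the full model.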
Alternatives and similar repositories for BAdam
Users interested in BAdam are comparing it to the repositories listed below.
- A recipe for online RLHF and online iterative DPO.☆533 · Updated 9 months ago
- ☆243 · Updated 5 months ago
- MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models☆447 · Updated last year
- Adds Sequence Parallelism to LLaMA-Factory☆578 · Updated last week
- APOLLO: SGD-like Memory, AdamW-level Performance; MLSys'25 Outstanding Paper Honorable Mention☆258 · Updated 6 months ago
- A scalable, end-to-end training pipeline for general-purpose agents☆360 · Updated 3 months ago
- The official implementation of Self-Play Preference Optimization (SPPO)☆582 · Updated 9 months ago
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2☆257 · Updated last month
- ☆203 · Updated 6 months ago
- Codebase for Iterative DPO Using Rule-based Rewards☆260 · Updated 6 months ago
- ☆320 · Updated last month
- Recipes to train reward models for RLHF.☆1,470 · Updated 6 months ago
- Controllable Text Generation for Large Language Models: A Survey☆192 · Updated last year
- a-m-team's exploration in large language modeling☆189 · Updated 4 months ago
- [ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc.☆175 · Updated 4 months ago
- The official implementation of the ICML 2024 paper "MemoryLLM: Towards Self-Updatable Large Language Models" and "M+: Extending MemoryLLM…☆239 · Updated 2 months ago
- Minimal-cost training of a 0.5B R1-Zero☆778 · Updated 5 months ago
- The framework to prune LLMs to any size and any config.☆94 · Updated last year
- Recipes to train self-rewarding reasoning LLMs.☆226 · Updated 7 months ago
- ☆213 · Updated last year
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding☆268 · Updated last year
- Official implementation of TransNormerLLM: A Faster and Better LLM☆247 · Updated last year
- Improves Llama-2's proficiency in the comprehension, generation, and translation of Chinese.☆445 · Updated last year
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (…☆369 · Updated this week
- Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasonin…☆169 · Updated 10 months ago
- ☆116 · Updated last year
- ☆115 · Updated 11 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning☆189 · Updated 7 months ago
- Mixture-of-Experts (MoE) Language Model☆189 · Updated last year
- Counting-Stars (★)☆83 · Updated 4 months ago