microsoft / RedStone
The RedStone repository includes code for preparing extensive datasets used in training large language models.
☆146 · Updated 6 months ago
Alternatives and similar repositories for RedStone
Users interested in RedStone are comparing it to the libraries listed below.
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 6 months ago
- ☆93 · Updated 7 months ago
- Mixture-of-Experts (MoE) Language Model ☆192 · Updated last year
- ☆320 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆192 · Updated last year
- Reformatted Alignment ☆111 · Updated last year
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆239 · Updated 7 months ago
- ☆178 · Updated 8 months ago
- [COLM 2025] An Open Math Pre-training Dataset with 370B Tokens. ☆109 · Updated 9 months ago
- ☆318 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆138 · Updated last year
- ☆95 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated 2 years ago
- Implementation of the LongRoPE paper: Extending LLM Context Window Beyond 2 Million Tokens ☆151 · Updated last year
- ☆96 · Updated last year
- A highly capable 2.4B lightweight LLM using only 1T pre-training data, with all details. ☆222 · Updated 5 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆257 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open-source framework for evaluating foundation models. ☆253 · Updated last year
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024. ☆59 · Updated last year
- ☆147 · Updated last year
- A toolkit for knowledge distillation of large language models ☆232 · Updated 2 weeks ago
- ☆104 · Updated last year
- Implementation for OAgents: An Empirical Study of Building Effective Agents ☆299 · Updated 2 months ago
- ☆117 · Updated 7 months ago
- ☆50 · Updated last year
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆118 · Updated 7 months ago
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆61 · Updated last year
- Scaling Preference Data Curation via Human-AI Synergy ☆135 · Updated 6 months ago
- ☆83 · Updated last year
- ☆82 · Updated 7 months ago