microsoft / RedStone
The RedStone repository includes code for preparing extensive datasets used in training large language models.
☆142 · Updated 3 months ago
Alternatives and similar repositories for RedStone
Users interested in RedStone are comparing it to the repositories listed below.
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale☆263 · Updated 3 months ago
- Mixture-of-Experts (MoE) Language Model☆189 · Updated last year
- ☆89 · Updated 5 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models☆137 · Updated last year
- Reformatted Alignment☆112 · Updated last year
- [COLM 2025] An Open Math Pre-training Dataset with 370B Tokens.☆101 · Updated 6 months ago
- ☆169 · Updated 5 months ago
- ☆319 · Updated last year
- Ling is a MoE LLM provided and open-sourced by InclusionAI.☆215 · Updated 5 months ago
- ☆296 · Updated 4 months ago
- ☆312 · Updated last year
- ☆96 · Updated 10 months ago
- ☆105 · Updated 4 months ago
- ☆95 · Updated 10 months ago
- [ICLR 2025] The official implementation of the paper "ToolGen: Unified Tool Retrieval and Calling via Generation"☆160 · Updated 6 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens"☆150 · Updated last year
- We aim to provide the best references for searching, selecting, and synthesizing high-quality, large-quantity data for post-training your LLMs.☆60 · Updated last year
- ☆104 · Updated 10 months ago
- ☆83 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation"☆73 · Updated 2 years ago
- Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI.☆151 · Updated 2 weeks ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation☆114 · Updated 5 months ago
- Token-level visualization tools for large language models☆89 · Updated 9 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior.☆247 · Updated 6 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs☆256 · Updated 10 months ago
- Implementation for "OAgents: An Empirical Study of Building Effective Agents"☆271 · Updated last week
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement☆190 · Updated last year
- ☆147 · Updated last year
- MiroThinker is a family of open-source agentic models trained for deep research and complex tool-use scenarios.☆438 · Updated 2 weeks ago
- An automated pipeline for evaluating LLMs on role-playing.☆200 · Updated last year