microsoft / RedStone
The RedStone repository includes code for preparing extensive datasets used in training large language models.
☆146 · Updated 2 weeks ago
Alternatives and similar repositories for RedStone
Users interested in RedStone are comparing it to the repositories listed below.
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆265 · Updated 6 months ago
- Mixture-of-Experts (MoE) Language Model ☆195 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Updated last year
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆238 · Updated 8 months ago
- ☆93 · Updated 8 months ago
- ☆322 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated 2 years ago
- Implementation of the LongRoPE paper: "Extending LLM Context Window Beyond 2 Million Tokens" ☆150 · Updated last year
- ☆320 · Updated last year
- ☆180 · Updated 9 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆193 · Updated last year
- A toolkit on knowledge distillation for large language models ☆266 · Updated this week
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ☆255 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆260 · Updated last year
- ☆117 · Updated 8 months ago
- ☆84 · Updated last year
- Reformatted Alignment ☆111 · Updated last year
- [COLM 2025] An Open Math Pre-training Dataset with 370B Tokens. ☆109 · Updated 10 months ago
- ☆125 · Updated last year
- ☆96 · Updated last year
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆249 · Updated 9 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆48 · Updated last year
- ☆520 · Updated last month
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆240 · Updated 2 weeks ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆370 · Updated last year
- Implementations of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" ☆81 · Updated last year
- ☆147 · Updated last year
- Async pipelined version of Verl ☆124 · Updated 9 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆203 · Updated 2 months ago
- ☆68 · Updated last year