FlagAI-Open / OpenSeek
OpenSeek aims to unite the global open-source community to drive collaborative innovation in algorithms, data, and systems to develop next-generation models that surpass DeepSeek.
☆234 · Updated last month
Alternatives and similar repositories for OpenSeek
Users interested in OpenSeek are comparing it to the repositories listed below.
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆218 · Updated 3 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆191 · Updated 3 weeks ago
- ☆203 · Updated 6 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆369 · Updated this week
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆226 · Updated 5 months ago
- ☆169 · Updated 5 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 3 months ago
- LLaMA Factory Document ☆152 · Updated last week
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆448 · Updated 5 months ago
- a-m-team's exploration in large language modeling ☆189 · Updated 4 months ago
- ☆298 · Updated 4 months ago
- A flexible and efficient training framework for large-scale alignment tasks ☆433 · Updated this week
- A Comprehensive Survey on Long Context Language Modeling ☆193 · Updated 3 months ago
- An O1 Replication for Coding ☆336 · Updated 10 months ago
- A visualization tool that enables deeper understanding and easier debugging of RLHF training. ☆260 · Updated 8 months ago
- ☆748 · Updated last month
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆236 · Updated 2 months ago
- ☆817 · Updated 4 months ago
- Official Repository for "Glyph: Scaling Context Windows via Visual-Text Compression" ☆97 · Updated this week
- A toolkit on knowledge distillation for large language models ☆181 · Updated last week
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆256 · Updated 10 months ago
- [ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction ☆554 · Updated 5 months ago
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated last year
- The RedStone repository includes code for preparing extensive datasets used in training large language models. ☆142 · Updated 3 months ago
- Awesome LLM pre-training resources, including data, frameworks, and methods. ☆269 · Updated 5 months ago
- 青稞Talk ☆151 · Updated last week
- A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning ☆263 · Updated last month
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆251 · Updated 11 months ago
- An automated pipeline for evaluating LLMs for role-playing. ☆200 · Updated last year
- [NeurIPS 2025 Spotlight] ReasonFlux Series - ReasonFlux, ReasonFlux-PRM and ReasonFlux-Coder ☆492 · Updated last month