FlagAI-Open / OpenSeek
OpenSeek aims to unite the global open-source community to drive collaborative innovation in algorithms, data, and systems to develop next-generation models that surpass DeepSeek.
☆217 · Updated last month
Alternatives and similar repositories for OpenSeek
Users interested in OpenSeek are comparing it to the repositories listed below.
- A highly capable 2.4B lightweight LLM using only 1T tokens of pre-training data, with all details disclosed. ☆200 · Updated last week
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆183 · Updated last month
- ☆157 · Updated 3 months ago
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference Time Scaling ☆417 · Updated 2 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆204 · Updated this week
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆255 · Updated 3 weeks ago
- ☆198 · Updated 3 months ago
- A visualization tool that enables deeper understanding and easier debugging of RLHF training. ☆238 · Updated 5 months ago
- a-m-team's exploration in large language modeling ☆178 · Updated 2 months ago
- An O1 replication for coding ☆336 · Updated 7 months ago
- A flexible and efficient training framework for large-scale alignment tasks ☆395 · Updated this week
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆181 · Updated 2 months ago
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated 10 months ago
- An automated pipeline for evaluating LLMs for role-playing. ☆192 · Updated 10 months ago
- ☆287 · Updated 2 months ago
- ☆173 · Updated last month
- ☆65 · Updated 8 months ago
- ☆800 · Updated last month
- ☆733 · Updated 2 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆169 · Updated 3 weeks ago
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems ☆85 · Updated 4 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need ☆335 · Updated 3 weeks ago
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆146 · Updated 7 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆188 · Updated 4 months ago
- Documentation for LLaMA Factory ☆146 · Updated 2 weeks ago
- ☆298 · Updated last year
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆146 · Updated last year
- The RedStone repository includes code for preparing extensive datasets used in training large language models. ☆136 · Updated last month
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling". ☆268 · Updated 5 months ago
- ☆300 · Updated last month