FlagAI-Open / OpenSeek
OpenSeek aims to unite the global open-source community to drive collaborative innovation in algorithms, data, and systems to develop next-generation models that surpass DeepSeek.
☆222 · Updated this week
Alternatives and similar repositories for OpenSeek
Users interested in OpenSeek are comparing it to the libraries listed below.
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆208 · Updated last month
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆259 · Updated this week
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆432 · Updated 3 months ago
- ☆199 · Updated 4 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆188 · Updated 2 months ago
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆187 · Updated 3 months ago
- ☆161 · Updated 3 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆258 · Updated last month
- A flexible and efficient training framework for large-scale alignment tasks ☆415 · Updated this week
- LLaMA Factory Document ☆148 · Updated last week
- A Comprehensive Survey on Long Context Language Modeling ☆176 · Updated last month
- a-m-team's exploration in large language modeling ☆184 · Updated 2 months ago
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆245 · Updated 6 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆253 · Updated 8 months ago
- ☆292 · Updated 2 months ago
- A toolkit for knowledge distillation of large language models ☆141 · Updated last week
- An O1 Replication for Coding ☆335 · Updated 8 months ago
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated 11 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open-source framework for evaluating foundation models. ☆246 · Updated 9 months ago
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆219 · Updated 2 weeks ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆149 · Updated last year
- ☆811 · Updated 2 months ago
- ☆737 · Updated 2 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆244 · Updated 4 months ago
- An automated pipeline for evaluating LLMs for role-playing. ☆198 · Updated 11 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆136 · Updated last year
- R1-searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning ☆623 · Updated 3 weeks ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆435 · Updated 10 months ago
- ReasonFlux Series - A family of LLM post-training algorithms focusing on data selection, reinforcement learning, and inference scaling ☆481 · Updated 3 weeks ago
- ☆305 · Updated last year