FlagAI-Open / OpenSeek
OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next-generation models that surpass DeepSeek.
☆204 · Updated last week
Alternatives and similar repositories for OpenSeek
Users interested in OpenSeek are comparing it to the libraries listed below.
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆176 · Updated this week
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆190 · Updated 2 weeks ago
- ☆190 · Updated 2 months ago
- ☆152 · Updated last month
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆251 · Updated 2 weeks ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆250 · Updated 6 months ago
- A flexible and efficient training framework for large-scale alignment tasks ☆384 · Updated this week
- Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling ☆395 · Updated last month
- Implementation of the LongRoPE paper: Extending LLM Context Window Beyond 2 Million Tokens ☆137 · Updated 11 months ago
- ☆269 · Updated 3 weeks ago
- R1-searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning ☆561 · Updated 3 weeks ago
- An automated pipeline for evaluating LLMs for role-playing. ☆186 · Updated 9 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆151 · Updated 2 weeks ago
- a-m-team's exploration in large language modeling ☆160 · Updated 3 weeks ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆220 · Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆133 · Updated last year
- An O1 replication for coding ☆336 · Updated 6 months ago
- Ling is an MoE LLM provided and open-sourced by InclusionAI. ☆169 · Updated last month
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆303 · Updated last month
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated 9 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆410 · Updated 8 months ago
- ☆202 · Updated 4 months ago
- ☆789 · Updated last week
- A visualization tool that enables deeper understanding and easier debugging of RLHF training. ☆213 · Updated 4 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆122 · Updated this week
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆103 · Updated last month
- ☆288 · Updated 10 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆186 · Updated 3 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need ☆302 · Updated this week
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆240 · Updated 2 months ago