FlagAI-Open / OpenSeek
OpenSeek aims to unite the global open-source community to drive collaborative innovation in algorithms, data, and systems, with the goal of developing next-generation models that surpass DeepSeek.
☆210 · Updated 2 weeks ago
Alternatives and similar repositories for OpenSeek
Users interested in OpenSeek are comparing it to the repositories listed below.
- A highly capable 2.4B-parameter lightweight LLM trained on only 1T tokens of pre-training data, with all details released. ☆195 · Updated last week
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆410 · Updated last month
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆175 · Updated last month
- ☆193 · Updated 2 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs (a minimal latent-attention sketch appears after this list) ☆178 · Updated 3 weeks ago
- a-m-team's exploration in large language modeling ☆171 · Updated last month
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆251 · Updated this week
- A flexible and efficient training framework for large-scale alignment tasks ☆385 · Updated this week
- ☆154 · Updated 2 months ago
- LLaMA Factory documentation ☆140 · Updated last month
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆228 · Updated 4 months ago
- Mixture-of-Experts (MoE) Language Model (see the toy expert-routing sketch after this list) ☆189 · Updated 10 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" (a basic position-interpolation sketch appears after this list) ☆146 · Updated 11 months ago
- An automated pipeline for evaluating LLMs for role-playing. ☆189 · Updated 9 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆161 · Updated this week
- ☆280 · Updated last month
- ☆796 · Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆135 · Updated last year
- An O1 replication for coding ☆335 · Updated 7 months ago
- ☆64 · Updated 7 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆421 · Updated 8 months ago
- ☆728 · Updated last month
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆250 · Updated 6 months ago
- ☆294 · Updated 11 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ☆244 · Updated 8 months ago
- Trinity-RFT is a general-purpose, flexible, and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (LLMs). ☆136 · Updated this week
- ☆319 · Updated 9 months ago
- The RedStone repository includes code for preparing extensive datasets used in training large language models. ☆136 · Updated last week
- ☆89 · Updated last month
- slime is an LLM post-training framework aimed at RL scaling. ☆553 · Updated this week
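
Several entries above are easier to compare with their core mechanisms in mind. For the Multi-Head Latent Attention entry, the central idea is that keys and values are reconstructed from a small cached latent vector rather than cached in full. Below is a minimal, hypothetical PyTorch sketch; all module names and dimensions are made up for illustration and are not taken from any repository in this list.

```python
import torch
import torch.nn as nn

class LatentKVAttention(nn.Module):
    """Toy latent-KV attention: cache a small latent per token, not full K/V."""
    def __init__(self, d_model=512, n_heads=8, d_latent=64):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_down = nn.Linear(d_model, d_latent)  # compress hidden state to a latent
        self.k_up = nn.Linear(d_latent, d_model)     # reconstruct keys from the latent
        self.v_up = nn.Linear(d_latent, d_model)     # reconstruct values from the latent
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):
        b, t, _ = x.shape
        latent = self.kv_down(x)  # (b, t, d_latent): all a KV cache would need to store
        q = self.q_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(latent).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(latent).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        y = nn.functional.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.out(y.transpose(1, 2).reshape(b, t, -1))
```

Per token, the cache holds d_latent floats instead of 2 × d_model, which is where the inference savings come from; DeepSeek's actual MLA adds decoupled rotary components and other details omitted here.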
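
Likewise, the MoE entries (Ling, Skywork-MoE, and the MoE language model above) share one routing mechanism: a learned gate sends each token through a small number of expert MLPs and mixes their outputs. A toy top-2 router follows, with illustrative names and sizes only.

```python
import torch
import torch.nn as nn

class Top2MoE(nn.Module):
    """Toy Mixture-of-Experts layer with top-2 token routing."""
    def __init__(self, d_model=512, d_ff=2048, n_experts=8):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                              # x: (tokens, d_model)
        weights, idx = self.gate(x).topk(2, dim=-1)    # pick 2 experts per token
        weights = weights.softmax(dim=-1)              # normalize mixing weights
        out = torch.zeros_like(x)
        for slot in range(2):                          # dispatch each routing slot
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out
```

Only 2 of the 8 expert MLPs run for any given token, so parameter count grows with n_experts while per-token compute stays roughly constant; production MoE layers add load-balancing losses and batched dispatch that this sketch skips.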
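
Finally, for the LongRoPE entry: the basic mechanism behind RoPE-based context extension is rescaling positions so that a longer sequence maps into the rotary-frequency range seen during pre-training. LongRoPE itself searches for non-uniform per-dimension factors; the hypothetical sketch below shows only plain linear position interpolation, with assumed names and dimensions.

```python
import torch

def rope_angles(positions, dim=64, base=10000.0, scale=1.0):
    # scale < 1 compresses positions; e.g. scale=0.25 maps a 4x-longer
    # context back into the position range the model was trained on
    inv_freq = base ** (-torch.arange(0, dim, 2).float() / dim)
    return torch.outer(positions.float() * scale, inv_freq)  # (seq, dim/2)

def apply_rope(x, angles):
    # rotate consecutive channel pairs of x by the given angles
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# With scale=0.25, positions 0..8191 reuse the angle range of positions 0..2047.
q = torch.randn(8192, 64)
q_rot = apply_rope(q, rope_angles(torch.arange(8192), dim=64, scale=0.25))
```

Naive uniform scaling tends to degrade short-context quality, which is why methods like LongRoPE search for per-dimension factors instead of a single global scale.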