FlagAI-Open / OpenSeek
OpenSeek aims to unite the global open-source community to drive collaborative innovation in algorithms, data, and systems to develop next-generation models.
☆241 · Updated 2 weeks ago
Alternatives and similar repositories for OpenSeek
Users interested in OpenSeek are comparing it to the libraries listed below.
- A highly capable 2.4B lightweight LLM using only 1T tokens of pre-training data, with all training details released. ☆223 · Updated 6 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs. ☆204 · Updated 2 months ago
- ☆209 · Updated 3 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (…). ☆520 · Updated this week
- ☆182 · Updated 9 months ago
- Ling is an MoE LLM provided and open-sourced by InclusionAI. ☆238 · Updated 8 months ago
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference Time Scaling. ☆468 · Updated 8 months ago
- A Comprehensive Survey on Long Context Language Modeling. ☆226 · Updated 2 months ago
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆283 · Updated 11 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale. ☆265 · Updated 7 months ago
- a-m-team's exploration in large language modeling. ☆195 · Updated 8 months ago
- A flexible and efficient training framework for large-scale alignment tasks. ☆447 · Updated 3 months ago
- A toolkit for knowledge distillation of large language models. ☆266 · Updated this week
- ☆520 · Updated last month
- An O1 replication for coding. ☆334 · Updated last year
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens". ☆150 · Updated last year
- ☆814 · Updated 8 months ago
- ☆762 · Updated last month
- LLaMA Factory documentation. ☆164 · Updated last week
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆253 · Updated 5 months ago
- Mixture-of-Experts (MoE) Language Model. ☆195 · Updated last year
- ☆68 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs. ☆260 · Updated last year
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling". ☆283 · Updated 11 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ☆256 · Updated last year
- The RedStone repository includes code for preparing extensive datasets used in training large language models. ☆146 · Updated 2 weeks ago
- ☆320 · Updated last year
- ☆322 · Updated last year
- 青稞Talk (Qingke Talk). ☆190 · Updated 2 weeks ago
- An automated pipeline for evaluating LLMs for role-playing. ☆204 · Updated last year