FlagAI-Open / OpenSeek
OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next-generation models that surpass DeepSeek.
☆124 · Updated this week
Alternatives and similar repositories for OpenSeek:
Users interested in OpenSeek are comparing it to the repositories listed below.
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆130 · Updated 9 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆145 · Updated this week
- A highly capable 2.4B lightweight LLM using only 1T pre-training data, with all details. ☆166 · Updated last week
- Official repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆229 · Updated last month
- ☆124 · Updated 3 weeks ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆247 · Updated 3 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open-source framework for evaluating foundation models. ☆237 · Updated 4 months ago
- Implementation of the LongRoPE paper: Extending LLM Context Window Beyond 2 Million Tokens ☆129 · Updated 8 months ago
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆260 · Updated 7 months ago
- ☆94 · Updated 3 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need ☆220 · Updated 3 weeks ago
- ☆139 · Updated 2 weeks ago
- ☆105 · Updated 4 months ago
- Mixture-of-Experts (MoE) Language Model ☆185 · Updated 6 months ago
- ☆60 · Updated 4 months ago
- A prototype repo for hybrid training with pipeline parallelism and distributed data parallelism, with comments on core code snippets. Feel free to… ☆55 · Updated last year
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆158 · Updated last week
- A lightweight reproduction of DeepSeek-R1-Zero with an in-depth analysis of self-reflection behavior. ☆212 · Updated this week
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆147 · Updated 6 months ago
- A flexible and efficient training framework for large-scale alignment tasks ☆333 · Updated last month
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆313 · Updated 6 months ago
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆177 · Updated last month
- Imitate OpenAI with Local Models ☆88 · Updated 7 months ago
- A personal reimplementation of Google's Infini-transformer using a small 2B model. The project includes both model and train… ☆56 · Updated 11 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆86 · Updated last week
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆393 · Updated 5 months ago
- ☆142 · Updated 8 months ago
- ☆166 · Updated last month
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆180 · Updated last year
- InternEvo is an open-source, lightweight training framework that aims to support model pre-training without the need for extensive dependencie… ☆368 · Updated last week