InternEvo is an open-source, lightweight training framework that aims to support model pre-training without the need for extensive dependencies.
☆419 · Aug 21, 2025 · Updated 7 months ago
Alternatives and similar repositories for InternEvo
Users interested in InternEvo are comparing it to the libraries listed below.
- InternEvo is a high-performance training system for giant models. ☆38 · Jan 17, 2024 · Updated 2 years ago
- PyTorch Sphinx Theme ☆35 · Jan 3, 2024 · Updated 2 years ago
- ☆36 · Sep 21, 2025 · Updated 6 months ago
- ☆49 · Jul 12, 2023 · Updated 2 years ago
- ☆176 · Mar 12, 2024 · Updated 2 years ago
- LLM Group Chat Framework: chat with multiple LLMs at the same time. ☆323 · Jun 19, 2025 · Updated 9 months ago
- [ACL2024 Findings] Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models ☆359 · Mar 22, 2024 · Updated last year
- Enhance LLM agents with rich tool APIs ☆405 · Sep 13, 2024 · Updated last year
- State-of-the-art bilingual open-source Math reasoning LLMs. ☆543 · Oct 22, 2024 · Updated last year
- ☆902 · Jun 7, 2023 · Updated 2 years ago
- A lightweight framework for building LLM-based agents ☆2,231 · Updated this week
- A Next-Generation Training Engine Built for Ultra-Large MoE Models ☆5,104 · Updated this week
- Official release of InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3). ☆7,172 · Oct 30, 2025 · Updated 4 months ago
- Ring attention implementation with flash attention ☆996 · Sep 10, 2025 · Updated 6 months ago
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,923 · May 26, 2025 · Updated 9 months ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,694 · Mar 13, 2026 · Updated last week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆649 · Jan 15, 2026 · Updated 2 months ago
- Use the tokenizer in parallel to achieve superior acceleration ☆20 · Mar 21, 2024 · Updated 2 years ago
- HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance ☆2,481 · Nov 24, 2025 · Updated 3 months ago
- Best practice for training LLaMA models in Megatron-LM ☆664 · Jan 2, 2024 · Updated 2 years ago
- Zero Bubble Pipeline Parallelism ☆451 · May 7, 2025 · Updated 10 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆127 · Jan 14, 2025 · Updated last year
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆193 · Mar 20, 2025 · Updated last year
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆1,000 · Mar 3, 2026 · Updated 2 weeks ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Aug 19, 2024 · Updated last year
- LLM training technologies developed by kwai ☆71 · Jan 21, 2026 · Updated 2 months ago
- LLM & VLM Tutorial ☆1,943 · May 5, 2025 · Updated 10 months ago
- Large Context Attention ☆769 · Oct 13, 2025 · Updated 5 months ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,273 · Aug 28, 2025 · Updated 6 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,233 · Aug 14, 2025 · Updated 7 months ago
- OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMa2, Qwen, GLM, Claude, … ☆6,765 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,211 · Updated this week
- An experimental desktop client for using Claude Desktop's MCP with Novelcrafter codices. ☆10 · Dec 3, 2024 · Updated last year
- Distributed Compiler based on Triton for Parallel Systems ☆1,386 · Mar 11, 2026 · Updated last week
- A benchmark suite designed especially for deep learning operators ☆42 · Feb 13, 2023 · Updated 3 years ago
- This project organizes the open-source MOSS SFT data and converts it into the mnbvc multi-turn dialogue format. MOSS-003 covers three dimensions (helpfulness, faithfulness, and harmlessness) with 3.53M samples in total; MOSS-003 includes finer-grained helpfulness category labels, broader harmlessness data, and longer dialogue turns, with 6.3M samples in total. ☆12 · Dec 3, 2023 · Updated 2 years ago
- Ongoing research training transformer models at scale ☆15,744 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,617 · Feb 19, 2026 · Updated last month
- FlashInfer: Kernel Library for LLM Serving ☆5,145 · Updated this week