InternEvo is an open-source lightweight training framework that aims to support model pre-training without the need for extensive dependencies.
☆420 · Aug 21, 2025 · Updated 8 months ago
Alternatives and similar repositories for InternEvo
Users interested in InternEvo are comparing it to the libraries listed below.
- InternEvo is a high-performance training system for giant models. ☆38 · Jan 17, 2024 · Updated 2 years ago
- PyTorch Sphinx Theme ☆35 · Jan 3, 2024 · Updated 2 years ago
- ☆36 · Sep 21, 2025 · Updated 7 months ago
- ☆49 · Jul 12, 2023 · Updated 2 years ago
- ☆178 · Mar 12, 2024 · Updated 2 years ago
- LLM Group Chat Framework: chat with multiple LLMs at the same time. ☆323 · Jun 19, 2025 · Updated 10 months ago
- [ACL2024 Findings] Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models ☆360 · Mar 22, 2024 · Updated 2 years ago
- Enhance LLM agents with rich tool APIs ☆410 · Sep 13, 2024 · Updated last year
- State-of-the-art bilingual open-source math reasoning LLMs. ☆543 · Oct 22, 2024 · Updated last year
- ☆900 · Jun 7, 2023 · Updated 2 years ago
- A lightweight framework for building LLM-based agents ☆2,242 · Updated this week
- A next-generation training engine built for ultra-large MoE models ☆5,127 · Updated this week
- Official release of the InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3). ☆7,199 · Oct 30, 2025 · Updated 6 months ago
- Ring attention implementation with flash attention ☆1,014 · Sep 10, 2025 · Updated 7 months ago
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,924 · May 26, 2025 · Updated 11 months ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,823 · Updated this week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆666 · Jan 15, 2026 · Updated 3 months ago
- Run the tokenizer in parallel to achieve superior acceleration ☆20 · Mar 21, 2024 · Updated 2 years ago
- HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance ☆2,487 · Nov 24, 2025 · Updated 5 months ago
- Best practices for training LLaMA models in Megatron-LM ☆664 · Jan 2, 2024 · Updated 2 years ago
- Zero Bubble Pipeline Parallelism ☆452 · May 7, 2025 · Updated 11 months ago
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆127 · Jan 14, 2025 · Updated last year
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆192 · Mar 20, 2025 · Updated last year
- Byted PyTorch Distributed for hyperscale training of LLMs and RL ☆1,009 · Mar 3, 2026 · Updated last month
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆221 · Aug 19, 2024 · Updated last year
- LLM training technologies developed by Kwai ☆71 · Jan 21, 2026 · Updated 3 months ago
- LLM & VLM tutorial ☆1,952 · Apr 22, 2026 · Updated last week
- Large Context Attention ☆770 · Oct 13, 2025 · Updated 6 months ago
- OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMa2, Qwen, GLM, Claude, …) ☆6,939 · Apr 20, 2026 · Updated last week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,295 · Aug 28, 2025 · Updated 8 months ago
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆2,247 · Aug 14, 2025 · Updated 8 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,291 · Updated this week
- An experimental desktop client for using Claude Desktop's MCP with Novelcrafter codices. ☆11 · Dec 3, 2024 · Updated last year
- Distributed compiler based on Triton for parallel systems ☆1,414 · Apr 22, 2026 · Updated last week
- A benchmark suite designed especially for deep learning operators ☆42 · Feb 13, 2023 · Updated 3 years ago
- This project organizes the open-source MOSS SFT data and converts it into the mnbvc multi-turn dialogue format. MOSS-003 covers three dimensions (helpfulness, faithfulness, and harmlessness) with 3.53M samples in total; it also includes finer-grained helpfulness category labels, broader harmlessness data, and longer dialogues, for 6.3M samples in total. ☆12 · Dec 3, 2023 · Updated 2 years ago
- FlashInfer: Kernel Library for LLM Serving ☆5,498 · Updated this week
- Ongoing research training transformer models at scale ☆16,145 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,663 · Apr 7, 2026 · Updated 3 weeks ago