InternLM / InternLM-Math
State-of-the-art bilingual open-source math reasoning LLMs.
☆463 · Updated 2 months ago
Alternatives and similar repositories for InternLM-Math:
Users interested in InternLM-Math are comparing it to the libraries listed below.
- [ACL 2024 Findings] Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models ☆337 · Updated 9 months ago
- Enhance LLM agents with rich tool APIs ☆364 · Updated 4 months ago
- InternEvo is an open-source lightweight training framework that aims to support model pre-training without the need for extensive dependencies ☆326 · Updated this week
- ☆49 · Updated last year
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆256 · Updated 9 months ago
- ☆37 · Updated 5 months ago
- LLM group chat framework: chat with multiple LLMs at the same time. ☆261 · Updated 9 months ago
- InternEvo is a high-performance training system for giant models. ☆37 · Updated last year
- ☆264 · Updated 5 months ago
- PyTorch Sphinx Theme ☆35 · Updated last year
- A series of technical reports on Slow Thinking with LLMs ☆297 · Updated last week
- A lightweight framework for building LLM-based agents ☆1,978 · Updated this week
- ☆295 · Updated last month
- ☆300 · Updated 3 months ago
- ☆902 · Updated 6 months ago
- Large Reasoning Models ☆787 · Updated last month
- MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts ☆263 · Updated last month
- ☆206 · Updated 8 months ago
- ☆432 · Updated 2 weeks ago
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆491 · Updated 7 months ago
- ☆900 · Updated last year
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,083 · Updated last year
- SOTA open-source math LLM ☆329 · Updated last year
- [NeurIPS D&B 2024] Generative AI for Math: MathPile ☆401 · Updated 2 months ago
- ☆812 · Updated last week
- WanJuan 1.0 multimodal corpus (万卷1.0多模态语料) ☆555 · Updated last year
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" (ICLR 2024) ☆355 · Updated 4 months ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆521 · Updated 2 weeks ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆687 · Updated 3 months ago
- ☆247 · Updated 5 months ago