InternLM / InternLM-WQX
☆19 · Updated last year
Alternatives and similar repositories for InternLM-WQX
Users interested in InternLM-WQX are comparing it to the libraries listed below.
- a-m-team's exploration in large language modeling ☆192 · Updated 5 months ago
- [ACL2024 Findings] Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models ☆355 · Updated last year
- [ACL2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆297 · Updated last year
- InternEvo is an open-source, lightweight training framework that aims to support model pre-training without the need for extensive dependencie… ☆411 · Updated 3 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆190 · Updated 8 months ago
- code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated 2 years ago
- Mixture-of-Experts (MoE) Language Model ☆192 · Updated last year
- [ACM'MM 2024 Oral] Official code for "OneChart: Purify the Chart Structural Extraction via One Auxiliary Token" ☆254 · Updated 7 months ago
- ☆235 · Updated last year
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆265 · Updated 9 months ago
- PDF parsing tool: a vLLM-accelerated implementation of GOT, with MinerU for layout detection and cropping and GOT for table/formula parsing, enabling PDF parsing for RAG ☆66 · Updated last year
- An automated pipeline for evaluating LLMs for role-playing. ☆202 · Updated last year
- ☆205 · Updated 3 weeks ago
- ☆103 · Updated this week
- The RedStone repository includes code for preparing extensive datasets used in training large language models. ☆145 · Updated 4 months ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆423 · Updated 6 months ago
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning. ☆160 · Updated last month
- ☆77 · Updated 9 months ago
- ☆142 · Updated last year
- ☆180 · Updated 2 years ago
- ☆314 · Updated last year
- A flexible and efficient training framework for large-scale alignment tasks ☆437 · Updated 3 weeks ago
- Vary-tiny codebase built upon LAVIS (for training from scratch) and PDF image-text pair data (about 600k pairs, English/Chinese) ☆86 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆269 · Updated 9 months ago
- GLM Series Edge Models ☆154 · Updated 5 months ago
- Max's awesome datasets ☆52 · Updated 2 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆404 · Updated last week
- ☆186 · Updated 9 months ago
- Collect every awesome work about r1! ☆421 · Updated 6 months ago