math-eval / TAL-SCQ5K
☆151 · Updated 2 years ago
Alternatives and similar repositories for TAL-SCQ5K
Users interested in TAL-SCQ5K are comparing it to the repositories listed below.
Sorting:
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆141 · Updated last year
- SUS-Chat: Instruction tuning done right ☆49 · Updated last year
- Lightweight local website for displaying the performance of different chat models ☆87 · Updated last year
- SOTA open-source math LLM ☆333 · Updated last year
- Deep learning ☆148 · Updated 5 months ago
- Gaokao Benchmark for AI ☆108 · Updated 3 years ago
- 1st-place solution for the Conversational Multi-Doc QA Workshop & International Challenge @ WSDM'24 - Xiaohongshu Inc. ☆161 · Updated 3 months ago
- ☆147 · Updated last year
- Dataset synthesis, model training, and evaluation for LLM math problem-solving, with notes on related articles ☆95 · Updated last year
- An instruction-tuning tool for large language models (supports FlashAttention) ☆178 · Updated last year
- ☆83 · Updated last year
- The official code for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆264 · Updated last year
- ☆128 · Updated 2 years ago
- A summary of open-source large language models and low-cost ChatGPT replication methods ☆137 · Updated 2 years ago
- Official PyTorch implementation of MathGLM ☆327 · Updated last year
- Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and more ☆98 · Updated last year
- How to train an LLM tokenizer ☆153 · Updated 2 years ago
- ☆330 · Updated last year
- SuperCLUE-Agent: a benchmark for evaluating core agent capabilities on Chinese-native tasks ☆93 · Updated last year
- LingoWhale-8B: open-source bilingual pre-trained large language models ☆143 · Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆102 · Updated 2 years ago
- Imitate OpenAI with local models ☆88 · Updated last year
- A large-scale language model for the scientific domain, trained on the arXiv split of RedPajama ☆136 · Updated last year
- HanFei-1.0 (Han Fei), the first legal large language model in China trained with full-parameter training ☆124 · Updated 2 years ago
- A Chinese-native retrieval-augmented generation evaluation benchmark ☆123 · Updated last year
- ☆234 · Updated last year
- A survey of large language model training and serving ☆36 · Updated 2 years ago
- SuperCLUE-Math6: exploring a new generation of Chinese-native multi-turn, multi-step mathematical reasoning datasets ☆60 · Updated last year
- Fine-tuning based on ChatGLM2-6B, covering full-parameter, parameter-efficient, and quantization-aware training; supports instruction fine-tuning, multi-turn dialogue fine-tuning, and more ☆26 · Updated 2 years ago
- Just for debugging ☆56 · Updated last year