☆922 · Updated May 22, 2024
Alternatives and similar repositories for PandaLM
Users that are interested in PandaLM are comparing it to the libraries listed below
- ☆117 · Updated Jun 13, 2023
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,591 · Updated Jun 3, 2025
- ⚡LLM Zoo is a project that provides data, models, and evaluation benchmarks for large language models.⚡ ☆2,952 · Updated Nov 26, 2023
- BELLE: Be Everyone's Large Language model Engine (an open-source Chinese dialogue LLM) ☆8,284 · Updated Oct 16, 2024
- Generative Judge for Evaluating Alignment ☆248 · Updated Jan 18, 2024
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,769 · Updated Aug 4, 2024
- ☆771 · Updated Jun 13, 2024
- Instruction Tuning with GPT-4 ☆4,338 · Updated Jun 11, 2023
- [NeurIPS 2023] RRHF & Wombat ☆808 · Updated Sep 22, 2023
- Chinese-LLaMA 1&2 and Chinese-Falcon base models; ChatFlow Chinese dialogue model; Chinese OpenLLaMA model; NLP pretraining/instruction-tuning datasets ☆3,055 · Updated Apr 14, 2024
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., lora, p-tunin… ☆2,800 · Updated Dec 12, 2023
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,961 · Updated Aug 9, 2025
- Dromedary: towards helpful, ethical and reliable LLMs. ☆1,144 · Updated Sep 18, 2025
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,824 · Updated Jul 27, 2025
- Large-scale, Informative, and Diverse Multi-round Chat Data (and Models) ☆2,805 · Updated Mar 13, 2024
- A large-scale 7B pretraining language model developed by BaiChuan-Inc. ☆5,677 · Updated Jul 18, 2024
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,591 · Updated Nov 24, 2025
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,739 · Updated Jan 8, 2024
- The repository for the paper "Evaluating Open-QA Evaluation" ☆25 · Updated Apr 9, 2024
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,610 · Updated Aug 30, 2023
- Aligning pretrained language models with instruction data generated by themselves. ☆4,587 · Updated Mar 27, 2023
- [ACL'24] A Knowledge-grounded Interactive Evaluation Framework for Large Language Models ☆39 · Updated Jul 19, 2024
- Resource, Evaluation and Detection Papers for ChatGPT ☆456 · Updated Mar 21, 2024
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,428 · Updated Jun 2, 2025
- Measuring Massive Multitask Chinese Understanding ☆89 · Updated Mar 24, 2024
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆137 · Updated Jul 8, 2024
- An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All. ☆8,490 · Updated Feb 15, 2026
- Example models using DeepSpeed ☆6,807 · Updated Mar 4, 2026
- An open-source tool-augmented conversational language model from Fudan University ☆12,096 · Updated Jul 13, 2024
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,827 · Updated Jun 17, 2025
- The Panda project, launched in May 2023, is an open-source overseas Chinese large language model project that explores the full technology stack in the era of large models, aiming to advance innovation and collaboration in Chinese NLP. ☆1,036 · Updated Oct 19, 2023
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) ☆7,664 · Updated Jul 25, 2023
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,102 · Updated Jun 1, 2023
- OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, … ☆6,765 · Updated this week
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,479 · Updated Jun 7, 2025
- Chinese safety prompts for evaluating and improving the safety of LLMs. ☆1,136 · Updated Feb 27, 2024
- BiLLa: A Bilingual LLaMA with Enhanced Reasoning Ability ☆416 · Updated Jun 1, 2023
- A framework for few-shot evaluation of language models. ☆11,704 · Updated Mar 5, 2026
- ☆19 · Updated May 25, 2024