tjunlp-lab / M3KE
A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark
☆102 · Updated 2 years ago
Alternatives and similar repositories for M3KE
Users interested in M3KE are comparing it to the repositories listed below.
- ☆97 · Updated last year
- ☆164 · Updated 2 years ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆135 · Updated last year
- ☆128 · Updated 2 years ago
- How to train an LLM tokenizer ☆154 · Updated 2 years ago
- Measuring Massive Multitask Chinese Understanding ☆89 · Updated last year
- ☆146 · Updated last year
- ☆147 · Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆90 · Updated last year
- Chinese instruction tuning datasets ☆140 · Updated last year
- ☆281 · Updated last year
- ☆180 · Updated 2 years ago
- Instruction tuning tool for large language models (supports FlashAttention) ☆178 · Updated last year
- Chinese large language model evaluation, round 2 ☆71 · Updated 2 years ago
- Chinese large language model evaluation, round 1 ☆110 · Updated 2 years ago
- Focused on Chinese domain-specific large language models: applying LLMs to a particular industry or field to build industry-level or company-level domain models ☆126 · Updated 8 months ago
- Analysis of the Chinese cognitive abilities of language models ☆237 · Updated 2 years ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆400 · Updated 4 months ago
- Apply RLHF directly to ChatGLM to raise or lower the probability of target outputs | Modify ChatGLM output with only RLHF ☆196 · Updated 2 years ago
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆115 · Updated 2 years ago
- LLaMA 2 finetuning with DeepSpeed and LoRA ☆175 · Updated 2 years ago
- ☆172 · Updated 2 years ago
- Chinese base large language model built through incremental pre-training on Chinese datasets ☆239 · Updated 2 years ago
- MD5 links for a Chinese book corpus ☆217 · Updated last year
- ☆313 · Updated 2 years ago
- Text deduplication ☆76 · Updated last year
- Multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆418 · Updated 3 weeks ago
- LAiW: A Chinese Legal Large Language Models Benchmark ☆85 · Updated last year
- A tool for manual ranking and annotation of response data in the RLHF stage of large-model training ☆254 · Updated 2 years ago
- A HuggingFace-based tool for training and testing large language models. Supports web UI and terminal inference for various models, low-parameter and full-parameter training (pre-training, SFT, RM, PPO, DPO), as well as model merging and quantization ☆220 · Updated last year