xsysigma / TencentLLMEval
TencentLLMEval is a comprehensive benchmark for the human evaluation of large language models, covering task trees, evaluation standards, data verification methods, and more.
☆39 · Updated 6 months ago
Alternatives and similar repositories for TencentLLMEval
Users interested in TencentLLMEval are comparing it to the libraries listed below.
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆210 · Updated last year
- MD5 links for a Chinese book corpus ☆217 · Updated last year
- Naive Bayes-based Context Extension ☆326 · Updated 9 months ago
- Chinese instruction-tuning datasets ☆135 · Updated last year
- Measuring Massive Multitask Chinese Understanding ☆88 · Updated last year
- ☆163 · Updated 2 years ago
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆101 · Updated 2 years ago
- ☆98 · Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆134 · Updated last year
- ☆172 · Updated 2 years ago
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆114 · Updated 2 years ago
- A framework for cleaning Chinese dialog data ☆273 · Updated 4 years ago
- ☆281 · Updated last year
- T2Ranking: a large-scale Chinese benchmark for passage ranking ☆160 · Updated 2 years ago
- ☆84 · Updated 2 years ago
- NTK-scaled version of ALiBi position encoding in Transformers ☆69 · Updated 2 years ago
- Chinese large language model evaluation, Phase 1 ☆110 · Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆89 · Updated 10 months ago
- ☆96 · Updated last year
- OPD: Chinese Open-Domain Pre-trained Dialogue Model ☆75 · Updated 2 years ago
- ☆460 · Updated last year
- ☆67 · Updated 2 years ago
- ☆59 · Updated 2 years ago
- ☆127 · Updated 2 years ago
- EVA: Large-scale Pre-trained Chit-Chat Models ☆307 · Updated 2 years ago
- Efficient, low-resource, distributed transformer implementation based on BMTrain ☆262 · Updated last year
- Zero-shot NLU & NLG based on the mengzi-t5-base-mt pretrained model ☆75 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated 2 years ago
- Source code for the ACL 2023 paper "Decoder Tuning: Efficient Language Understanding as Decoding" ☆51 · Updated 2 years ago
- ☆308 · Updated 2 years ago