xsysigma / TencentLLMEval
TencentLLMEval is a comprehensive benchmark for human evaluation of large language models, covering task trees, evaluation standards, data-verification methods, and more.
☆38 · Updated 3 months ago
Alternatives and similar repositories for TencentLLMEval
Users interested in TencentLLMEval are comparing it to the libraries listed below.
- Naive Bayes-based Context Extension ☆326 · Updated 7 months ago
- ☆172 · Updated 2 years ago
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆209 · Updated last year
- Measuring Massive Multitask Chinese Understanding ☆87 · Updated last year
- ☆59 · Updated last year
- MD5 links for a Chinese book corpus ☆216 · Updated last year
- NTK-scaled version of ALiBi position encoding in Transformer ☆68 · Updated last year
- Chinese instruction-tuning datasets ☆132 · Updated last year
- Chinese large language model evaluation, phase 1 ☆109 · Updated last year
- ☆162 · Updated 2 years ago
- EVA: Large-scale Pre-trained Chit-Chat Models ☆307 · Updated 2 years ago
- ☆83 · Updated last year
- ☆281 · Updated last year
- A framework for cleaning Chinese dialog data ☆272 · Updated 4 years ago
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆85 · Updated 8 months ago
- OPD: Chinese Open-Domain Pre-trained Dialogue Model ☆75 · Updated 2 years ago
- The repository of the Ape210K dataset and baseline models ☆194 · Updated 5 years ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆130 · Updated last year
- T2Ranking: A large-scale Chinese benchmark for passage ranking ☆159 · Updated 2 years ago
- ☆128 · Updated 2 years ago
- Finetune CPM-2 ☆82 · Updated 2 years ago
- ☆96 · Updated last year
- Zero-shot learning evaluation benchmark, Chinese version ☆56 · Updated 4 years ago
- NLU & NLG (zero-shot) based on the mengzi-t5-base-mt pretrained model ☆74 · Updated 2 years ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆263 · Updated last year
- Efficient, Low-Resource, Distributed transformer implementation based on BMTrain ☆256 · Updated last year
- ☆97 · Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆102 · Updated last year
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆114 · Updated 2 years ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆223 · Updated last year