xsysigma / TencentLLMEval
TencentLLMEval is a comprehensive benchmark for the human evaluation of large language models, covering task trees, evaluation standards, data verification methods, and more.
☆38 · Updated 5 months ago
Alternatives and similar repositories for TencentLLMEval:
Users interested in TencentLLMEval are comparing it to the libraries listed below:
- NTK-scaled version of ALiBi position encoding in Transformer. ☆67 · Updated last year
- OPD: Chinese Open-Domain Pre-trained Dialogue Model ☆74 · Updated last year
- MEASURING MASSIVE MULTITASK CHINESE UNDERSTANDING ☆88 · Updated 10 months ago
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆74 · Updated 3 months ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆116 · Updated 8 months ago
- Chinese Large Language Model Evaluation, Round 2 ☆70 · Updated last year
- ☆53 · Updated 2 years ago
- SuperCLUE-Math6: exploring a new generation of native-Chinese multi-turn, multi-step mathematical reasoning datasets ☆52 · Updated last year
- T2Ranking: A large-scale Chinese benchmark for passage ranking. ☆153 · Updated last year
- ☆96 · Updated 10 months ago
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆95 · Updated 2 months ago
- ☆173 · Updated last year
- ☆93 · Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆100 · Updated last year
- Chinese instruction-tuning datasets ☆125 · Updated 10 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆61 · Updated this week
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆46 · Updated 9 months ago
- ☆15 · Updated 11 months ago
- ☆125 · Updated last year
- Implementation of Dynamic NTK-ALiBi for Baichuan: longer-context inference without fine-tuning ☆47 · Updated last year
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆203 · Updated last year
- ☆62 · Updated last year
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆112 · Updated last year
- Complete training code for an open-source high-performance Llama model, covering the full pipeline from pre-training to RLHF. ☆64 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆70 · Updated last year
- ☆129 · Updated 10 months ago
- ☆159 · Updated last year
- MD5 links to a Chinese book corpus ☆213 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆238 · Updated last year
- ☆84 · Updated last year