yanxiyue / vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
☆16 · Updated last year
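For context, vLLM is normally driven through its Python API. The minimal sketch below assumes the upstream vllm package and an example Hugging Face model; this fork may expose a different interface.

```python
# Minimal offline-inference sketch using the upstream vLLM Python API
# (pip install vllm); the model name is only an example and this fork
# may differ from upstream.
from vllm import LLM, SamplingParams

# Load an example model; vLLM manages KV-cache memory with PagedAttention.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batched generation: prompts are scheduled together for high throughput.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```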
Related projects
Alternatives and complementary repositories for vllm
- Measuring Massive Multitask Chinese Understanding ☆87 · Updated 8 months ago
- ☆62 · Updated last year
- ☆157 · Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆109 · Updated 5 months ago
- ☆173 · Updated last year
- ☆93 · Updated 8 months ago
- T2Ranking: A large-scale Chinese benchmark for passage ranking. ☆152 · Updated last year
- ☆129 · Updated 4 months ago
- Chinese Large Language Model Evaluation, Phase 1 ☆106 · Updated last year
- Chinese Large Language Model Evaluation, Phase 2 ☆70 · Updated last year
- ☆120 · Updated 7 months ago
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆107 · Updated last year
- ☆158 · Updated last year
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆44 · Updated 7 months ago
- ☆91 · Updated 11 months ago
- Chinese instruction tuning datasets ☆118 · Updated 7 months ago
- ☆125 · Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆67 · Updated last week
- ☆82 · Updated last year
- TencentLLMEval is a comprehensive and extensive benchmark for human evaluation of large models that includes task trees, standards, … ☆38 · Updated 3 months ago
- Python ROUGE Score Implementation for Chinese Language Task (official rouge score) ☆82 · Updated 4 months ago
- Naive Bayes-based Context Extension ☆313 · Updated last year
- OPD: Chinese Open-Domain Pre-trained Dialogue Model ☆74 · Updated last year
- Text deduplication ☆67 · Updated 6 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆218 · Updated last year
- Use RLHF directly on ChatGLM to raise or lower the probability of target outputs | Modify ChatGLM output with only RLHF ☆189 · Updated last year
- Source code for the ACL 2023 paper "Decoder Tuning: Efficient Language Understanding as Decoding" ☆48 · Updated last year
- NTK-scaled version of ALiBi position encoding in the Transformer ☆67 · Updated last year
- Zero-shot NLU & NLG based on the mengzi-t5-base-mt pretrained model ☆75 · Updated 2 years ago
- How to train an LLM tokenizer ☆130 · Updated last year