thu-coai / BPO
☆333 Updated last year
Alternatives and similar repositories for BPO
Users interested in BPO are comparing it to the repositories listed below:
- ☆234 Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆412 Updated 6 months ago
- SuperCLUE-Agent: an evaluation benchmark for core agent capabilities on native Chinese tasks ☆94 Updated 2 years ago
- ☆147 Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆167 Updated 2 years ago
- ☆164 Updated 2 years ago
- ☆280 Updated 7 months ago
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆422 Updated 2 months ago
- ☆129 Updated 2 years ago
- An instruction-tuning toolkit for large language models (with FlashAttention support) ☆178 Updated 2 years ago
- llama2 finetuning with deepspeed and lora ☆176 Updated 2 years ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆284 Updated 2 years ago
- Naive Bayes-based Context Extension ☆326 Updated last year
- ☆183 Updated 2 years ago
- ☆147 Updated last year
- Evaluating LLMs' multi-round chatting capability via assessing conversations generated by two LLM instances. ☆160 Updated 7 months ago
- ☆69 Updated 2 years ago
- ☆282 Updated last year
- FlagEval is an evaluation toolkit for AI large foundation models. ☆339 Updated 8 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆580 Updated last year
- 1st Solution For Conversational Multi-Doc QA Workshop & International Challenge @ WSDM'24 - Xiaohongshu.Inc ☆162 Updated 5 months ago
- [ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs. ☆409 Updated last year
- An analysis of the Chinese cognitive abilities of language models ☆236 Updated 2 years ago
- SOTA Math Opensource LLM ☆333 Updated 2 years ago
- A self-alignment method for role-play. Benchmark for role-play. Resources for "Large Language Models are Superpositions of All Characters… ☆209 Updated last year
- A tool for manually annotating and ranking response data in the RLHF stage of LLM training ☆258 Updated 2 years ago
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆211 Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆136 Updated last year
- RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models ☆517 Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆103 Updated 2 years ago