thu-coai / BPO
☆324 · Updated last year
Alternatives and similar repositories for BPO
Users who are interested in BPO are comparing it to the repositories listed below
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆383 · Updated last month
- ☆145 · Updated last year
- ☆163 · Updated 2 years ago
- ☆232 · Updated last year
- ☆260 · Updated 2 months ago
- ☆175 · Updated last year
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆407 · Updated last year
- An instruction-tuning tool for large language models (supports FlashAttention) ☆177 · Updated last year
- A tool for manually annotating and ranking response data in the RLHF stage ☆253 · Updated 2 years ago
- SuperCLUE-Agent: a benchmark for evaluating core agent capabilities on native Chinese tasks ☆89 · Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆166 · Updated last year
- ☆128 · Updated 2 years ago
- An analysis of the Chinese cognitive abilities of language models ☆237 · Updated last year
- SOTA open-source math LLM ☆334 · Updated last year
- ☆145 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆268 · Updated 2 years ago
- ☆281 · Updated last year
- Naive Bayes-based Context Extension ☆326 · Updated 8 months ago
- ☆67 · Updated 2 years ago
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆102 · Updated 2 years ago
- 1st Solution for the Conversational Multi-Doc QA Workshop & International Challenge @ WSDM'24 - Xiaohongshu.Inc ☆161 · Updated 3 weeks ago
- FlagEval is an evaluation toolkit for large AI foundation models. ☆339 · Updated 3 months ago
- LLaMA-2 fine-tuning with DeepSpeed and LoRA ☆176 · Updated 2 years ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆566 · Updated 8 months ago
- ☆96 · Updated last year
- Evaluating LLMs' multi-round chat capability by assessing conversations generated by two LLM instances. ☆157 · Updated 3 months ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆132 · Updated last year
- [ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs. ☆406 · Updated 7 months ago
- ☆172 · Updated 2 years ago
- The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆264 · Updated last year