thu-coai / BPO
☆331 · Updated last year
Alternatives and similar repositories for BPO
Users that are interested in BPO are comparing it to the libraries listed below
- ☆147 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆410 · Updated 5 months ago
- ☆234 · Updated last year
- SuperCLUE-Agent: a benchmark for evaluating the core capabilities of LLM agents on native Chinese tasks ☆94 · Updated 2 years ago
- ☆164 · Updated 2 years ago
- ☆129 · Updated 2 years ago
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆423 · Updated last month
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆167 · Updated 2 years ago
- ☆278 · Updated 6 months ago
- SOTA open-source math LLM ☆332 · Updated 2 years ago
- FlagEval is an evaluation toolkit for large AI foundation models. ☆339 · Updated 7 months ago
- Evaluating LLMs' multi-round chatting capability by assessing conversations generated by two LLM instances. ☆159 · Updated 7 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆284 · Updated 2 years ago
- Naive Bayes-based Context Extension ☆325 · Updated last year
- ☆181 · Updated 2 years ago
- ☆146 · Updated last year
- ☆282 · Updated last year
- LLaMA 2 fine-tuning with DeepSpeed and LoRA ☆175 · Updated 2 years ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆579 · Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆103 · Updated 2 years ago
- ☆98 · Updated 2 years ago
- An instruction-tuning toolkit for large language models (with FlashAttention support) ☆178 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆253 · Updated last year
- An intro to retrieval-augmented large language models ☆304 · Updated 2 years ago
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆301 · Updated last year
- [ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs. ☆409 · Updated 11 months ago
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆212 · Updated last year
- Analysis of the Chinese cognitive abilities of language models ☆236 · Updated 2 years ago
- The official codes for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆266 · Updated last year
- Imitate OpenAI with Local Models ☆89 · Updated last year