gao-xiao-bai / JsonTuning
JsonTuning: Towards Generalizable, Robust, and Controllable Instruction Tuning
☆10 · Updated 10 months ago
Alternatives and similar repositories for JsonTuning
Users interested in JsonTuning are comparing it to the repositories listed below.
- Code for our EMNLP-2023 paper: "Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks" ☆24 · Updated last year
- Towards Systematic Measurement for Long Text Quality ☆37 · Updated last year
- Repo for Llatrieval ☆31 · Updated last year
- ☆17 · Updated 6 months ago
- CFBench: A Comprehensive Constraints-Following Benchmark for LLMs ☆42 · Updated last year
- Repository for Decomposed Prompting ☆95 · Updated last year
- ☆88 · Updated 2 years ago
- Resources for our ACL 2023 paper: Distilling Script Knowledge from Large Language Models for Constrained Language Planning ☆36 · Updated 2 years ago
- Implementation of ICML 23 Paper: Specializing Smaller Language Models towards Multi-Step Reasoning ☆131 · Updated 2 years ago
- ☆45 · Updated last year
- ☆104 · Updated 2 months ago
- ☆56 · Updated last year
- Paper list of "The Life Cycle of Knowledge in Big Language Models: A Survey" ☆59 · Updated 2 years ago
- ☆53 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation ☆139 · Updated 4 months ago
- SysBench: Can Large Language Models Follow System Messages? ☆35 · Updated last year
- Self-adaptive in-context learning ☆45 · Updated 2 years ago
- 🩺 A collection of ChatGPT evaluation reports on various benchmarks ☆50 · Updated 2 years ago
- ☆33 · Updated last year
- https://acl2023-retrieval-lm.github.io/ ☆155 · Updated last year
- Technical Report: Is ChatGPT a Good NLG Evaluator? A Preliminary Study ☆43 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- [EMNLP 2022] Training Language Models with Memory Augmentation (https://arxiv.org/abs/2205.12674) ☆196 · Updated 2 years ago
- ☆59 · Updated 2 years ago
- ACL'23: Unified Demonstration Retriever for In-Context Learning ☆37 · Updated last year
- [ICLR24] The open-source repo of THU-KEG's KoLA benchmark ☆51 · Updated last year
- Code and data for the paper "Context-faithful Prompting for Large Language Models" ☆41 · Updated 2 years ago
- Lightweight tool to identify data contamination in LLM evaluation ☆52 · Updated last year
- [NAACL 2024] Making Language Models Better Tool Learners with Execution Feedback ☆42 · Updated last year
- URS Benchmark: Evaluating LLMs on User Reported Scenarios ☆30 · Updated 3 months ago