kaistAI / CoT-Collection
[EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
☆224 · Updated last year
Alternatives and similar repositories for CoT-Collection:
Users interested in CoT-Collection are comparing it to the repositories listed below.
- DSIR large-scale data selection framework for language model training ☆242 · Updated 9 months ago
- ☆269 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆131 · Updated 3 months ago
- Code and data accompanying our paper on arXiv "Faithful Chain-of-Thought Reasoning". ☆157 · Updated 8 months ago
- Simple next-token-prediction for RLHF ☆222 · Updated last year
- ☆250 · Updated last year
- Data and Code for Program of Thoughts (TMLR 2023) ☆257 · Updated 8 months ago
- All available datasets for Instruction Tuning of Large Language Models ☆241 · Updated last year
- ☆172 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆208 · Updated 2 months ago
- A Survey on Data Selection for Language Models ☆203 · Updated 3 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆293 · Updated 4 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆327 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆450 · Updated 10 months ago
- Self-Alignment with Principle-Following Reward Models ☆152 · Updated 11 months ago
- ☆304 · Updated 7 months ago
- Project for the paper `Instruction Tuning for Large Language Models: A Survey` ☆156 · Updated last month
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆148 · Updated 11 months ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆249 · Updated last year
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆327 · Updated last year
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al.