allenai / catwalk
This project studies the performance and robustness of language models and task-adaptation methods.
☆150 · Updated last year
Alternatives and similar repositories for catwalk
Users interested in catwalk are comparing it to the libraries listed below.
- Pretraining Efficiently on S2ORC! ☆165 · Updated 9 months ago
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets ☆222 · Updated 8 months ago
- A framework for few-shot evaluation of autoregressive language models. ☆105 · Updated 2 years ago
- Scalable training for dense retrieval models. ☆299 · Updated last month
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆131 · Updated last year
- ☆138 · Updated 6 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆146 · Updated 9 months ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated last month
- Code for the arXiv paper "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ☆59 · Updated 6 months ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆185 · Updated 3 weeks ago
- Scaling Data-Constrained Language Models ☆338 · Updated last month
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆140 · Updated 8 months ago
- Code and data accompanying the paper "TRUE: Re-evaluating Factual Consistency Evaluation". ☆81 · Updated 2 weeks ago
- ☆39 · Updated last year
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆219 · Updated last year
- DSIR large-scale data selection framework for language model training ☆257 · Updated last year
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆181 · Updated 2 years ago
- A unified benchmark for math reasoning ☆88 · Updated 2 years ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆245 · Updated last year
- This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆204 · Updated 7 months ago
- ☆72 · Updated 2 years ago
- Code and data accompanying our paper on arXiv "Faithful Chain-of-Thought Reasoning". ☆162 · Updated last year
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆163 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆233 · Updated 8 months ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆162 · Updated 2 months ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆77 · Updated 2 years ago
- ☆152 · Updated last year
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆79 · Updated 10 months ago
- ☆180 · Updated 2 years ago