allenai / catwalk
This project studies the performance and robustness of language models and task-adaptation methods.
☆149 · Updated last year
Alternatives and similar repositories for catwalk
Users interested in catwalk are comparing it to the libraries listed below.
- A framework for few-shot evaluation of autoregressive language models. ☆104 · Updated 2 years ago
- Pretraining Efficiently on S2ORC! ☆164 · Updated 8 months ago
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets ☆221 · Updated 7 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆144 · Updated 7 months ago
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆129 · Updated last year
- ☆38 · Updated last year
- DSIR large-scale data selection framework for language model training ☆251 · Updated last year
- Scalable training for dense retrieval models. ☆298 · Updated last week
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- Code for the arXiv paper "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ☆59 · Updated 4 months ago
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆162 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated last month
- A unified benchmark for math reasoning ☆88 · Updated 2 years ago
- ☆72 · Updated 2 years ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆180 · Updated 2 years ago
- Scaling Data-Constrained Language Models ☆335 · Updated 9 months ago
- ☆134 · Updated 5 months ago
- ☆100 · Updated 2 years ago
- Repo for the paper "Shepherd: A Critic for Language Model Generation" ☆219 · Updated last year
- SILO Language Models code repository ☆81 · Updated last year
- ☆97 · Updated 2 years ago
- Code for the ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆94 · Updated 2 years ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 9 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆201 · Updated last week
- ☆180 · Updated 2 years ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆191 · Updated 10 months ago
- ☆159 · Updated 2 years ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆137 · Updated 7 months ago
- Token-level Reference-free Hallucination Detection ☆94 · Updated last year
- Pipeline for pulling and processing online language model pretraining data from the web ☆178 · Updated last year