allenai / bff
☆38 · Updated last year
Alternatives and similar repositories for bff
Users interested in bff are comparing it to the repositories listed below.
- ☆72 · Updated 2 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆69 · Updated last year
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆57 · Updated 2 years ago
- ☆72 · Updated last year
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆50 · Updated last month
- Simple and scalable tools for data-driven pretraining data selection. ☆24 · Updated 3 months ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆76 · Updated 2 years ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆21 · Updated 9 months ago
- ☆51 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆75 · Updated 9 months ago
- ☆44 · Updated 6 months ago
- ☆29 · Updated 10 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆48 · Updated last year
- ☆34 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 8 months ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆46 · Updated last year
- Python package for serving a local search engine. One command to download and serve a datastore---that's it 😎. ☆14 · Updated 3 weeks ago
- ☆50 · Updated last year
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated last month
- ☆11 · Updated 11 months ago
- ☆24 · Updated 2 years ago
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Code repository for the c-BTM paper ☆106 · Updated last year
- ☆48 · Updated last year
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆38 · Updated 2 years ago
- Code for Zero-Shot Tokenizer Transfer ☆128 · Updated 4 months ago
- Few-shot Learning with Auxiliary Data ☆27 · Updated last year
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆47 · Updated last year
- This project studies the performance and robustness of language models and task-adaptation methods. ☆150 · Updated last year