srush / transformers-bet
☆12 · Updated 3 years ago
Alternatives and similar repositories for transformers-bet
Users interested in transformers-bet are comparing it to the libraries listed below.
- Exploring Few-Shot Adaptation of Language Models with Tables ☆24 · Updated 3 years ago
- This repo contains code to reproduce some of the results presented in the paper "SentenceMIM: A Latent Variable Language Model" ☆28 · Updated 3 years ago
- ☆41 · Updated 4 years ago
- A Python library for highly configurable transformers - easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- Code for paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆58 · Updated 2 years ago
- Suite of 500 procedurally-generated NLP tasks to study language model adaptability ☆21 · Updated 3 years ago
- An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols ☆16 · Updated 4 years ago
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … ☆55 · Updated 4 years ago
- ☆46 · Updated 3 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch ☆76 · Updated 4 years ago
- Variable-order CRFs with structure learning ☆16 · Updated last year
- Learning to Model Editing Processes ☆26 · Updated 3 weeks ago
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a… ☆46 · Updated 3 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- Implementation of the paper "Sentence Bottleneck Autoencoders from Transformer Language Models" ☆17 · Updated 3 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆46 · Updated 4 years ago
- Factorization of the neural parameter space for zero-shot multi-lingual and multi-task transfer ☆39 · Updated 4 years ago
- Rationales for Sequential Predictions ☆40 · Updated 3 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated 2 years ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Updated 2 years ago
- ☆29 · Updated 3 years ago
- ☆44 · Updated 4 years ago
- Implementation for "Rational Recurrences", Peng et al., EMNLP 2018. ☆28 · Updated 3 years ago
- Helper scripts and notes that were used while porting various NLP models ☆47 · Updated 3 years ago
- AAAI 2022 Paper: Bet even Beth Harmon couldn't learn chess like that :) ☆38 · Updated 4 years ago
- ☆38 · Updated 4 years ago
- Standalone pre-training recipe with JAX+Flax ☆32 · Updated 2 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
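One entry above (Token Shift GPT) describes its core mechanism directly: mixing information along the sequence by shifting feature channels, with no attention. A minimal, hypothetical sketch of that token-shift idea — not the repo's actual implementation, and the function name and split point are assumptions for illustration:

```python
def token_shift(x, half=None):
    """Token-shift mixing sketch: for each token, replace the first `half`
    feature channels with the previous token's channels, so information
    flows one step along the sequence without any attention.

    x: list of token vectors (each a list of floats), shape (seq_len, dim).
    """
    dim = len(x[0])
    half = dim // 2 if half is None else half
    out = []
    for t, vec in enumerate(x):
        # First token has no predecessor; shift in zeros instead.
        prev = x[t - 1] if t > 0 else [0.0] * dim
        out.append(prev[:half] + vec[half:])  # shifted half + unshifted half
    return out

# Toy sequence of 3 tokens with 4 channels each.
x = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
y = token_shift(x)
```

Because the shift is parameter-free, it is typically composed with per-token feed-forward layers, which then do all the learning.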