lucidrains / esbn-transformer
An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols
☆14 · Updated 3 years ago
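For context, the ESBN mechanism this repository tries to combine with a Transformer is, at heart, a key-value binding memory: abstract symbol vectors (keys) are bound to perceptual embeddings (values) and later retrieved by similarity to a query embedding. The snippet below is a minimal, hypothetical PyTorch sketch of that binding step only; it is not the API of esbn-transformer, and all names (`BindingMemory`, `write`, `read`, the dimensions) are assumptions made for illustration.

```python
# Hypothetical sketch of ESBN-style key-value binding (NOT the API of this repo).
# Keys are abstract "symbols", values are perceptual embeddings; retrieval mixes
# stored symbols according to how well their bound embeddings match the query.
import torch
import torch.nn.functional as F


class BindingMemory:
    def __init__(self):
        self.keys = []    # abstract symbol vectors
        self.values = []  # perceptual embeddings they are bound to

    def write(self, key: torch.Tensor, value: torch.Tensor) -> None:
        # Bind one symbol (key) to one embedding (value).
        self.keys.append(key)
        self.values.append(value)

    def read(self, query: torch.Tensor) -> torch.Tensor:
        # Soft retrieval: weight each stored symbol by the similarity
        # between its bound embedding and the query embedding.
        keys = torch.stack(self.keys)      # (n, d_key)
        values = torch.stack(self.values)  # (n, d_emb)
        weights = F.softmax(values @ query, dim=0)
        return weights @ keys


# Usage: bind two symbols to two embeddings, then query with a noisy copy of one.
d_emb, d_key = 16, 8
memory = BindingMemory()
emb_a, emb_b = torch.randn(d_emb), torch.randn(d_emb)
sym_a, sym_b = torch.randn(d_key), torch.randn(d_key)
memory.write(sym_a, emb_a)
memory.write(sym_b, emb_b)
retrieved = memory.read(emb_a + 0.1 * torch.randn(d_emb))  # close to sym_a
```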
Alternatives and similar repositories for esbn-transformer:
Users interested in esbn-transformer are comparing it to the libraries listed below.
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting along the sequence dimension for token mixing (see the sketch after this list) ☆47 · Updated 2 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation ☆49 · Updated 3 years ago
- Usable implementation of the Emergent Symbol Binding Network (ESBN), in PyTorch ☆23 · Updated 4 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated 2 years ago
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 2 years ago
- ☆21 · Updated last year
- Minimum Description Length probing for neural network representations ☆18 · Updated last week
- Unofficial PyTorch implementation of https://arxiv.org/abs/2112.05682 for linear memory cost in attention ☆12 · Updated 3 years ago
- ☆29 · Updated 2 years ago
- HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation [ACL 2023] ☆14 · Updated last year
- ☆12 · Updated 2 years ago
- Exploring Few-Shot Adaptation of Language Models with Tables ☆23 · Updated 2 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆45 · Updated 3 years ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆35 · Updated last year
- ☆18 · Updated 7 months ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆17 · Updated 10 months ago
- ☆32 · Updated last year
- Embedding Recycling for Language models ☆38 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆33 · Updated 4 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 2 years ago
- Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification ☆11 · Updated last year
- A GPT, made only of MLPs, in Jax ☆57 · Updated 3 years ago
- ☆16 · Updated last year
- 📰 Computing the information content of trained neural networks ☆21 · Updated 3 years ago
- ☆11 · Updated 2 years ago
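The Token Shift GPT entry above relies on a simple mixing operation: instead of attention, feature channels are shifted back along the sequence dimension so that each position reads features computed at earlier positions. Below is a minimal, hypothetical PyTorch sketch of that token-shift idea, assuming channels are split into segments shifted by increasing offsets; it is not the code from that repository, and the names `token_shift` and `segments` are illustrative.

```python
# Hypothetical sketch of token-shift mixing (not Token Shift GPT's actual code):
# split channels into segments and shift segment i back by i positions along the
# sequence, so a position mixes in features from earlier positions causally.
import torch
import torch.nn.functional as F


def token_shift(x: torch.Tensor, segments: int = 4) -> torch.Tensor:
    # x: (batch, seq_len, dim)
    chunks = x.chunk(segments, dim=-1)
    shifted = [
        F.pad(chunk, (0, 0, i, 0))[:, : x.shape[1]]  # pad the front of the sequence, crop to length
        for i, chunk in enumerate(chunks)
    ]
    return torch.cat(shifted, dim=-1)


x = torch.randn(2, 10, 64)
print(token_shift(x).shape)  # torch.Size([2, 10, 64])
```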