FutureComputing4AI / Hrrformer
Hrrformer: A Neuro-symbolic Self-attention Model (ICML23)
☆52 · Updated last year
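As its name suggests, Hrrformer builds its self-attention variant on Holographic Reduced Representations (HRR, the first related repository below), which bind two vectors with circular convolution and approximately recover one of them with circular correlation. A minimal NumPy sketch of these two primitives (generic HRR background, not code from this repository):

```python
import numpy as np

def bind(a, b):
    """HRR binding: circular convolution, computed via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(s, b):
    """Approximate unbinding: circular correlation with b."""
    return np.real(np.fft.ifft(np.fft.fft(s) * np.conj(np.fft.fft(b))))

rng = np.random.default_rng(0)
n = 1024
# HRR vectors are typically drawn i.i.d. N(0, 1/n) so their norms are ~1.
a = rng.normal(0.0, 1.0 / np.sqrt(n), n)
b = rng.normal(0.0, 1.0 / np.sqrt(n), n)

s = bind(a, b)        # composite trace storing the pair (a, b)
a_hat = unbind(s, b)  # noisy reconstruction of a

cos = a @ a_hat / (np.linalg.norm(a) * np.linalg.norm(a_hat))
print(f"cosine(a, a_hat) = {cos:.2f}")  # well above chance (about 0.7 for
                                        # Gaussian vectors); unitary vectors or
                                        # a cleanup memory sharpen the recovery
```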
Alternatives and similar repositories for Hrrformer:
Users interested in Hrrformer are comparing it to the repositories listed below.
- Holographic Reduced Representations ☆25 · Updated 4 months ago
- Few-shot Learning with Auxiliary Data ☆27 · Updated last year
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆36 · Updated last year
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆22 · Updated 6 months ago
- Official Code Repository for the paper "Key-value memory in the brain" ☆24 · Updated last month
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated 9 months ago
- ☆33 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆26 · Updated 11 months ago
- The Energy Transformer block, in JAX ☆56 · Updated last year
- Efficient PScan implementation in PyTorch ☆16 · Updated last year
- Official implementation of the transformer (TF) architecture suggested in a paper entitled "Looped Transformers as Programmable Computers" ☆24 · Updated last year
- Experiments on the impact of depth in transformers and SSMs. ☆23 · Updated 4 months ago
- Xmixers: A collection of SOTA efficient token/channel mixers ☆11 · Updated 4 months ago
- 🧮 Algebraic Positional Encodings. ☆11 · Updated 2 months ago
- ☆18 · Updated 9 months ago
- Official repository for the paper "Exploring the Promise and Limits of Real-Time Recurrent Learning" (ICLR 2024) ☆11 · Updated last year
- ☆52 · Updated 5 months ago
- ☆47 · Updated last year
- This repository contains some of the code used in the paper "Training Language Models with Language Feedback at Scale" ☆27 · Updated last year
- ☆45 · Updated last year
- ☆30 · Updated 5 months ago
- Stick-breaking attention ☆48 · Updated last week
- ☆16 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆53 · Updated 7 months ago
- ☆27 · Updated 3 years ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆33 · Updated 3 years ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols ☆15 · Updated 3 years ago
- Google Research ☆46 · Updated 2 years ago
- ☆13 · Updated 3 years ago