FutureComputing4AI / Hrrformer
Hrrformer: A Neuro-symbolic Self-attention Model (ICML 2023)
☆47 · Updated last year
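For readers new to the model: Hrrformer builds its attention on Holographic Reduced Representations (HRRs), in which vectors are bound with circular convolution and approximately unbound with circular correlation. Below is a minimal, illustrative sketch of those two primitives in NumPy; the function names, vector dimension, and initialization are assumptions made for this example, not code taken from the repository.

```python
import numpy as np

def bind(x, y):
    # HRR binding: circular convolution, computed via the FFT.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def unbind(b, y):
    # Approximate unbinding: bind with the involution of y,
    # y_inv[i] = y[-i mod d], which implements circular correlation.
    y_inv = np.roll(y[::-1], 1)
    return bind(b, y_inv)

d = 256                                    # assumed vector dimension
rng = np.random.default_rng(0)
# HRR vectors are typically drawn i.i.d. from N(0, 1/d).
x = rng.normal(0.0, 1.0 / np.sqrt(d), d)
y = rng.normal(0.0, 1.0 / np.sqrt(d), d)

b = bind(x, y)         # composite that can be superposed with others
x_hat = unbind(b, y)   # noisy reconstruction of x
cos = x_hat @ x / (np.linalg.norm(x_hat) * np.linalg.norm(x))
print(cos)             # well above the ~1/sqrt(d) chance level
```

Because both operations are FFT-based, binding and unbinding cost O(d log d) per vector pair, which is what makes HRR-style mixing attractive as a cheap alternative to pairwise attention scores.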
Related projects
Alternatives and complementary repositories for Hrrformer
- Holographic Reduced Representations ☆21 · Updated 3 weeks ago
- Adding new tasks to T0 without catastrophic forgetting ☆30 · Updated 2 years ago
- Evaluation of neuro-symbolic engines ☆33 · Updated 3 months ago
- [ICML 2024 NGSM workshop] Associative Recurrent Memory Transformer: implementation and training/evaluation scripts ☆31 · Updated this week
- Google Research ☆46 · Updated 2 years ago
- Official code for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆18 · Updated 2 months ago
- Parallelizing non-linear sequential models over the sequence length ☆45 · Updated 3 weeks ago
- Few-shot Learning with Auxiliary Data ☆26 · Updated 11 months ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆35 · Updated 11 months ago
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆57 · Updated last year
- [ICML 2023] Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning ☆39 · Updated last year
- Stick-breaking attention ☆34 · Updated 2 weeks ago
- HGRN2: Gated Linear RNNs with State Expansion ☆49 · Updated 3 months ago
- Universal Neurons in GPT2 Language Models ☆27 · Updated 5 months ago
- An annotated implementation of the Hyena Hierarchy paper ☆31 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆44 · Updated last year
- Minimum Description Length probing for neural network representations ☆16 · Updated this week