glassroom / heinsen_routing
Reference implementation of "An Algorithm for Routing Vectors in Sequences" (Heinsen, 2022) and "An Algorithm for Routing Capsules in All Domains" (Heinsen, 2019), for composing deep neural networks.
☆172 · Updated 2 years ago
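For context when comparing the repositories below, the general family of techniques this repository belongs to, routing-by-agreement, can be sketched in a few lines. This is an illustrative toy based on the classic dynamic-routing scheme, not heinsen_routing's actual algorithm or API; the function name and tensor shapes are assumptions:

```python
import numpy as np

def route_by_agreement(votes, n_iters=3):
    """Toy routing-by-agreement (NOT heinsen_routing's actual algorithm).

    votes: (n_in, n_out, d) array -- each of n_in input vectors casts a
    d-dimensional "vote" for each of n_out output slots. Routing weights
    are refined by rewarding votes that agree with the current outputs.
    """
    n_in, n_out, _ = votes.shape
    logits = np.zeros((n_in, n_out))
    for _ in range(n_iters):
        # Each input distributes its credit over output slots (softmax).
        w = np.exp(logits - logits.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # Outputs are credit-weighted means of the votes.
        out = (w[..., None] * votes).sum(axis=0) / w.sum(axis=0)[..., None]
        # Agreement (dot product) between each vote and the current output
        # raises that vote's credit for the next iteration.
        logits += (votes * out[None]).sum(axis=-1)
    return out  # (n_out, d)
```

After a few iterations, inputs whose votes cluster together dominate the output slots they agree on; several repositories in the list below (e.g. the CoLT5 conditional-routing implementation) play variations on this theme.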
Alternatives and similar repositories for heinsen_routing
Users interested in heinsen_routing are comparing it to the libraries listed below.
- Official repository for the paper "A Modern Self-Referential Weight Matrix That Learns to Modify Itself" (ICML 2022 & NeurIPS 2021 Deep R… ☆171 · Updated 4 months ago
- Code implementing "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023) ☆95 · Updated 11 months ago
- ☆254 · Updated 2 years ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆60 · Updated 3 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- Fast Text Classification with Compressors dictionary ☆149 · Updated 2 years ago
- A repository for log-time feedforward networks ☆222 · Updated last year
- Amos optimizer with JEstimator lib. ☆82 · Updated last year
- Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate … ☆634 · Updated 2 years ago
- Automatic gradient descent ☆215 · Updated 2 years ago
- ☆144 · Updated 2 years ago
- Experiments for efforts to train a new and improved t5 ☆75 · Updated last year
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆222 · Updated last year
- Language Modeling with the H3 State Space Model ☆518 · Updated 2 years ago
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER).… ☆121 · Updated 2 years ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆200 · Updated 2 years ago
- Official repository of "Pretraining Without Attention" (BiGS), the first model to achieve BERT-level transfer learning on the GLUE … ☆114 · Updated last year
- [NAACL 2025] Official Implementation of "HMT: Hierarchical Memory Transformer for Long Context Language Processing" ☆75 · Updated 4 months ago
- A playground to make it easy to try crazy things ☆33 · Updated 3 weeks ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆230 · Updated last year
- An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers" ☆320 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆188 · Updated 3 years ago
- Alice in Wonderland code base for experiments and raw experiments data ☆131 · Updated last month
- A tool to analyze and debug neural networks in pytorch. Use a GUI to traverse the computation graph and view the data from many different… ☆291 · Updated 11 months ago
- Implements the Tsetlin Machine, Coalesced Tsetlin Machine, Convolutional Tsetlin Machine, Regression Tsetlin Machine, and Weighted Tsetli… ☆158 · Updated 2 months ago
- A library for incremental loading of large PyTorch checkpoints ☆56 · Updated 2 years ago
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes ☆242 · Updated 2 years ago
- Python Research Framework ☆106 · Updated 3 years ago
- ☆40 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
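One entry above, "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023), concerns the linear recurrence x_t = a_t·x_{t-1} + b_t. A minimal sketch of why that recurrence parallelizes is shown below. This is a naive closed form built from cumulative products and sums, not the paper's implementation; the paper performs the same computation in log space for numerical stability, and the function names here are assumptions:

```python
import numpy as np

def scan_sequential(a, b, x0=0.0):
    # Reference loop: x_t = a_t * x_{t-1} + b_t, one step at a time.
    x, xs = x0, []
    for a_t, b_t in zip(a, b):
        x = a_t * x + b_t
        xs.append(x)
    return np.array(xs)

def scan_closed_form(a, b, x0=0.0):
    # Unrolling the recurrence gives x_t = A_t * (x0 + sum_{i<=t} b_i / A_i),
    # where A_t = prod_{i<=t} a_i. Both A_t and the running sum are
    # cumulative reductions, so each can be computed with a parallel scan
    # instead of a sequential loop.
    # (Naive form: dividing by A_t can under/overflow for long sequences;
    # Heinsen 2023 evaluates the same expression stably in log space.)
    A = np.cumprod(a)
    return A * (x0 + np.cumsum(b / A))
```

Recurrences of this shape appear throughout the list above, e.g. in state-space models such as H3, which is why a parallel formulation matters for training speed.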