lucidrains / mirasol-pytorch
Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google Deepmind, in Pytorch
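For orientation, here is a minimal conceptual sketch, in plain Pytorch, of the high-level dataflow Mirasol uses: combined audio/video latent tokens conditioning a causal text decoder through cross attention. All names here (`TinyMultimodalDecoder`, `media_latents`) are illustrative assumptions for this sketch, not this package's actual API.

```python
import torch
import torch.nn as nn

# Conceptual sketch only (assumed names; not this repo's actual API):
# Mirasol compresses time-aligned audio/video into a small set of latent
# tokens, and a causal text decoder cross-attends to those latents.

class TinyMultimodalDecoder(nn.Module):
    def __init__(self, num_text_tokens = 256, dim = 128, heads = 4):
        super().__init__()
        self.token_emb = nn.Embedding(num_text_tokens, dim)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first = True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first = True)
        self.to_logits = nn.Linear(dim, num_text_tokens)

    def forward(self, text_ids, media_latents):
        x = self.token_emb(text_ids)
        n = x.shape[1]

        # causal mask: True marks positions a query may not attend to
        causal = torch.triu(torch.ones(n, n, dtype = torch.bool), diagonal = 1)

        attn_out, _ = self.self_attn(x, x, x, attn_mask = causal)
        x = x + attn_out                                    # residual

        cross_out, _ = self.cross_attn(x, media_latents, media_latents)
        x = x + cross_out                                   # residual

        return self.to_logits(x)

media_latents = torch.randn(1, 8, 128)    # e.g. 8 combined audio/video latent tokens
text_ids = torch.randint(0, 256, (1, 16))

logits = TinyMultimodalDecoder()(text_ids, media_latents)
print(logits.shape)  # torch.Size([1, 16, 256])
```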
Related projects:
- Implementation of Zorro, Masked Multimodal Transformer, in Pytorch
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts
- Implementation of Infini-Transformer in Pytorch
- Implementation of a multimodal diffusion transformer in Pytorch
- Language Quantized AutoEncoders
- Explorations into the recently proposed Taylor Series Linear Attention
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in Pytorch
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new…
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount…
- Implementation of the general framework for AMIE, from the paper "Towards Conversational Diagnostic AI", out of Google Deepmind
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf)
- Implementation of GateLoop Transformer in Pytorch and Jax
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction"
- PyTorch implementation of models from the Zamba2 series
- M4 experiment logbook
- Randomized Positional Encodings Boost Length Generalization of Transformers
- Official code for "TOAST: Transfer Learning via Attention Steering"
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google Deepmind
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention"
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers"
- Language models scale reliably with over-training and on downstream tasks
- Implementation of Agent Attention in Pytorch
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling
- Implementation of Discrete Key / Value Bottleneck, in Pytorch
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE …