lucidrains / zorro-pytorch
Implementation of Zorro, Masked Multimodal Transformer, in Pytorch
⭐ 97 · Updated last year
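Zorro's central idea is a modality-aware attention mask: audio and video tokens attend only within their own modality, while a small set of fusion tokens attends to everything, so the unimodal streams stay "pure" while the fusion stream mixes them. Below is a minimal sketch of that masking scheme, assuming a flat sequence tagged with per-token modality ids; the function name and tagging convention are illustrative, not the repo's actual API.

```python
import torch

def zorro_attention_mask(modality_ids: torch.Tensor, fusion_id: int = -1) -> torch.Tensor:
    # Returns a (seq, seq) boolean mask: True where a query token (row)
    # may attend to a key token (column), in the Zorro masking style.
    is_fusion_q = (modality_ids == fusion_id).unsqueeze(-1)                   # (seq, 1)
    is_fusion_k = (modality_ids == fusion_id).unsqueeze(0)                    # (1, seq)
    same_modality = modality_ids.unsqueeze(-1) == modality_ids.unsqueeze(0)   # (seq, seq)
    # unimodal queries see only their own modality (and never fusion tokens);
    # fusion queries see every token
    return is_fusion_q | (same_modality & ~is_fusion_k)

# example: 3 audio tokens (0), 3 video tokens (1), 2 fusion tokens (-1)
ids = torch.tensor([0, 0, 0, 1, 1, 1, -1, -1])
mask = zorro_attention_mask(ids)  # (8, 8) mask to apply to attention logits
```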
Alternatives and similar repositories for zorro-pytorch
Users interested in zorro-pytorch are comparing it to the repositories listed below.
- Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google Deepmind, in Pytorch (⭐ 89, updated last year)
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in Pytorch (⭐ 103, updated last year)
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts (⭐ 120, updated 10 months ago)
- Implementation of Discrete Key / Value Bottleneck, in Pytorch (⭐ 88, updated 2 years ago)
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) (⭐ 77, updated last year)
- Implementation of Infini-Transformer in Pytorch (⭐ 111, updated 7 months ago)
- Language Quantized AutoEncoders (⭐ 109, updated 2 years ago)
- Implementation of Agent Attention in Pytorch (⭐ 91, updated last year)
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch (⭐ 101, updated 2 years ago)
- [TMLR 2022] High-Modality Multimodal Transformer (⭐ 117, updated 9 months ago)
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto (⭐ 56, updated last year)
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… (⭐ 51, updated 3 years ago)
- A practical implementation of GradNorm, Gradient Normalization for Adaptive Loss Balancing, in Pytorch (⭐ 100, updated last week)
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena (⭐ 205, updated last year)
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" (⭐ 58, updated last year)
- ResiDual: Transformer with Dual Residual Connections (https://arxiv.org/abs/2304.14802) (⭐ 95, updated 2 years ago)
- A simple cross attention that updates both the source and target in one step; see the first sketch after this list (⭐ 176, updated 3 weeks ago)
- A Domain-Agnostic Benchmark for Self-Supervised Learning (⭐ 107, updated 2 years ago)
- Implementation of the general framework for AMIE, from the paper "Towards Conversational Diagnostic AI", out of Google Deepmind (⭐ 67, updated 11 months ago)
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch; see the second sketch after this list (⭐ 313, updated 4 months ago)
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch (⭐ 119, updated 4 years ago)
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities (⭐ 78, updated 3 years ago)
- Implementation of Block Recurrent Transformer - Pytorch (⭐ 220, updated last year)
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch (⭐ 229, updated 11 months ago)
- [NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy (⭐ 70, updated last year)
- Video descriptions of research papers relating to foundation models and scaling (⭐ 31, updated 2 years ago)
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) (⭐ 33, updated 2 years ago)
- A repository to house some personal attempts to beat some state-of-the-art for medical datasets (⭐ 99, updated last year)
- Implementation of a multimodal diffusion transformer in Pytorch (⭐ 103, updated last year)
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… (⭐ 53, updated last year)
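Two of the entries above lend themselves to a quick illustration. First, the cross attention that updates both sequences in one step: as described, a single similarity matrix between the two sequences can be normalized along each axis, so each side aggregates from the other in the same pass. A hedged, single-head sketch under that reading; `bidirectional_cross_attention` is an illustrative name, and a real implementation would add projections, heads, and gating.

```python
import torch

def bidirectional_cross_attention(a: torch.Tensor, b: torch.Tensor):
    # a: (n, d), b: (m, d) -- one shared similarity matrix serves both directions
    sim = torch.einsum('n d, m d -> n m', a, b) * a.shape[-1] ** -0.5
    attn_a = sim.softmax(dim=-1)     # each a-token attends over b
    attn_b = sim.softmax(dim=0).t()  # each b-token attends over a
    return attn_a @ b, attn_b @ a    # updated a (n, d), updated b (m, d)
```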
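Second, the Soft MoE layer from "From Sparse to Soft Mixtures of Experts" (listed twice above): every token is softly dispatched to every expert slot via a softmax over tokens, experts run on the slot inputs, and outputs are combined back per token via a softmax over slots. A minimal sketch assuming a flat (n, d) token batch; the function and variable names are illustrative, not either repo's API.

```python
import torch

def soft_moe(x: torch.Tensor, phi: torch.Tensor, experts: list) -> torch.Tensor:
    # x: (n, d) tokens; phi: (d, total_slots) learned slot embeddings;
    # experts: modules each mapping (slots_per_expert, d) -> (slots_per_expert, d)
    logits = x @ phi                      # (n, total_slots)
    dispatch = logits.softmax(dim=0)      # each slot: soft mixture over tokens
    combine = logits.softmax(dim=-1)      # each token: soft mixture over slots
    slot_inputs = dispatch.t() @ x        # (total_slots, d)
    chunks = slot_inputs.chunk(len(experts), dim=0)
    slot_outputs = torch.cat([f(c) for f, c in zip(experts, chunks)], dim=0)
    return combine @ slot_outputs         # (n, d) token outputs

# usage with illustrative sizes: two small MLP experts, four slots total
n, d = 16, 32
experts = [torch.nn.Sequential(torch.nn.Linear(d, d), torch.nn.GELU()) for _ in range(2)]
phi = torch.randn(d, 4)
out = soft_moe(torch.randn(n, d), phi, experts)  # (16, 32)
```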