lucidrains / MaMMUT-pytorch
Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in Pytorch
⭐ 97 · Updated last year
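For orientation, here is a minimal, generic sketch of the vision-encoder / text-decoder pattern that MaMMUT follows, written in plain PyTorch. This is not the mammut-pytorch API: the class name, constructor arguments, and forward signature below are illustrative assumptions only; see the repository itself for the actual interface.

```python
# Illustrative sketch only (NOT the mammut-pytorch API): a vision encoder producing image
# tokens that a causal text decoder cross-attends to. All names and hyperparameters here
# are assumptions for demonstration.
import torch
import torch.nn as nn

class VisionEncoderTextDecoder(nn.Module):
    def __init__(self, vocab_size=32000, dim=512, image_size=224, patch_size=16,
                 enc_depth=6, dec_depth=6, heads=8):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # vision encoder: patchify with a strided conv, then transformer encoder layers
        self.to_patches = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.patch_pos = nn.Parameter(torch.randn(1, num_patches, dim) * 0.02)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True), enc_depth)
        # text decoder: token embedding + transformer decoder cross-attending to image tokens
        self.token_emb = nn.Embedding(vocab_size, dim)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, heads, dim * 4, batch_first=True), dec_depth)
        self.to_logits = nn.Linear(dim, vocab_size)

    def forward(self, images, text_ids):
        # images: (batch, 3, H, W); text_ids: (batch, seq_len)
        img_tokens = self.to_patches(images).flatten(2).transpose(1, 2) + self.patch_pos
        img_tokens = self.encoder(img_tokens)
        txt = self.token_emb(text_ids)
        seq = text_ids.size(1)
        # causal mask so each text position only attends to earlier positions
        causal_mask = torch.triu(
            torch.full((seq, seq), float("-inf"), device=text_ids.device), diagonal=1)
        out = self.decoder(txt, img_tokens, tgt_mask=causal_mask)
        return self.to_logits(out)  # next-token logits for caption-style training

if __name__ == "__main__":
    model = VisionEncoderTextDecoder()
    logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 32000, (2, 16)))
    print(logits.shape)  # torch.Size([2, 16, 32000])
```

The sketch only shows the encoder/decoder wiring; MaMMUT's distinguishing idea is training a single text decoder jointly on contrastive and generative objectives, which the repository implements.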
Related projects
Alternatives and complementary repositories for MaMMUT-pytorch
- Implementation of Mirasol, SOTA Multimodal Autoregressive model out of Google Deepmind, in Pytorch ⭐ 88 · Updated 10 months ago
- ⭐ 64 · Updated last year
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ⭐ 96 · Updated 2 months ago
- Implementation of Zorro, Masked Multimodal Transformer, in Pytorch ⭐ 95 · Updated last year
- Official code for "TOAST: Transfer Learning via Attention Steering" ⭐ 186 · Updated last year
- Code and models for the paper "The effectiveness of MAE pre-pretraining for billion-scale pretraining" https://arxiv.org/abs/2303.13496 ⭐ 81 · Updated 3 months ago
- Code for the paper "Hyperbolic Image-Text Representations", Desai et al, ICML 2023 ⭐ 135 · Updated last year
- JAX implementation of ViT-VQGAN ⭐ 77 · Updated 2 years ago
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings" ⭐ 110 · Updated 3 months ago
- A big_vision-inspired repo that implements a generic Auto-Encoder class capable of representation learning and generative modeling ⭐ 30 · Updated 4 months ago
- Official repository of paper "Subobject-level Image Tokenization" ⭐ 62 · Updated 6 months ago
- Object Recognition as Next Token Prediction (CVPR 2024 Highlight) ⭐ 161 · Updated last month
- Language Quantized AutoEncoders ⭐ 94 · Updated last year
- Video descriptions of research papers relating to foundation models and scaling ⭐ 30 · Updated last year
- Data-Efficient Multimodal Fusion on a Single GPU ⭐ 47 · Updated 6 months ago
- FuseCap: Large Language Model for Visual Data Fusion in Enriched Caption Generation ⭐ 49 · Updated 7 months ago
- Official repository for the General Robust Image Task (GRIT) Benchmark ⭐ 50 · Updated last year
- ⭐ 48 · Updated last year
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ⭐ 44 · Updated last year
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs. ⭐ 88 · Updated 7 months ago
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ⭐ 298 · Updated 5 months ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ⭐ 153 · Updated 11 months ago
- M4 experiment logbook ⭐ 56 · Updated last year
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ⭐ 107 · Updated 4 months ago
- https://arxiv.org/abs/2209.15162 ⭐ 48 · Updated last year
- [CVPR 2023] Learning Visual Representations via Language-Guided Sampling ⭐ 145 · Updated last year
- Code and Models for "GeneCIS: A Benchmark for General Conditional Image Similarity" ⭐ 54 · Updated last year
- Matryoshka Multimodal Models ⭐ 82 · Updated this week
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ⭐ 132 · Updated last year
- [ICML 2024] This repository includes the official implementation of our paper "Rejuvenating image-GPT as Strong Visual Representation Lea… ⭐ 98 · Updated 6 months ago