invictus717 / MetaTransformer
Meta-Transformer for Unified Multimodal Learning
☆1,572 · Updated last year
Alternatives and similar repositories for MetaTransformer:
Users who are interested in MetaTransformer are comparing it to the libraries listed below.
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" ☆1,175 · Updated last year
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,233 · Updated 2 years ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,691 · Updated 5 months ago
- A general representation model across vision, audio, language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… ☆1,016 · Updated 5 months ago
- EVA Series: Visual Representation Fantasies from BAAI ☆2,427 · Updated 7 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,018 · Updated 8 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆954 · Updated last year
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language ☆620 · Updated 4 months ago
- [ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity" ☆2,502 · Updated 7 months ago
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆789 · Updated 11 months ago
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,554 · Updated 3 months ago
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,048 · Updated 2 weeks ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,545 · Updated this week
- Grounded Language-Image Pre-training ☆2,338 · Updated last year
- Multimodal-GPT ☆1,493 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,179 · Updated 8 months ago
- Official code for VisProg (CVPR 2023 Best Paper!) ☆707 · Updated 6 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆726 · Updated last year
- [ECCV 2024] VideoMamba: State Space Model for Efficient Video Understanding ☆912 · Updated 7 months ago
- [ICML 2024] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model ☆3,248 · Updated 3 weeks ago
- VisionLLM Series ☆1,013 · Updated last week
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,359 · Updated 2 months ago
- [CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language ☆1,305 · Updated last year
- Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion" ☆1,401 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,512 · Updated 6 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆835 · Updated 3 months ago