google-research / scenic
Scenic: A Jax Library for Computer Vision Research and Beyond
☆3,759 · Updated 3 weeks ago
Alternatives and similar repositories for scenic
Users interested in scenic are comparing it to the libraries listed below.
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆3,339 · Updated 8 months ago
- Grounded Language-Image Pre-training ☆2,569 · Updated 2 years ago
- Official DeiT repository ☆4,318 · Updated last year
- The official PyTorch implementation of our paper "Is Space-Time Attention All You Need for Video Understanding?" ☆1,822 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI ☆2,639 · Updated last year
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,691 · Updated this week
- PyTorch code for training Vision Transformers with the self-supervised learning method DINO ☆7,424 · Updated last year
- Code release for ConvNeXt model ☆6,280 · Updated 3 years ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,050 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,195 · Updated 2 years ago
- This is a collection of our NAS and Vision Transformer work. ☆1,824 · Updated last year
- Code release for ConvNeXt V2 model ☆1,952 · Updated last year
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training ☆1,664 · Updated 2 years ago
- OpenMMLab Self-Supervised Learning Toolbox and Benchmark ☆3,298 · Updated 2 years ago
- This is an official implementation for "Video Swin Transformers". ☆1,624 · Updated 2 years ago
- VISSL is FAIR's library of extensible, modular and scalable components for SOTA Self-Supervised Learning with images. ☆3,294 · Updated last year
- A deep learning library for video understanding research. ☆3,538 · Updated 2 weeks ago
- [CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions ☆2,786 · Updated 10 months ago
- Collection of common code shared among different research projects in FAIR's computer vision team. ☆2,221 · Updated 2 weeks ago
- An open source implementation of CLIP. ☆13,293 · Updated 2 months ago
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,552 · Updated last year
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,126 · Updated last year
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ☆1,463 · Updated 7 months ago
- Code release for "Masked-attention Mask Transformer for Universal Image Segmentation" ☆3,231 · Updated last year
- Recent Transformer-based CV and related works. ☆1,339 · Updated 2 years ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,166 · Updated last year
- ☆12,260 · Updated 2 weeks ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,231 · Updated last year
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,646 · Updated last year
- Meta-Transformer for Unified Multimodal Learning ☆1,651 · Updated 2 years ago