google-research / scenic
Scenic: A Jax Library for Computer Vision Research and Beyond
☆3,534 · Updated last week
Alternatives and similar repositories for scenic
Users interested in scenic are comparing it to the libraries listed below.
- Official DeiT repository ☆4,194 · Updated last year
- Code release for ConvNeXt model ☆5,993 · Updated 2 years ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆2,843 · Updated last month
- ☆11,326 · Updated 2 months ago
- PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO ☆6,831 · Updated 10 months ago
- Grounded Language-Image Pre-training ☆2,394 · Updated last year
- VISSL is FAIR's library of extensible, modular and scalable components for SOTA Self-Supervised Learning with images. ☆3,283 · Updated last year
- [ICCV2023 Best Paper Finalist] PyTorch implementation of DiffusionDet (https://arxiv.org/abs/2211.09788) ☆2,167 · Updated 2 years ago
- Code release for "Masked-attention Mask Transformer for Universal Image Segmentation" ☆2,783 · Updated 9 months ago
- EVA Series: Visual Representation Fantasies from BAAI ☆2,479 · Updated 9 months ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch ☆1,132 · Updated last year
- The official pytorch implementation of our paper "Is Space-Time Attention All You Need for Video Understanding?" ☆1,679 · Updated last year
- This is a collection of our NAS and Vision Transformer work. ☆1,755 · Updated 9 months ago
- A collection of papers on transformers in vision. Awesome Transformer with Computer Vision (CV) ☆3,484 · Updated 4 months ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,594 · Updated last week
- This is an official implementation for "Video Swin Transformers". ☆1,543 · Updated 2 years ago
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training ☆1,493 · Updated last year
- CVNets: A library for training computer vision networks ☆1,862 · Updated last year
- PyTorch implementation of MoCo: https://arxiv.org/abs/1911.05722 ☆4,971 · Updated last week
- PyTorch implementation of MAE: https://arxiv.org/abs/2111.06377 ☆7,786 · Updated 9 months ago
- Collection of common code that's shared among different research projects in FAIR computer vision team. ☆2,125 · Updated 5 months ago
- An ultimately comprehensive paper list of Vision Transformer/Attention, including papers, codes, and related websites ☆4,853 · Updated 9 months ago
- Code release for ConvNeXt V2 model ☆1,725 · Updated 8 months ago
- This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows". ☆14,729 · Updated 9 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆979 · Updated last year
- OpenMMLab Self-Supervised Learning Toolbox and Benchmark ☆3,257 · Updated last year
- SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners ☆4,265 · Updated last year
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,051 · Updated 10 months ago
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,298 · Updated 2 weeks ago
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,501 · Updated last year