MicPie / clasp
CLASP - Contrastive Language-Aminoacid Sequence Pretraining
☆143 · Updated 3 years ago
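
CLASP's name points to a CLIP-style symmetric contrastive objective applied to paired natural-language descriptions and amino-acid sequences. The sketch below is a minimal, hedged illustration of that kind of objective in PyTorch, not the repository's actual code; the function name, tensor shapes, and temperature value are assumptions.

```python
# Minimal sketch of a CLIP-style symmetric contrastive loss, in the spirit of
# "Contrastive Language-Aminoacid Sequence Pretraining". Names, shapes, and the
# default temperature are illustrative assumptions, not CLASP's implementation.
import torch
import torch.nn.functional as F

def contrastive_loss(text_emb: torch.Tensor,
                     seq_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """text_emb, seq_emb: (batch, dim) embeddings of paired text and amino-acid sequences."""
    # L2-normalize so the dot product becomes cosine similarity
    text_emb = F.normalize(text_emb, dim=-1)
    seq_emb = F.normalize(seq_emb, dim=-1)

    # (batch, batch) similarity matrix; diagonal entries are the matched pairs
    logits = text_emb @ seq_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy: text-to-sequence and sequence-to-text directions
    loss_t2s = F.cross_entropy(logits, targets)
    loss_s2t = F.cross_entropy(logits.t(), targets)
    return (loss_t2s + loss_s2t) / 2
```
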
Alternatives and similar repositories for clasp
Users interested in clasp are comparing it to the libraries listed below.
- Source code and pre-trained/fine-tuned checkpoint for NAACL 2021 paper LightningDOT ☆72 · Updated 2 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆119 · Updated 4 years ago
- Extended Intramodal and Intermodal Semantic Similarity Judgments for MS-COCO ☆52 · Updated 5 years ago
- MERLOT: Multimodal Neural Script Knowledge Models ☆224 · Updated 3 years ago
- Big-Interleaved-Dataset ☆58 · Updated 2 years ago
- Research Code for NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": UNITER… ☆119 · Updated 4 years ago
- A collection of multimodal datasets and visual features for VQA and captioning in PyTorch. Just run "pip install multimodal" ☆83 · Updated 3 years ago
- ☆131 · Updated 2 years ago
- Reliably download millions of images efficiently ☆117 · Updated 4 years ago
- A Domain-Agnostic Benchmark for Self-Supervised Learning ☆107 · Updated 2 years ago
- ☆47 · Updated 3 months ago
- Open source code for AAAI 2023 Paper "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" ☆166 · Updated 2 years ago
- PyTorch code for EMNLP 2020 Paper "Vokenization: Improving Language Understanding with Visual Supervision" ☆190 · Updated 4 years ago
- A unified framework to jointly model images, text, and human attention traces ☆79 · Updated 4 years ago
- ☆65 · Updated 3 years ago
- A Pytorch implementation of Attention on Attention module (both self and guided variants), for Visual Question Answering ☆43 · Updated 4 years ago
- Implementation of Fast Transformer in Pytorch ☆175 · Updated 4 years ago
- PyTorch code for "Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention" (WACV 2023) ☆33 · Updated 2 years ago
- PyTorch code for EMNLP 2020 paper "X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers" ☆50 · Updated 4 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆120 · Updated 4 years ago
- Use CLIP to represent video for Retrieval Task ☆70 · Updated 4 years ago
- [EMNLP 2021] Code and data for our paper "Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers… ☆20 · Updated 3 years ago
- Multitask Multilingual Multimodal Pre-training ☆72 · Updated 2 years ago
- Data and code for CVPR 2020 paper "VIOLIN: A Large-Scale Dataset for Video-and-Language Inference" ☆162 · Updated 5 years ago
- Visual Language Transformer Interpreter - an interactive visualization tool for interpreting vision-language transformers ☆94 · Updated last year
- [BMVC22] Official Implementation of ViCHA: "Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment" ☆55 · Updated 2 years ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆78 · Updated 3 years ago
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models ☆82 · Updated last year
- [TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-La… ☆114 · Updated 3 years ago
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 2 years ago