OFA-Sys / OFA-Compress
OFA-Compress is a unified framework that provides OFA model fine-tuning, distillation, and inference capabilities in a Hugging Face version, and is committed to making large models lightweight.
☆28 · Updated 2 years ago
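The distillation the description refers to is standard teacher-student knowledge distillation. As a minimal sketch of that objective (generic PyTorch, not OFA-Compress's actual API; the function name and tensor shapes below are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label knowledge distillation loss (Hinton et al., 2015).

    Matches the student's softened output distribution to the teacher's
    via KL divergence; the temperature**2 factor keeps gradient
    magnitudes comparable across temperatures.
    """
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Hypothetical shapes: batch of 8, vocabulary of 1000 classes.
teacher_logits = torch.randn(8, 1000)                       # frozen teacher outputs
student_logits = torch.randn(8, 1000, requires_grad=True)   # trainable student outputs
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```

In practice the student is also trained on the original task loss, with the two terms weighted against each other.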
Alternatives and similar repositories for OFA-Compress
Users interested in OFA-Compress are comparing it to the libraries listed below
- [2024-ACL]: TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated last year
- VaLM: Visually-augmented Language Modeling. ICLR 2023. ☆56 · Updated 2 years ago
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆87 · Updated 2 years ago
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 · Updated 3 years ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated 2 years ago
- CVPR 2021 Official Pytorch Code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 3 years ago
- Pytorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners ☆115 · Updated 2 years ago
- Open source code for AAAI 2023 Paper "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" ☆166 · Updated 2 years ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆90 · Updated 2 years ago
- [ECCV2022] Contrastive Vision-Language Pre-training with Limited Resources ☆45 · Updated 2 years ago
- CVPR 2022 (Oral) Pytorch Code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated 3 years ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆43 · Updated 2 years ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated 2 years ago
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models. ☆127 · Updated 2 years ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR2022) ☆205 · Updated 2 years ago
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 2 years ago
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 3 years ago
- Official GitHub repo of G-LLaVA ☆146 · Updated 4 months ago
- Source code and data used in the papers ViQuAE (Lerner et al., SIGIR'22), Multimodal ICT (Lerner et al., ECIR'23) and Cross-modal Retriev… ☆37 · Updated 6 months ago
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆163 · Updated 10 months ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆137 · Updated 2 years ago