huggingface / OBELICS
Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M documents, 115B text tokens and 353M images.
☆207 · Updated 11 months ago
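OBELICS is also distributed as a dataset on the Hugging Face Hub. Below is a minimal sketch of streaming a few documents; the `HuggingFaceM4/OBELICS` dataset ID and the interleaved `images`/`texts` schema are assumptions to verify against the dataset card.

```python
# Minimal sketch: streaming a few OBELICS documents from the Hugging Face Hub.
# Assumes the dataset is published as "HuggingFaceM4/OBELICS" and that each
# document interleaves `images` (URLs or None) with `texts` (strings or None),
# with exactly one of the two set at each position.
from datasets import load_dataset

# Streaming avoids downloading the full 141M-document corpus up front.
dataset = load_dataset("HuggingFaceM4/OBELICS", split="train", streaming=True)

for doc in dataset.take(3):
    for image_url, text in zip(doc["images"], doc["texts"]):
        if image_url is not None:
            print(f"[image] {image_url}")
        else:
            print(text[:80])
```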
Alternatives and similar repositories for OBELICS
Users interested in OBELICS are comparing it to the libraries listed below.
- M4 experiment logbook ☆58 · Updated 2 years ago
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training ☆167 · Updated 2 years ago
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s…☆139Updated last year
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆263 · Updated 8 months ago
- Multimodal language model benchmark, featuring challenging examples ☆173 · Updated 8 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆212 · Updated last year
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆269 · Updated last year
- Matryoshka Multimodal Models ☆113 · Updated 7 months ago
- Open-source code for the AAAI 2023 paper "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" ☆166 · Updated 2 years ago
- Dataset introduced in PlotQA: Reasoning over Scientific Plots ☆79 · Updated 2 years ago
- ☆50 · Updated last year
- LL3M: Large Language and Multi-Modal Model in Jax ☆73 · Updated last year
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆307 · Updated 7 months ago
- Big-Interleaved-Dataset ☆58 · Updated 2 years ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆270 · Updated last year
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- ☆65 · Updated last year
- Repository releasing the dataset and models for multimodal puzzle reasoning ☆101 · Updated 6 months ago
- ☆228 · Updated last year
- Official GitHub repo of G-LLaVA ☆146 · Updated 6 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆177 · Updated 11 months ago
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆145 · Updated last week
- Scaling Data-Constrained Language Models ☆339 · Updated last month
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆159 · Updated 10 months ago
- Python library to evaluate the robustness of VLMs across diverse benchmarks ☆210 · Updated last week
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆225 · Updated 5 months ago
- Self-Alignment with Principle-Following Reward Models ☆163 · Updated 3 months ago
- Open LLaMA Eyes to See the World ☆174 · Updated 2 years ago
- A family of highly capable yet efficient large multimodal models ☆187 · Updated last year
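The SemDeDup entry above describes the core idea well enough to sketch: embed every item, cluster the embeddings, and within each cluster keep only one member of any pair whose cosine similarity exceeds a threshold. The sketch below is not the authors' implementation; the function name, threshold, and cluster count are illustrative choices.

```python
# Minimal sketch of the semantic-deduplication idea behind SemDeDup (not the
# authors' code): embed items, k-means-cluster the embeddings, and inside each
# cluster keep only one member of any pair above a cosine-similarity threshold.
import numpy as np
from sklearn.cluster import KMeans

def semantic_dedup(embeddings: np.ndarray, n_clusters: int = 8,
                   threshold: float = 0.95) -> list[int]:
    """Return indices of items to keep after semantic deduplication."""
    # Normalize rows so dot products are cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(normed)

    keep: list[int] = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        sims = normed[idx] @ normed[idx].T  # pairwise cosine similarities
        removed: set[int] = set()
        for i in range(len(idx)):
            if i in removed:
                continue
            keep.append(int(idx[i]))
            # Drop every later item too similar to the one we just kept.
            for j in range(i + 1, len(idx)):
                if sims[i, j] >= threshold:
                    removed.add(j)
    return sorted(keep)

# Example: 200 base vectors, each repeated 5x with small noise, so near-
# duplicates abound; roughly one survivor per group is expected.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 64))
noisy = np.repeat(base, 5, axis=0) + 0.01 * rng.normal(size=(1000, 64))
kept = semantic_dedup(noisy.astype(np.float32))
print(f"kept {len(kept)} of 1000 items")
```

The clustering step is what keeps this tractable at scale: pairwise similarities are computed only within clusters rather than across the full dataset.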