huggingface / OBELICS
Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M documents, 115B text tokens and 353M images.
☆197 · Updated 6 months ago
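For quick exploration, the dataset this code produces is also published on the Hugging Face Hub. Below is a minimal sketch, assuming the Hub ID `HuggingFaceM4/OBELICS` and using streaming so the full corpus is not downloaded up front; it is an illustration, not the repo's own tooling.

```python
# Minimal sketch: stream a few OBELICS documents with the `datasets` library.
# Assumes the corpus is hosted on the Hub as "HuggingFaceM4/OBELICS".
from itertools import islice
from datasets import load_dataset

# Streaming iterates over shards lazily instead of downloading everything.
ds = load_dataset("HuggingFaceM4/OBELICS", split="train", streaming=True)

for doc in islice(ds, 3):
    # Each record is an interleaved web document; inspect its fields rather
    # than hard-coding a schema here.
    print(sorted(doc.keys()))
```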
Alternatives and similar repositories for OBELICS:
Users interested in OBELICS are comparing it to the libraries listed below.
- M4 experiment logbook ☆57 · Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models ☆82 · Updated last year
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆205 · Updated last year
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆234 · Updated 3 months ago
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training ☆166 · Updated last year
- Scaling Data-Constrained Language Models ☆333 · Updated 6 months ago
- Multimodal language model benchmark, featuring challenging examples ☆160 · Updated 3 months ago
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s…☆131Updated last year
- Matryoshka Multimodal Models ☆98 · Updated 2 months ago
- Python library to evaluate VLMs' robustness across diverse benchmarks ☆195 · Updated this week
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets ☆156 · Updated 11 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆290 · Updated 2 months ago
- Self-Alignment with Principle-Following Reward Models ☆156 · Updated last year
- LL3M: Large Language and Multi-Modal Model in Jax ☆70 · Updated 11 months ago
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆135 · Updated 5 months ago
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆130 · Updated 4 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆186 · Updated 7 months ago
- Implementation of PaLI-3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆145 · Updated last month
- A family of highly capable yet efficient large multimodal models ☆178 · Updated 7 months ago
- Official code for paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆208 · Updated this week
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆257 · Updated 8 months ago
- DSIR large-scale data selection framework for language model training ☆244 · Updated 11 months ago
- [CVPR 2024] A benchmark for evaluating Multimodal LLMs using multiple-choice questions ☆332 · Updated 2 months ago
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆164 · Updated last week
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆115 · Updated 8 months ago
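As referenced in the SemDeDup entry above, here is a toy sketch of that technique. It is not the repo's implementation; the cluster count and similarity threshold are illustrative assumptions.

```python
# Toy sketch of the SemDeDup idea, NOT the repo's implementation: embed items,
# cluster the embeddings, then within each cluster drop entries that are
# near-duplicates (cosine similarity above a threshold) of an already-kept one.
# `n_clusters` and `threshold` are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def semantic_dedup(embeddings: np.ndarray, n_clusters: int = 8,
                   threshold: float = 0.95) -> list[int]:
    """Return indices of the items to keep."""
    # Normalize rows so dot products equal cosine similarities.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
    keep: list[int] = []
    for c in range(n_clusters):
        kept_in_cluster: list[int] = []
        for i in np.where(labels == c)[0]:
            # Keep `i` only if it is not too similar to anything already kept.
            if all(emb[i] @ emb[j] < threshold for j in kept_in_cluster):
                kept_in_cluster.append(int(i))
        keep.extend(kept_in_cluster)
    return sorted(keep)
```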