huggingface / OBELICS
Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M documents, 115B text tokens and 353M images.
☆202 · Updated 9 months ago
Alternatives and similar repositories for OBELICS
Users interested in OBELICS are comparing it to the libraries listed below.
- M4 experiment logbook ☆57 · Updated last year
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆208 · Updated last year
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training ☆167 · Updated 2 years ago
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s…☆136Updated last year
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated 3 weeks ago
- Multimodal language model benchmark, featuring challenging examples ☆167 · Updated 5 months ago
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets. ☆157 · Updated last year
- Matryoshka Multimodal Models ☆107 · Updated 4 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆249 · Updated 5 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆145 · Updated last month
- LL3M: Large Language and Multi-Modal Model in Jax ☆72 · Updated last year
- ☆133 · Updated last year
- Scaling Data-Constrained Language Models ☆334 · Updated 8 months ago
- DSIR, a large-scale data selection framework for language model training; a sketch of the idea follows this list ☆249 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- SVIT: Scaling up Visual Instruction Tuning ☆162 · Updated 11 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆301 · Updated 4 months ago
- Multipack distributed sampler for fast padding-free training of LLMs; a sketch of the packing idea follows this list ☆188 · Updated 9 months ago
- Python library to evaluate the robustness of VLMs across diverse benchmarks ☆207 · Updated this week
- A family of highly capable yet efficient large multimodal models ☆183 · Updated 9 months ago
- Code/data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆267 · Updated 11 months ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆137 · Updated 2 years ago
- DataComp: In search of the next generation of multimodal datasets ☆710 · Updated last month
- VLM Evaluation: benchmark for VLMs, spanning text generation tasks from VQA to captioning ☆112 · Updated 8 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆178 · Updated 8 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆215 · Updated 2 months ago
- [CVPR 2024] A benchmark for evaluating Multimodal LLMs using multiple-choice questions ☆340 · Updated 4 months ago
- Official GitHub repo of G-LLaVA ☆138 · Updated 3 months ago
- Harnessing 1.4M GPT4V-synthesized data for a lite vision-language model ☆261 · Updated 11 months ago
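
Sketches of selected techniques

SemDeDup (listed above) removes near-duplicate training examples by comparing embeddings within clusters rather than across the whole dataset. Below is a minimal sketch of that idea, not the repository's code: the greedy keep-first rule, the cluster count, and the 0.95 similarity threshold are illustrative assumptions, and the embeddings are presumed pre-computed and unit-normalized.

```python
# Sketch only: greedy within-cluster deduplication in the spirit of SemDeDup.
import numpy as np
from sklearn.cluster import KMeans

def semdedup(embeddings: np.ndarray, n_clusters: int = 10, threshold: float = 0.95):
    """Return indices to keep after dropping near-duplicates within each cluster."""
    # Cluster first so pairwise similarity stays cheap (within clusters only).
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    keep = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        sims = embeddings[idx] @ embeddings[idx].T  # cosine sim for unit vectors
        removed = set()
        for i in range(len(idx)):
            if i in removed:
                continue
            keep.append(idx[i])
            # Drop every later item too similar to the one just kept.
            removed.update(j for j in range(i + 1, len(idx)) if sims[i, j] >= threshold)
    return sorted(keep)

# Usage: x = encode(items); x /= np.linalg.norm(x, axis=1, keepdims=True); semdedup(x)
```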
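DSIR (listed above) selects pretraining data by importance resampling: raw documents are re-weighted so their hashed n-gram profile matches a target corpus. The sketch below is an illustration under stated assumptions (hashed word-unigram features, add-one smoothing, Gumbel top-k resampling), not the repository's API.

```python
# Sketch only: importance-resampling data selection in the spirit of DSIR.
import hashlib
import numpy as np

BUCKETS = 10_000  # size of the hashed feature space (illustrative)

def featurize(text: str) -> np.ndarray:
    """Hash each word into a fixed number of buckets and count occurrences."""
    counts = np.zeros(BUCKETS)
    for tok in text.lower().split():
        counts[int(hashlib.md5(tok.encode()).hexdigest(), 16) % BUCKETS] += 1
    return counts

def select(raw_docs, target_docs, k, seed=0):
    """Pick k raw docs whose hashed unigram profile resembles the target set."""
    p = sum(featurize(d) for d in target_docs) + 1.0  # smoothed target counts
    q = sum(featurize(d) for d in raw_docs) + 1.0     # smoothed raw counts
    log_ratio = np.log(p / p.sum()) - np.log(q / q.sum())
    # Importance log-weight of each raw doc under a bag-of-buckets model.
    log_w = np.array([featurize(d) @ log_ratio for d in raw_docs])
    # Gumbel top-k draws k docs without replacement with prob ∝ exp(log_w).
    gumbel = np.random.default_rng(seed).gumbel(size=len(raw_docs))
    return np.argsort(-(log_w + gumbel))[:k]
```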
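The multipack sampler above speeds up training by packing variable-length sequences into fixed token budgets so batches need no padding. Below is a minimal first-fit-decreasing sketch of that packing step; the function name is hypothetical, and real samplers also shuffle and shard the bins across ranks, which is omitted here.

```python
# Sketch only: first-fit-decreasing bin packing for padding-free batching.
def pack(lengths, budget):
    """Pack sequence indices into bins whose token totals stay <= budget."""
    order = sorted(range(len(lengths)), key=lambda i: -lengths[i])
    bins, space = [], []  # parallel lists: indices per bin, remaining budget
    for i in order:
        for b, free in enumerate(space):
            if lengths[i] <= free:       # first bin with room wins
                bins[b].append(i)
                space[b] -= lengths[i]
                break
        else:                            # no bin fits: open a new one
            bins.append([i])
            space.append(budget - lengths[i])
    return bins

# e.g. pack([512, 300, 200, 1000, 128], budget=1024)
# -> [[3], [0, 1, 2], [4]]  (each bin's token total stays within 1024)
```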