allenai / mmc4
MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text.
⭐936 · Updated 4 months ago
Alternatives and similar repositories for mmc4
Users interested in mmc4 are comparing it to the libraries listed below.
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs" · ⭐482 · Updated last year
- DataComp: In search of the next generation of multimodal datasets · ⭐731 · Updated 3 months ago
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" · ⭐521 · Updated last year
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the u… · ⭐775 · Updated last year
- Data and code for NeurIPS 2022 Paper "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering". · ⭐683 · Updated 10 months ago
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 · ⭐1,119 · Updated last year
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch · ⭐1,256 · Updated 2 years ago
- ⭐906 · Updated 2 years ago
- Dromedary: towards helpful, ethical and reliable LLMs. · ⭐1,148 · Updated 3 months ago
- LOMO: LOw-Memory Optimization · ⭐989 · Updated last year
- Official repo for MM-REACT · ⭐954 · Updated last year
- MINT-1T: A one trillion token multimodal interleaved dataset. · ⭐821 · Updated last year
- Official implementation of paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" · ⭐861 · Updated 3 months ago
- ⭐621 · Updated last year
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. · ⭐569 · Updated last year
- Multimodal-GPT · ⭐1,505 · Updated 2 years ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. · ⭐822 · Updated last year
- Salesforce open-source LLMs with 8k sequence length. · ⭐721 · Updated 6 months ago
- This is the official repository for the LENS (Large Language Models Enhanced to See) system. · ⭐352 · Updated 2 weeks ago
- Official Repository of ChatCaptioner · ⭐464 · Updated 2 years ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch · ⭐647 · Updated 7 months ago
- An open source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", an all-new multi modal … · ⭐363 · Updated last year
- Code for the paper "ViperGPT: Visual Inference via Python Execution for Reasoning" · ⭐1,704 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" · ⭐1,061 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI · ⭐1,741 · Updated 10 months ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition · ⭐645 · Updated last year
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" · ⭐269 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. · ⭐48 · Updated 2 years ago
- Inference code for Persimmon-8B · ⭐415 · Updated last year
- Research Trends in LLM-guided Multimodal Learning. · ⭐357 · Updated last year