allenai / mmc4
MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text.
⭐930 · Updated 2 months ago
Alternatives and similar repositories for mmc4
Users interested in mmc4 are comparing it to the repositories listed below.
- Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ⭐482 · Updated last year
- [NeurIPS 2023] Official implementation of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models". ⭐519 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ⭐1,108 · Updated last year
- [NeurIPS 2023] RRHF & Wombat. ⭐809 · Updated last year
- LOMO: LOw-Memory Optimization. ⭐984 · Updated 11 months ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch. ⭐642 · Updated 5 months ago
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the u… ⭐771 · Updated last year
- Inference code for Persimmon-8B. ⭐415 · Updated last year
- DataComp: In search of the next generation of multimodal datasets. ⭐710 · Updated last month
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input". ⭐1,059 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions. ⭐821 · Updated 2 years ago
- A simulation framework for RLHF and alternatives; develop your RLHF method without collecting human data. ⭐809 · Updated 11 months ago
- ⭐457 · Updated last year
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. ⭐762 · Updated 7 months ago
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ⭐561 · Updated last year
- Implementation of Memorizing Transformers (ICLR 2022), an attention net augmented with indexing and retrieval of memories using approximate … ⭐632 · Updated last year
- Implementation of 🦩 Flamingo, the state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch. ⭐1,241 · Updated 2 years ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ⭐44 · Updated 2 years ago
- Code for VPGTrans: Transfer Visual Prompt Generator across LLMs (VL-LLaMA, VL-Vicuna). ⭐272 · Updated last year
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca). ⭐1,118 · Updated last year
- Salesforce open-source LLMs with 8k sequence length. ⭐716 · Updated 4 months ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition. ⭐635 · Updated 10 months ago
- Official implementation of the NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ⭐795 · Updated last year
- Research Trends in LLM-guided Multimodal Learning. ⭐357 · Updated last year
- Data and code for the NeurIPS 2022 paper "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering". ⭐664 · Updated 8 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens". ⭐860 · Updated 3 weeks ago
- Official repo for MM-REACT. ⭐949 · Updated last year
- Multimodal-GPT. ⭐1,499 · Updated last year
- An open-source implementation of Google's PaLM models. ⭐818 · Updated 11 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills. ⭐743 · Updated last year