mlfoundations / MINT-1T
MINT-1T: A one-trillion-token multimodal interleaved dataset.
☆770 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for MINT-1T
- Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation ☆913 · Updated last week
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024). ☆1,117 · Updated this week
- Open weights language model from Google DeepMind, based on Griffin. ☆606 · Updated 4 months ago
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤 ☆838 · Updated 3 months ago
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆801 · Updated 2 months ago
- Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation ☆676 · Updated 3 months ago
- Reference implementation of the Megalodon 7B model ☆504 · Updated 6 months ago
- DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models ☆824 · Updated 6 months ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆350 · Updated 3 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,333 · Updated 6 months ago
- From-scratch implementation of a sparse mixture-of-experts language model, inspired by Andrej Karpathy's makemore :) ☆593 · Updated last week
- OLMoE: Open Mixture-of-Experts Language Models ☆435 · Updated this week
- Yi-1.5 is an upgraded version of Yi, delivering stronger performance in coding, math, reasoning, and instruction-following capability. ☆513 · Updated 4 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆642 · Updated last month
- nanoGPT-style version of Llama 3.1 ☆1,231 · Updated 3 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆703 · Updated 9 months ago
- HPT - Open Multimodal LLMs from HyperGAI ☆312 · Updated 5 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,245 · Updated 2 weeks ago
- A Framework of Small-scale Large Multimodal Models ☆635 · Updated 3 weeks ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,385 · Updated 8 months ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆960 · Updated 3 months ago
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,000 · Updated 9 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆1,823 · Updated 3 months ago
- DataComp for Language Models ☆1,150 · Updated 2 weeks ago
- [NeurIPS'24 Spotlight] Speeds up long-context LLMs' inference by approximating attention with dynamic sparse computation, which reduces in… ☆776 · Updated this week
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆479 · Updated 5 months ago
- Large Reasoning Models ☆457 · Updated this week