apple / ml-l3m
Large multi-modal models (L3M) pre-training.
☆219 · Updated last month
Alternatives and similar repositories for ml-l3m
Users interested in ml-l3m are comparing it to the repositories listed below.
- Simple & Scalable Pretraining for Neural Architecture Research ☆298 · Updated last week
- Train, tune, and infer Bamba model ☆135 · Updated 5 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆147 · Updated last month
- Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models (TMLR 2025) ☆119 · Updated last month
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in language modeling ☆111 · Updated last month
- RLP: Reinforcement as a Pretraining Objective ☆198 · Updated last month
- Official implementation of the paper "ZClip: Adaptive Spike Mitigation for LLM Pre-Training" ☆137 · Updated 3 weeks ago
- PyTorch implementation of models from the Zamba2 series ☆185 · Updated 9 months ago
- ☆120 · Updated last month
- Official implementation for "I-Con: A Unifying Framework for Representation Learning" (ICLR 2025) ☆117 · Updated 4 months ago
- ☆56 · Updated last year
- ☆302 · Updated 6 months ago
- ☆201 · Updated 10 months ago
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning ☆203 · Updated this week
- MatFormer repo ☆64 · Updated 11 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆110 · Updated 6 months ago
- The official GitHub repo for "Diffusion Language Models are Super Data Learners" ☆145 · Updated this week
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆85 · Updated last month
- Getting crystal-like representations with harmonic loss ☆192 · Updated 7 months ago
- Source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" ☆239 · Updated last week
- Code and weights for the paper "Cluster and Predict Latent Patches for Improved Masked Image Modeling" ☆123 · Updated 6 months ago
- Flash Attention Triton kernel with support for second-order derivatives ☆107 · Updated 2 weeks ago
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆117 · Updated 3 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆107 · Updated 8 months ago
- 📄 Small Batch Size Training for Language Models ☆63 · Updated last month
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), combining the best of RNNs and transformers ☆53 · Updated 7 months ago
- ☆57 · Updated last month
- Normalized Transformer (nGPT) ☆192 · Updated 11 months ago
- Esoteric Language Models ☆104 · Updated last month
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated last month