facebookresearch / Mixture-of-Transformers
Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models. TMLR 2025. https://arxiv.org/abs/2411.04996
⭐31 · Updated this week
Alternatives and similar repositories for Mixture-of-Transformers
Users interested in Mixture-of-Transformers are comparing it to the repositories listed below.
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ⭐39 · Updated 6 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ⭐57 · Updated 8 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ⭐29 · Updated last month
- Code, results and other artifacts from the paper introducing the WildChat-50m dataset and the Re-Wild model family ⭐29 · Updated last month
- Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks" [to appear at ICLR 2025] ⭐19 · Updated 2 months ago
- Code and pretrained models for the paper "MatMamba: A Matryoshka State Space Model" ⭐59 · Updated 5 months ago
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode…" ⭐41 · Updated 3 weeks ago
- Official repo for InSTA: Towards Internet-Scale Training For Agents ⭐36 · Updated 2 weeks ago
- Triton implementation of the HyperAttention algorithm ⭐48 · Updated last year
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ⭐30 · Updated 2 months ago
- Exploration of automated dataset selection approaches at large scales ⭐40 · Updated 2 months ago
- Latest Weight Averaging (NeurIPS HITY 2022) ⭐30 · Updated last year
- AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories ⭐12 · Updated 3 weeks ago
- Aioli: A unified optimization framework for language model data mixing ⭐25 · Updated 3 months ago
- Utilities for Training Very Large Models ⭐58 · Updated 7 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ⭐84 · Updated 5 months ago
- PyTorch library for Active Fine-Tuning ⭐72 · Updated 2 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ⭐77 · Updated last month
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ⭐72 · Updated 6 months ago
- Simple and scalable tools for data-driven pretraining data selection ⭐23 · Updated 3 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ⭐27 · Updated 7 months ago
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ⭐108 · Updated 2 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ⭐29 · Updated 3 months ago