transmuteAI / trailmet
Transmute AI Lab Model Efficiency Toolkit
☆19 · Updated last year
Alternatives and similar repositories for trailmet:
Users interested in trailmet are comparing it to the libraries listed below.
- Notebook and Scripts that showcase running quantized diffusion models on consumer GPUs ☆37 · Updated 3 months ago
- Official implementation of ECCV24 paper: POA ☆24 · Updated 5 months ago
- ☆40 · Updated 9 months ago
- Latest Weight Averaging (NeurIPS HITY 2022) ☆28 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆23 · Updated 7 months ago
- The official repository for HyperZ⋅Z⋅W Operator Connects Slow-Fast Networks for Full Context Interaction. ☆31 · Updated 2 weeks ago
- Code for studying the super weight in LLM ☆73 · Updated last month
- Utilities for PyTorch distributed ☆23 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆52 · Updated 5 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆42 · Updated 6 months ago
- Minimal Implementation of Visual Autoregressive Modelling (VAR) ☆24 · Updated 3 weeks ago
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆48 · Updated 3 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- ☆38 · Updated 11 months ago
- Prune transformer layers (see the generic layer-pruning sketch after this list) ☆67 · Updated 8 months ago
- implementation of https://arxiv.org/pdf/2312.09299 ☆20 · Updated 6 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆96 · Updated 4 months ago
- some mixture of experts architecture implementations ☆12 · Updated 10 months ago
- Utilities for Training Very Large Models ☆57 · Updated 4 months ago
- Train a SmolLM-style llm on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆18 · Updated last week
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆50 · Updated 5 months ago
- Implementation of a modular, high-performance, and simplistic mamba for high-speed applications ☆33 · Updated 2 months ago
- ☆41 · Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆50 · Updated 9 months ago
- Contains materials for my talk "You don't know TensorFlow". ☆9 · Updated last year
- Contains my experiments with the `big_vision` repo to train ViTs on ImageNet-1k. ☆22 · Updated 2 years ago
- Unofficial Implementation of Selective Attention Transformer ☆14 · Updated 3 months ago
- Collection of autoregressive model implementations ☆78 · Updated 3 weeks ago
- ☆66 · Updated 6 months ago
- [Oral; NeurIPS OPT 2024] μLO: Compute-Efficient Meta-Generalization of Learned Optimizers ☆11 · Updated last month
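Several of the entries above (trailmet itself, "Prune transformer layers", NeuZip, NOLA) revolve around model-efficiency techniques. For orientation only, here is a minimal, generic sketch of structured layer pruning on a plain PyTorch Transformer encoder. It is not the API of trailmet or of any repository listed above; the model size and the kept-layer indices are illustrative assumptions.

```python
# Generic layer-pruning sketch in plain PyTorch (not tied to any repo above).
import torch
import torch.nn as nn

# Build a small 6-layer encoder as a stand-in model.
layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6)

# Hypothetical outcome of a layer-importance analysis: keep layers 0, 2, 3, 5.
keep = [0, 2, 3, 5]
encoder.layers = nn.ModuleList([encoder.layers[i] for i in keep])
encoder.num_layers = len(encoder.layers)  # keep metadata consistent

# Forward pass through the pruned stack: (batch, sequence, d_model).
x = torch.randn(2, 16, 128)
with torch.no_grad():
    out = encoder(x)
print(out.shape)  # torch.Size([2, 16, 128])
```

In practice the kept layers would be chosen by some importance metric, and the pruned model would typically be fine-tuned afterwards to recover accuracy; the repositories above implement their own variants of such pipelines.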