transmuteAI / trailmet
Transmute AI Lab Model Efficiency Toolkit
☆19 · Updated 2 years ago
Alternatives and similar repositories for trailmet
Users interested in trailmet are comparing it to the libraries listed below.
- Fork of Flame repo for training of some new stuff in development ☆19 · Updated last month
- Official implementation of ECCV24 paper: POA ☆24 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆47 · Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆61 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 7 months ago
- Collection of autoregressive model implementations ☆85 · Updated 3 weeks ago
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Updated last year
- ☆91 · Updated last year
- KV Cache Steering for Inducing Reasoning in Small Language Models ☆46 · Updated 6 months ago
- ☆42 · Updated last year
- Prune transformer layers ☆74 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated last year
- Implementation of a modular, high-performance, and simplistic mamba for high-speed applications ☆40 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- JAX Scalify: end-to-end scaled arithmetics ☆18 · Updated last year
- Experimental scripts for researching data adaptive learning rate scheduling. ☆22 · Updated 2 years ago
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆57 · Updated last year
- Everything you need to reproduce "Better plain ViT baselines for ImageNet-1k" in PyTorch, and more ☆12 · Updated last week
- A repository for research on medium sized language models. ☆77 · Updated last year
- ☆71 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 9 months ago
- Code repository for the paper "MrT5: Dynamic Token Merging for Efficient Byte-level Language Models." ☆53 · Updated 4 months ago
- Implementation of the Mamba SSM with hf_integration. ☆55 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆86 · Updated 4 months ago
- Work in progress. ☆79 · Updated 2 months ago
- ☆47 · Updated 2 years ago
- ☆67 · Updated 10 months ago
- Unofficial Implementation of Selective Attention Transformer ☆20 · Updated last year
- ☆35 · Updated last year