YeonwooSung / Pytorch_mixture-of-experts
PyTorch implementation of MoE (mixture of experts)
☆47 · Updated 4 years ago
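For orientation, here is a minimal sketch of what a top-k gated mixture-of-experts layer typically looks like in PyTorch. It is illustrative only, not this repository's actual API: the class name `MoELayer`, the expert count, and the softmax-over-top-k routing are assumptions.

```python
# Minimal top-k gated MoE sketch (illustrative assumptions, not this repo's API)
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts)  # router: token -> expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, dim)
        scores = self.gate(x)                            # (batch, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # each token picks k experts
        weights = F.softmax(weights, dim=-1)             # renormalize over the chosen k
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                    # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(8, 64)
print(MoELayer(64)(x).shape)  # torch.Size([8, 64])
```

Production implementations typically add a load-balancing auxiliary loss and batched expert dispatch instead of the per-expert loop above; several of the repositories listed below implement those variants.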
Alternatives and similar repositories for Pytorch_mixture-of-experts
Users interested in Pytorch_mixture-of-experts are comparing it to the repositories listed below
- Implementation of Infini-Transformer in PyTorch ☆111 · Updated 7 months ago
- PyTorch implementation of Soft MoE by Google Brain from "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆77 · Updated last year
- Implementation of "TableFormer: Robust Transformer Modeling for Table-Text Encoding" in PyTorch ☆39 · Updated 3 years ago
- Several types of attention modules written in PyTorch for learning purposes ☆53 · Updated 11 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 11 months ago
- Some personal experiments around routing tokens to different autoregressive attention modules, akin to mixture-of-experts ☆120 · Updated 10 months ago
- Model Stock: All we need is just a few fine-tuned models ☆122 · Updated 3 weeks ago
- A repository for DenseSSMs ☆88 · Updated last year
- Implementation of the general framework for AMIE, from the paper "Towards Conversational Diagnostic AI", out of Google DeepMind ☆67 · Updated 11 months ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability ☆96 · Updated 8 months ago
- Playground for Transformers ☆52 · Updated last year
- ☆182 · Updated 11 months ago
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆103 · Updated last year
- A simple PyTorch implementation of high-performance Multi-Query Attention ☆16 · Updated 2 years ago
- PyTorch implementation of the paper "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated last week
- Repository for the paper "TiC-CLIP: Continual Training of CLIP Models" ☆103 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆177 · Updated 11 months ago
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆56 · Updated last year
- Video descriptions of research papers relating to foundation models and scaling ☆31 · Updated 2 years ago
- Timm model explorer ☆41 · Updated last year
- Implementation of Agent Attention in PyTorch ☆91 · Updated last year
- This repository contains papers for a comprehensive survey on accelerated generation techniques in Large Language Models (LLMs) ☆11 · Updated last year
- Implementation of MoE-Mamba from the paper "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Ze… ☆110 · Updated 3 weeks ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆104 · Updated last week
- A PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆61 · Updated 2 years ago
- Official implementation of the paper "What Matters in Transformers? Not All Attention is Needed" ☆174 · Updated 5 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- Implementation of a modular, high-performance, and simplistic Mamba for high-speed applications ☆36 · Updated 9 months ago
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆37 · Updated 6 months ago