bojone / softtopk
differentiable top-k operator
☆22 · Updated last year
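softtopk implements a differentiable (soft) relaxation of the top-k operator, so that selecting the k largest entries of a score vector can sit inside a gradient-based training loop. The sketch below illustrates the general idea with one common relaxation; the sigmoid-plus-threshold-bisection scheme, the function name `soft_topk_mask`, and its parameters are illustrative assumptions, not the repo's actual algorithm or API.

```python
# A minimal sketch of one common soft top-k relaxation (an illustrative
# assumption, NOT necessarily the algorithm used in bojone/softtopk):
# relax the hard top-k indicator to m_i = sigmoid((s_i - t) / tau) and
# find the threshold t by bisection so that sum(m) ≈ k. Gradients flow
# to the scores through the sigmoid; t itself is treated as a constant.
import torch

def soft_topk_mask(scores: torch.Tensor, k: int, tau: float = 0.1,
                   iters: int = 50) -> torch.Tensor:
    s = scores.detach()
    lo = s.min() - 10 * tau   # here sigmoid sum ≈ n (threshold far below all scores)
    hi = s.max() + 10 * tau   # here sigmoid sum ≈ 0 (threshold far above all scores)
    for _ in range(iters):    # bisection: the sum is monotone decreasing in t
        t = (lo + hi) / 2
        if torch.sigmoid((s - t) / tau).sum() > k:
            lo = t            # mask too permissive -> raise the threshold
        else:
            hi = t            # mask too restrictive -> lower the threshold
    t = (lo + hi) / 2
    # Recompute on the live tensor so gradients reach `scores`.
    return torch.sigmoid((scores - t) / tau)

# Usage: a soft mask that sums to ~k and is differentiable w.r.t. the scores.
x = torch.randn(10, requires_grad=True)
m = soft_topk_mask(x, k=3)
(m * x).sum().backward()      # x.grad is populated
```

As `tau` shrinks, the mask approaches the hard top-k indicator but its gradients become sharper and noisier, so the temperature trades off fidelity against trainability.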
Alternatives and similar repositories for softtopk
Users interested in softtopk are comparing it to the libraries listed below.
- A torch-based implementation of K-Means and K-Means++ ☆17 · Updated 5 years ago
- Triton implementation of bi-directional (non-causal) linear attention ☆64 · Updated 11 months ago
- flex-block-attn: an efficient block sparse attention computation library ☆107 · Updated last month
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- Self-reproduction code for the paper "Reducing Transformer Key-Value Cache Size with Cross-Layer Attention" (MIT CSAIL) ☆18 · Updated last year
- Keras implementation of Finite Scalar Quantization ☆84 · Updated 2 years ago
- A repository for DenseSSMs ☆88 · Updated last year
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆137 · Updated last month
- A PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- ☆28 · Updated 4 months ago
- The official GitHub page for the survey paper "Discrete Tokenization for Multimodal LLMs: A Comprehensive Survey" ☆77 · Updated 5 months ago
- [EMNLP 2022] Official implementation of Transnormer from the paper "The Devil in Linear Transformer" ☆64 · Updated 2 years ago
- ☆201 · Updated 2 years ago
- ☆104 · Updated 11 months ago
- [NeurIPS 2022 Spotlight] The official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆73 · Updated 3 years ago
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated 2 years ago
- The official implementation of the paper "Reducing Fine-Tuning Memory Overhead by Approximate and Memory-Sharing Backpropagation" ☆21 · Updated last year
- WeGeFT: Weight‑Generative Fine‑Tuning for Multi‑Faceted Efficient Adaptation of Large Models ☆22 · Updated 6 months ago
- Benchmarking attention mechanisms in Vision Transformers ☆19 · Updated 3 years ago
- ☆107 · Updated last year
- A demo and a series of documents for learning diffusion models ☆42 · Updated 2 years ago
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Updated 4 months ago
- ☆19 · Updated last year
- ☆32 · Updated last year
- Official code for "Rethinking Diffusion Model in High Dimension" ☆24 · Updated 8 months ago
- Mixture of Attention Heads ☆51 · Updated 3 years ago
- A Tight-fisted Optimizer ☆50 · Updated 2 years ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆68 · Updated 2 years ago
- The official repo of continuous speculative decoding ☆31 · Updated 10 months ago
- Explore how to get a VQ-VAE model efficiently! ☆67 · Updated 6 months ago