zugexiaodui / torch_flops
A library for calculating the FLOPs of the forward() pass, based on torch.fx.
☆128 · Updated 6 months ago
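As a rough illustration of the quantity a FLOP counter like this reports (a hand-derived sketch of the arithmetic for one layer type, not torch_flops's actual API):

```python
# Hand-derived FLOP estimate for a fully connected (Linear) layer.
# Libraries such as torch_flops compute per-op counts like this by
# walking the traced torch.fx graph; this standalone function only
# illustrates the arithmetic for y = x @ W.T + b.

def linear_flops(in_features: int, out_features: int,
                 batch: int = 1, bias: bool = True) -> int:
    """FLOPs for a Linear layer on a (batch, in_features) input.

    Each output element needs in_features multiplies and
    (in_features - 1) additions, plus one more add for the bias.
    """
    per_output = 2 * in_features - 1 + (1 if bias else 0)
    return batch * out_features * per_output

# Example: a 512 -> 1024 Linear layer with bias, batch size 8
print(linear_flops(512, 1024, batch=8))  # 8 * 1024 * 1024 = 8388608
```

Conventions differ between tools: some report MACs (multiply-accumulates, roughly half the FLOPs) instead, which is worth checking before comparing numbers across libraries.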
Alternatives and similar repositories for torch_flops
Users interested in torch_flops are comparing it to the libraries listed below.
- Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficien…" ☆123 · Updated 3 weeks ago
- Implementation of Post-training Quantization on Diffusion Models (CVPR 2023) ☆140 · Updated 2 years ago
- ☆197 · Updated last year
- An efficient pytorch implementation of selective scan in one file, works with both cpu and gpu, with corresponding mathematical derivatio… ☆96 · Updated last year
- [NeurIPS 2023] Structural Pruning for Diffusion Models ☆202 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆328 · Updated 7 months ago
- Fast Multi-dimensional Sparse Attention ☆625 · Updated last month
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆234 · Updated 2 months ago
- Curated list of methods that focuses on improving the efficiency of diffusion models ☆45 · Updated last year
- calflops is designed to calculate FLOPs, MACs, and parameters in various neural networks, such as Linear, CNN, RNN, GCN, Transformer… ☆879 · Updated last year
- Causal depthwise conv1d in CUDA, with a PyTorch interface ☆605 · Updated last month
- [CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer ☆74 · Updated last year
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks". ☆247 · Updated 2 years ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆308 · Updated 2 weeks ago
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆394 · Updated last week
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆120 · Updated 6 months ago
- When it comes to optimizers, it's always better to be safe than sorry ☆373 · Updated last week
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer ☆350 · Updated 2 years ago
- PyTorch implementation of PTQ4DiT (https://arxiv.org/abs/2405.16005) ☆33 · Updated 10 months ago
- Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation. In CVPR 2022. ☆133 · Updated 3 years ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆63 · Updated last year
- DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing (WACV 2025) ☆11 · Updated 10 months ago
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer" ☆402 · Updated 9 months ago
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆96 · Updated 2 years ago
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆168 · Updated last year
- Learnable Semi-structured Sparsity for Vision Transformers and Diffusion Transformers ☆14 · Updated 7 months ago
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆77 · Updated last year
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆330 · Updated 9 months ago
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di…" ☆66 · Updated last year
- Join the High Accuracy Club on ImageNet with A Binary Neural Network Ticket ☆70 · Updated 2 years ago