SteveTsui / Q-DETR
☆34 · Updated last year
Alternatives and similar repositories for Q-DETR
Users interested in Q-DETR are comparing it to the repositories listed below.
- [CVPR 2024] PTQ4SAM: Post-Training Quantization for Segment Anything ☆78 · Updated last year
- PyTorch code and checkpoints release for VanillaKD: https://arxiv.org/abs/2305.15781 ☆75 · Updated last year
- The official implementation of the AAAI 2024 paper Bi-ViT. ☆10 · Updated last year
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- Official implementation of "SViT: Revisiting Token Pruning for Object Detection and Instance Segmentation" ☆32 · Updated last year
- [ICLR 2025] Official PyTorch implementation of "DECO: Query-Based End-to-End Object Detection with ConvNets" ☆53 · Updated 5 months ago
- [CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer ☆72 · Updated last year
- [ECCV 2024] AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer ☆29 · Updated 7 months ago
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆96 · Updated 2 years ago
- ☆25 · Updated 3 years ago
- [ICCV 2023] Group DETR: Fast DETR Training with Group-Wise One-to-Many Assignment ☆43 · Updated last year
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆70 · Updated 11 months ago
- RepNeXt: A Fast Multi-Scale CNN using Structural Reparameterization ☆42 · Updated 9 months ago
- [CVPR 2023] Official implementation of the paper "Lite DETR: An Interleaved Multi-Scale Encoder for Efficient DETR" ☆197 · Updated 2 years ago
- [ICML 2024] Official PyTorch implementation of "SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-paramete…" ☆106 · Updated 10 months ago
- [ICCV 2023] DETRDistill: A Universal Knowledge Distillation Framework for DETR-families ☆57 · Updated last year
- ☆46 · Updated last year
- ☆11 · Updated 2 years ago
- ☆12 · Updated last year
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… ☆20 · Updated last year
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆83 · Updated last year
- ☆66 · Updated 2 years ago
- The codebase for the paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer" ☆24 · Updated 7 months ago
- [ICCV'23] Cascade-DETR: Delving into High-Quality Universal Object Detection ☆98 · Updated last year
- [NeurIPS 2023] MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory ☆69 · Updated last year
- ☆51 · Updated 10 months ago
- Implementation of "Enhancing Your Trained DETRs with Box Refinement" ☆59 · Updated last year
- ☆52 · Updated 2 years ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆31 · Updated last year
- Official implementation of the paper "Masked Distillation with Receptive Tokens", ICLR 2023. ☆69 · Updated 2 years ago