brain-bzh / PEFSL
☆12 · Updated 7 months ago
Alternatives and similar repositories for PEFSL
Users interested in PEFSL are comparing it to the repositories listed below.
- List of papers related to Vision Transformer quantization and hardware acceleration in recent AI conferences and journals. ☆102 · Updated last year
- A plug-and-play, lightweight tool for the inference optimization of deep neural networks. ☆47 · Updated 3 months ago
- ☆11 · Updated 2 years ago
- FPGA-based Vision Transformer accelerator (Harvard CS205). ☆149 · Updated 11 months ago
- ☆32 · Updated 2 weeks ago
- ☆15 · Updated 10 months ago
- PYNQ-Torch: a framework to develop PyTorch accelerators on the PYNQ platform. ☆76 · Updated 5 years ago
- [CVPR 2025 Highlight] FIMA-Q: Post-Training Quantization for Vision Transformers by Fisher Information Matrix Approximation. ☆25 · Updated 7 months ago
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts. ☆132 · Updated last year
- Implementation of Microscaling data formats in SystemVerilog. ☆29 · Updated 7 months ago
- A collection of tutorials for the fpgaConvNet framework. ☆49 · Updated last year
- From PyTorch model to C++ for Vitis HLS. ☆20 · Updated this week
- High Granularity Quantization for Ultra-Fast Machine Learning Applications on FPGAs. ☆39 · Updated 6 months ago
- Harmonic-NAS: Hardware-Aware Multimodal Neural Architecture Search on Resource-constrained Devices (ACML 2023). ☆16 · Updated last year
- Low-Precision YOLO on PYNQ with FINN. ☆34 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- ☆46 · Updated 2 years ago
- C++ code for an HLS FPGA implementation of a transformer. ☆20 · Updated last year
- Torch2Chip (MLSys 2024). ☆55 · Updated 10 months ago
- ☆11 · Updated 3 years ago
- https://www.hackster.io/Altaga/facemask-detector-f0c10f ☆13 · Updated 3 years ago
- Open source of the MSD framework. ☆16 · Updated 2 years ago
- ☆18 · Updated 8 months ago
- FPGA-based hardware accelerator for Vision Transformer (ViT), with Hybrid-Grained Pipeline. ☆124 · Updated last year
- Chameleon: A MatMul-Free TCN Accelerator for End-to-End Few-Shot and Continual Learning from Sequential Data. ☆25 · Updated 8 months ago
- [HPCA 2026 Best Paper Candidate] Official implementation of "Focus: A Streaming Concentration Architecture for Efficient Vision-Language …" ☆29 · Updated this week
- DeiT implementation for Q-ViT. ☆25 · Updated 9 months ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design. ☆127 · Updated 2 years ago
- ☆62 · Updated 3 years ago
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX. ☆175 · Updated this week