Ther-nullptr / Awesome-Transformer-Accleration
Paper list for acceleration of transformers
☆13 · Updated 2 years ago
Alternatives and similar repositories for Awesome-Transformer-Accleration
Users interested in Awesome-Transformer-Accleration are comparing it to the libraries listed below.
- ☆28 · Updated last month
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo ☆17 · Updated 2 years ago
- ☆14 · Updated last week
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆55 · Updated last year
- ☆21 · Updated 3 years ago
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆33 · Updated last year
- DOSA: Differentiable Model-Based One-Loop Search for DNN Accelerators ☆18 · Updated last year
- Binary Neural Network-based COVID-19 Face-Mask Wear and Positioning Predictor on Edge Devices ☆12 · Updated 4 years ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆54 · Updated last year
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆32 · Updated 9 months ago
- TiledLower is a Dataflow Analysis and Codegen Framework written in Rust. ☆14 · Updated 11 months ago
- TileFlow is a performance analysis tool based on Timeloop for fusion dataflows ☆62 · Updated last year
- ☆14 · Updated 4 years ago
- You Only Search Once: On Lightweight Differentiable Architecture Search for Resource-Constrained Embedded Platforms ☆11 · Updated 2 years ago
- ☆13 · Updated 4 years ago
- ☆14 · Updated 3 years ago
- Canvas: End-to-End Kernel Architecture Search in Neural Networks ☆26 · Updated 11 months ago
- Optimize tensor programs fast with Felix, a gradient descent autotuner. ☆28 · Updated last year
- ☆19 · Updated 4 years ago
- Repository for artifact evaluation of ASPLOS 2023 paper "SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning" ☆26 · Updated 2 years ago
- Official repository for "IPDPS'24 QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices". ☆20 · Updated last year
- ☆39 · Updated 5 years ago
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … ☆29 · Updated 10 months ago
- ASPLOS'24: Optimal Kernel Orchestration for Tensor Programs with Korch ☆38 · Updated 7 months ago
- Agile hardware-software co-design ☆52 · Updated 3 years ago
- Boost hardware utilization for ML training workloads via Inter-model Horizontal Fusion ☆32 · Updated last year
- Domain-Specific Architecture Generator 2 ☆21 · Updated 3 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… ☆27 · Updated 2 years ago
- Supplemental materials for the ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆23 · Updated 6 months ago
- ☆23 · Updated 2 years ago