xxxxyu / FlexNN
Code for ACM MobiCom 2024 paper "FlexNN: Efficient and Adaptive DNN Inference on Memory-Constrained Edge Devices"
☆41 · Updated last month
Related projects
Alternatives and complementary repositories for FlexNN
- ☆16 · Updated 8 months ago
- Hands-on model tuning with TVM, profiled on a Mac M1, an x86 CPU, and a GTX-1080 GPU ☆41 · Updated last year
- A set of examples around MegEngine ☆31 · Updated 11 months ago
- ☆79 · Updated last year
- ☆59 · Updated 4 months ago
- Examples of CUDA kernels implemented with Cutlass CuTe ☆101 · Updated last week
- ☆22 · Updated 7 months ago
- Codes & examples for "CUDA - From Correctness to Performance"☆70Updated last month
- My study note for mlsys☆14Updated 2 weeks ago
- ☆29 · Updated last year
- Experimenting with GEMM in TVM ☆84 · Updated last year
- Tutorials for writing high-performance GPU operators in AI frameworks ☆123 · Updated last year
- A trimmed-down flash-attention implemented with cutlass, intended as a teaching example ☆32 · Updated 3 months ago
- TiledCUDA is a highly efficient kernel template library designed to elevate CUDA C's level of abstraction for processing tiles ☆157 · Updated this week
- Swin Transformer C++ Implementation ☆54 · Updated 3 years ago
- Some common CUDA kernel implementations (not the fastest; a minimal kernel in this vein is sketched after this list) ☆14 · Updated 3 weeks ago
- A simple Transformer model implemented in C++, after "Attention Is All You Need" ☆40 · Updated 3 years ago
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆52 · Updated 2 years ago
- ☆18 · Updated 3 years ago
- 🐱 ncnn int8 model quantization evaluation ☆12 · Updated 2 years ago
- A deep learning inference engine with a layered, decoupled design ☆60 · Updated 3 months ago
- ☆144 · Updated last year
- ☆12 · Updated this week
- ☆52 · Updated 2 weeks ago
- FP8 flash attention on the Ada architecture, implemented with the cutlass library ☆52 · Updated 3 months ago
- ☆103 · Updated 7 months ago
- NART ("NART is not A RunTime"), a deep learning inference framework ☆38 · Updated last year
- ☆110 · Updated 2 years ago
- Solutions for Programming Massively Parallel Processors, 2nd edition ☆27 · Updated 2 years ago
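
Several of the entries above (the CUDA kernel notes, the GEMM/TVM experiments, the cutlass flash-attention projects) center on hand-written tiled GPU kernels. As a shared point of reference, here is a minimal sketch of a shared-memory tiled SGEMM in CUDA, the baseline such tutorials typically start from. Everything in it (the name `sgemm_tiled`, the 16×16 tile, the toy problem size) is illustrative and not taken from any repository listed here.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Tile edge length; each block computes a TILE x TILE patch of C.
// (Hypothetical toy example, not code from any repository listed above.)
constexpr int TILE = 16;

// C = A * B for row-major M x K and K x N matrices.
__global__ void sgemm_tiled(const float* A, const float* B, float* C,
                            int M, int N, int K) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;  // row of C this thread owns
    int col = blockIdx.x * TILE + threadIdx.x;  // column of C this thread owns
    float acc = 0.0f;

    // March a pair of tiles along the K dimension.
    for (int t = 0; t < (K + TILE - 1) / TILE; ++t) {
        int aCol = t * TILE + threadIdx.x;
        int bRow = t * TILE + threadIdx.y;
        As[threadIdx.y][threadIdx.x] = (row < M && aCol < K) ? A[row * K + aCol] : 0.0f;
        Bs[threadIdx.y][threadIdx.x] = (bRow < K && col < N) ? B[bRow * N + col] : 0.0f;
        __syncthreads();  // both tiles fully loaded before use

        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();  // done reading before the next load overwrites
    }

    if (row < M && col < N)
        C[row * N + col] = acc;
}

int main() {
    const int M = 256, N = 256, K = 256;
    float *A, *B, *C;
    cudaMallocManaged(&A, M * K * sizeof(float));
    cudaMallocManaged(&B, K * N * sizeof(float));
    cudaMallocManaged(&C, M * N * sizeof(float));
    for (int i = 0; i < M * K; ++i) A[i] = 1.0f;
    for (int i = 0; i < K * N; ++i) B[i] = 1.0f;

    dim3 block(TILE, TILE);
    dim3 grid((N + TILE - 1) / TILE, (M + TILE - 1) / TILE);
    sgemm_tiled<<<grid, block>>>(A, B, C, M, N, K);
    cudaDeviceSynchronize();

    printf("C[0] = %.1f (expected %d)\n", C[0], K);  // all-ones inputs: every entry equals K
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

Compile with something like `nvcc gemm.cu -o gemm`. The cutlass/CuTe and TVM projects above exist precisely to push past this baseline, with vectorized loads, double buffering, and tensor-core MMA instructions.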