xxxxyu / FlexNN
Code for ACM MobiCom 2024 paper "FlexNN: Efficient and Adaptive DNN Inference on Memory-Constrained Edge Devices"
☆50 · Updated 3 weeks ago
Alternatives and similar repositories for FlexNN:
Users interested in FlexNN are comparing it to the libraries listed below.
- Hands-on model tuning with TVM, profiled on a Mac M1, an x86 CPU, and a GTX 1080 GPU. ☆45 · Updated last year
- LLM theoretical performance analysis tools, supporting parameter, FLOPs, memory, and latency analysis (a back-of-the-envelope sketch of this kind of estimate follows this list). ☆78 · Updated last month
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆175 · Updated 3 weeks ago
- Play GEMM with TVM. ☆87 · Updated last year
- Implement Flash Attention using CuTe (the online-softmax core these kernels share is sketched after this list). ☆69 · Updated 2 months ago
- CPU memory compiler and parallel programming. ☆25 · Updated 3 months ago
- 📚 FFPA: Yet another faster Flash Prefill Attention with O(1) SRAM complexity for headdim > 256, 1.8x–3x faster than SDPA EA. ☆106 · Updated this week
- [ACL 2024] A novel QAT with self-distillation framework to enhance ultra-low-bit LLMs. ☆100 · Updated 9 months ago
- Official PyTorch implementation of FlatQuant: Flatness Matters for LLM Quantization. ☆102 · Updated 3 weeks ago
- Examples of CUDA implementations using CUTLASS CuTe. ☆138 · Updated 2 weeks ago
- FP8 flash attention implemented with the CUTLASS repository on the Ada architecture. ☆53 · Updated 6 months ago
- A set of examples around MegEngine. ☆31 · Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer. ☆88 · Updated 11 months ago
- A simple Transformer model implemented in C++, after "Attention Is All You Need". ☆45 · Updated 3 years ago
- Code repository for "Evaluating Quantized Large Language Models". ☆116 · Updated 5 months ago
- Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance (the underlying tiling idea is sketched after this list). ☆52 · Updated 2 weeks ago
- Summary of some awesome work for optimizing LLM inference. ☆57 · Updated 2 weeks ago
- Standalone Flash Attention v2 kernel without a libtorch dependency. ☆104 · Updated 5 months ago
- A pared-down flash-attention implementation built with CUTLASS, intended as a teaching example. ☆35 · Updated 6 months ago
- An easy-to-understand TensorOp matmul tutorial. ☆316 · Updated 5 months ago
- Optimize softmax in Triton for many cases. ☆17 · Updated 5 months ago
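
The per-token arithmetic behind tools like the LLM theoretical-performance entry above is simple enough to sketch. The function below is an illustrative roofline-style estimate, not that repository's actual API; the function name and the hardware numbers in the example are assumptions.

```python
# A minimal sketch of back-of-the-envelope LLM decode analysis
# (params -> FLOPs, memory traffic, latency). Illustrative only.

def decode_step_estimate(n_params: float, bytes_per_param: int,
                         peak_tflops: float, mem_bw_gbs: float) -> dict:
    """Estimate per-token decode cost for a dense transformer.

    Rule of thumb: ~2 FLOPs per parameter per generated token, and the
    full weight set is read once per token, so decode is usually
    memory-bandwidth bound.
    """
    flops = 2.0 * n_params                      # FLOPs per token
    mem_bytes = n_params * bytes_per_param      # weight traffic per token
    t_compute = flops / (peak_tflops * 1e12)    # seconds if compute bound
    t_memory = mem_bytes / (mem_bw_gbs * 1e9)   # seconds if bandwidth bound
    latency = max(t_compute, t_memory)          # roofline: slower side wins
    return {
        "flops_per_token": flops,
        "bytes_per_token": mem_bytes,
        "latency_ms": latency * 1e3,
        "bound": "memory" if t_memory > t_compute else "compute",
    }

# Example: a 7B-parameter model in fp16 on a GPU with ~300 TFLOPS and
# ~2 TB/s of memory bandwidth (assumed, illustrative numbers).
print(decode_step_estimate(7e9, 2, 300.0, 2000.0))
```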
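Several of the flash-attention entries above (the CuTe, CUTLASS, and standalone v2 kernels) rest on the same online-softmax trick: scores are consumed block by block with a running max and running sum, so a full attention row never has to sit in SRAM at once. Below is a minimal NumPy sketch of just that trick, not any of those repositories' code.

```python
import numpy as np

def online_softmax(scores: np.ndarray, block: int = 128) -> np.ndarray:
    """Numerically stable softmax computed one block at a time."""
    m = -np.inf          # running max of all scores seen so far
    s = 0.0              # running sum of exp(score - m)
    for start in range(0, scores.size, block):
        chunk = scores[start:start + block]
        m_new = max(m, float(chunk.max()))
        # rescale the old sum to the new max, then add this block's terms
        s = s * np.exp(m - m_new) + np.exp(chunk - m_new).sum()
        m = m_new
    # second pass: normalize with the final running statistics
    return np.exp(scores - m) / s

x = np.random.randn(1000).astype(np.float32)
ref = np.exp(x - x.max()) / np.exp(x - x.max()).sum()
assert np.allclose(online_softmax(x), ref, atol=1e-6)
```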
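Likewise, the HGEMM and TensorOp matmul tutorials above all start from the same blocked-GEMM loop nest, where each tile is staged into fast memory (shared memory and registers on a GPU) before the inner product. A plain NumPy sketch of that structure, with an arbitrary assumed tile size:

```python
import numpy as np

def tiled_matmul(a: np.ndarray, b: np.ndarray, tile: int = 32) -> np.ndarray:
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):            # one output row-tile per iteration
        for j in range(0, n, tile):        # one output column-tile
            acc = np.zeros((min(tile, m - i), min(tile, n - j)), dtype=a.dtype)
            for p in range(0, k, tile):    # march along the K dimension
                # on a GPU, these two slices are the tiles staged into
                # shared memory before the Tensor Core MMA instruction
                acc += a[i:i + tile, p:p + tile] @ b[p:p + tile, j:j + tile]
            c[i:i + tile, j:j + tile] = acc
    return c

a = np.random.randn(100, 70).astype(np.float32)
b = np.random.randn(70, 90).astype(np.float32)
assert np.allclose(tiled_matmul(a, b), a @ b, atol=1e-4)
```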