xxxxyu / FlexNN
Code for ACM MobiCom 2024 paper "FlexNN: Efficient and Adaptive DNN Inference on Memory-Constrained Edge Devices"
☆53 · Updated 5 months ago
Alternatives and similar repositories for FlexNN
Users that are interested in FlexNN are comparing it to the libraries listed below
- Hands-on model tuning with TVM, profiled on a Mac M1, x86 CPU, and GTX-1080 GPU. ☆48 · Updated 2 years ago
- LLM theoretical performance analysis tool supporting parameter, FLOPs, memory, and latency analysis. ☆96 · Updated last week
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆48 · Updated 2 months ago
- Step-by-step SGEMM optimization with CUDA. ☆19 · Updated last year
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆54 · Updated this week
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer. ☆92 · Updated 3 weeks ago
- Playing with GEMM in TVM. ☆91 · Updated last year
- Implementing custom operators in PyTorch with CUDA/C++. ☆63 · Updated 2 years ago
- ☆69 · Updated 7 months ago
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆54 · Updated last week
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance.⚡️ ☆80 · Updated last month
- A llama model inference framework implemented in CUDA C++. ☆58 · Updated 7 months ago
- A summary of notable work on optimizing LLM inference. ☆77 · Updated 2 weeks ago
- A list of papers related to edge AI inference. ☆95 · Updated last year
- A layered, decoupled deep learning inference engine. ☆73 · Updated 4 months ago
- A set of examples around MegEngine. ☆31 · Updated last year
- ☆21 · Updated 4 years ago
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆47 · Updated last week
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆183 · Updated 4 months ago
- Code repository for "Evaluating Quantized Large Language Models". ☆124 · Updated 9 months ago
- Implementing Flash Attention using CuTe. ☆87 · Updated 6 months ago
- ⚡️FFPA: Extends FlashAttention-2 with Split-D, achieving ~O(1) SRAM complexity for large headdim, 1.8x~3x↑ vs SDPA. ☆186 · Updated last month
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆137 · Updated last month
- Post-training quantization for Vision Transformers. ☆219 · Updated 2 years ago
- This repository is a read-only mirror of https://gitlab.arm.com/kleidi/kleidiai ☆51 · Updated this week
- ☆75 · Updated 5 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆38 · Updated last week
- A study of Ampere's sparse matmul. ☆18 · Updated 4 years ago
- ☆149 · Updated 2 years ago
- ☆146 · Updated 5 months ago