JimyMa / FuncTs
[DAC2024] A Holistic Functionalization Approach to Optimizing Imperative Tensor Programs in Deep Learning
☆15 · Updated last year
Alternatives and similar repositories for FuncTs
Users interested in FuncTs are comparing it to the libraries listed below.
- ☆140 · Updated this week
- Summary of some awesome work for optimizing LLM inference ☆139 · Updated this week
- ☆45 · Updated last year
- ☆32 · Updated last year
- ☆29 · Updated 8 months ago
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) ☆47 · Updated 11 months ago
- Helpful kernel tutorials and examples for tile-based GPU programming ☆202 · Updated this week
- ☆209 · Updated last month
- DeepSeek-V3/R1 inference performance simulator ☆169 · Updated 8 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of …) ☆292 · Updated 5 months ago
- Open ABI and FFI for Machine Learning Systems ☆211 · Updated this week
- Flash Attention from Scratch on CUDA Ampere ☆84 · Updated 3 months ago
- Summary of the Specs of Commonly Used GPUs for Training and Inference of LLM ☆67 · Updated 3 months ago
- Tile-based language built for AI computation across all scales ☆82 · Updated this week
- WaferLLM: Large Language Model Inference at Wafer Scale ☆76 · Updated last month
- ☆15 · Updated last year
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆50 · Updated 2 months ago
- An Easy-to-understand TensorOp Matmul Tutorial ☆394 · Updated last month
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin ☆76 · Updated last week
- Building the Virtuous Cycle for AI-driven LLM Systems ☆93 · Updated this week
- ☆90 · Updated 8 months ago
- ☆18 · Updated last year
- A lightweight design for computation-communication overlap ☆190 · Updated last month
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆164 · Updated last year
- A torch compile backend for multi-targets ☆40 · Updated this week
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) ☆67 · Updated 7 months ago
- Tutorials about polyhedral compilation ☆58 · Updated last month
- Code release for AdapMoE, accepted by ICCAD 2024 ☆34 · Updated 7 months ago
- High performance Transformer implementation in C++ ☆142 · Updated 10 months ago
- ☆16 · Updated 9 months ago