JimyMa / FuncTs
[DAC 2024] A Holistic Functionalization Approach to Optimizing Imperative Tensor Programs in Deep Learning
☆15, updated last year
Alternatives and similar repositories for FuncTs
Users interested in FuncTs are comparing it to the repositories listed below.
- Tile-based language built for AI computation across all scales (☆106, updated this week)
- Summary of some awesome work for optimizing LLM inference (☆151, updated 3 weeks ago)
- Open ABI and FFI for Machine Learning Systems (☆262, updated this week)
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin (☆78, updated 2 weeks ago)
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive (☆51, updated 2 weeks ago)
- A lightweight design for computation-communication overlap (☆200, updated 2 months ago)
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) (☆70, updated 8 months ago)
- Summary of the specs of commonly used GPUs for training and inference of LLMs (☆68, updated 4 months ago)
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) (☆55, updated last year)
- High-performance Transformer implementation in C++ (☆146, updated 11 months ago)
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … (☆302, updated 6 months ago)
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer (☆150, updated 3 months ago)
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling (☆51, updated last week)
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) (☆168, updated last year)
- Tutorials about polyhedral compilation (☆58, updated 2 months ago)
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) (☆50, updated last year)
- WaferLLM: Large Language Model Inference at Wafer Scale (☆78, updated last month)
- Examples of CUDA implementations using CUTLASS CuTe (☆263, updated 5 months ago)
- Flash Attention from Scratch on CUDA Ampere (☆102, updated 3 months ago)
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache (☆72, updated last week)
- ASPLOS'24: Optimal Kernel Orchestration for Tensor Programs with Korch (☆40, updated 9 months ago)
- Compiler for Dynamic Neural Networks (☆46, updated 2 years ago)