YconquestY / Needle
Imperative deep learning framework with custom GPU and CPU backends
☆30 · Updated last year
Alternatives and similar repositories for Needle
Users interested in Needle are comparing it to the libraries listed below.
- ☆87 · Updated 3 months ago
- LLM theoretical performance analysis tool supporting parameter, FLOPs, memory, and latency analysis. ☆96 · Updated last week
- Puzzles for learning Triton; play with minimal environment configuration! ☆367 · Updated 6 months ago
- Cataloging released Triton kernels. ☆238 · Updated 5 months ago
- High-performance Transformer implementation in C++. ☆125 · Updated 5 months ago
- A PyTorch-like deep learning framework. Just for fun. ☆157 · Updated last year
- ☆170 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆38 · Updated 2 weeks ago
- Implement Flash Attention using CuTe. ☆87 · Updated 6 months ago
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆130 · Updated last year
- Code release for the book "Efficient Training in PyTorch". ☆69 · Updated 2 months ago
- Learning material for CMU 10-714: Deep Learning Systems. ☆256 · Updated last year
- Dynamic memory management for serving LLMs without PagedAttention. ☆397 · Updated 3 weeks ago
- A lightweight design for computation-communication overlap. ☆143 · Updated last week
- ☆48 · Updated last month
- Code and examples for "CUDA - From Correctness to Performance". ☆100 · Updated 8 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆80 · Updated last month
- Ring-attention experiments. ☆144 · Updated 8 months ago
- Examples and exercises from the book Programming Massively Parallel Processors: A Hands-on Approach by David B. Kirk and Wen-mei W. Hwu (T… ☆69 · Updated 4 years ago
- A minimal cache manager for PagedAttention, on top of llama3. ☆92 · Updated 10 months ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS. ☆377 · Updated last month
- ⚡️FFPA: extends FlashAttention-2 with Split-D, achieving ~O(1) SRAM complexity for large head dimensions; 1.8x~3x↑ vs. SDPA. ☆186 · Updated last month
- A simple calculation for LLM MFU. ☆38 · Updated 3 months ago
- Optimizes softmax in Triton for many cases. ☆21 · Updated 9 months ago
- ☆212 · Updated 11 months ago
- An easy-to-understand TensorOp matmul tutorial. ☆364 · Updated 9 months ago
- A minimal implementation of vLLM. ☆44 · Updated 11 months ago
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of papers… ☆255 · Updated 3 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆224 · Updated 2 weeks ago
- A summary of some awesome work on optimizing LLM inference. ☆77 · Updated 3 weeks ago
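One entry above is a simple calculator for LLM MFU (Model FLOPs Utilization). For context, here is a minimal sketch of the common estimate, assuming the widely used ~6·N FLOPs-per-token approximation for dense transformer training; the function name and example numbers are illustrative, not taken from that repository:

```python
def mfu(n_params: float, tokens_per_sec: float, peak_flops: float) -> float:
    """Model FLOPs Utilization: achieved FLOPs/s divided by hardware peak.

    Uses the common 6 * n_params approximation for training FLOPs per token
    (forward + backward pass of a dense transformer, ignoring attention FLOPs).
    """
    achieved_flops_per_sec = 6 * n_params * tokens_per_sec
    return achieved_flops_per_sec / peak_flops

# Illustrative example: a 7B-parameter model training at 5,000 tokens/s
# on a GPU with 312 TFLOP/s of BF16 peak throughput.
print(round(mfu(7e9, 5_000, 312e12), 3))  # prints 0.673
```

Inference-time MFU analyses often swap the factor 6 for 2 (forward pass only), which is the kind of distinction the analysis tools listed above account for.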