YconquestY / Needle
Imperative deep learning framework with custom GPU and CPU backends
☆28 · Updated last year
Related projects:
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆159 · Updated 3 months ago
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆153 · Updated this week
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆118 · Updated last year
- An easy-to-understand TensorOp matmul tutorial ☆265 · Updated this week
- High-performance Transformer implementation in C++. ☆67 · Updated this week
- Examples and exercises from the book Programming Massively Parallel Processors: A Hands-on Approach, by David B. Kirk and Wen-mei W. Hwu (T… ☆33 · Updated 3 years ago
- Code base and slides for ECE408: Applied Parallel Programming on GPU ☆113 · Updated 3 years ago
- Learning how CUDA works ☆150 · Updated last month
- CUDA Matrix Multiplication Optimization ☆118 · Updated 2 months ago
- Collection of kernels written in the Triton language ☆48 · Updated 2 weeks ago
- A low-latency & high-throughput serving engine for LLMs ☆174 · Updated last week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆186 · Updated last month
- A summary of awesome work on optimizing LLM inference ☆26 · Updated this week
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆84 · Updated 2 months ago
- Performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆20 · Updated last week
- Code & examples for "CUDA - From Correctness to Performance" ☆45 · Updated 2 weeks ago
- Ring-attention experiments ☆89 · Updated 5 months ago
- Cataloging released Triton kernels. ☆111 · Updated 3 weeks ago
- A PyTorch-like deep learning framework. Just for fun. ☆128 · Updated 11 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆173 · Updated 3 months ago
- Learning material for CMU 10-714: Deep Learning Systems ☆201 · Updated 4 months ago
- All homework for TinyML and Efficient Deep Learning Computing (6.5940, Fall 2023) • https://efficientml.ai ☆108 · Updated 9 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆70 · Updated last month
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores ☆40 · Updated last week