RussWong / LLM-engineering
☆26 · Updated 6 months ago
Alternatives and similar repositories for LLM-engineering
Users interested in LLM-engineering are comparing it to the libraries listed below.
- ☆145 · Updated last year
- A tutorial for CUDA & PyTorch ☆227 · Updated last week
- Examples of CUDA implementations using CUTLASS CuTe ☆270 · Updated 7 months ago
- A CUDA tutorial for learning CUDA programming from scratch ☆266 · Updated last year
- ☆161 · Updated 2 months ago
- This project is about convolution operator optimization on GPU, including GEMM-based (implicit GEMM) convolution ☆43 · Updated 4 months ago
- A light llama-like LLM inference framework based on Triton kernels ☆171 · Updated last month
- A llama model inference framework implemented in CUDA C++ ☆64 · Updated last year
- ☆152 · Updated last year
- Learning how CUDA works ☆373 · Updated 11 months ago
- A simple high-performance CUDA GEMM implementation ☆426 · Updated 2 years ago
- ☆158 · Updated last year
- ☆285 · Updated last week
- ☆40 · Updated 8 months ago
- An easy-to-understand TensorOp Matmul tutorial ☆404 · Updated this week
- ☆70 · Updated last year
- From Minimal GEMM to Everything ☆101 · Updated last month
- Yinghan's Code Sample ☆365 · Updated 3 years ago
- ☆49 · Updated last year
- Optimizing SGEMM kernel functions on NVIDIA GPUs to close-to-cuBLAS performance ☆407 · Updated last year
- ☆113 · Updated 8 months ago
- ☆118 · Updated 10 months ago
- LLM theoretical performance analysis tools supporting params, FLOPs, memory, and latency analysis ☆115 · Updated 6 months ago
- ☆120 · Updated last year
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance⚡️ ☆148 · Updated 8 months ago
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… ☆520 · Updated last year
- Optimized softmax in Triton for many cases ☆22 · Updated last year
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆484 · Updated 2 weeks ago
- ☆60 · Updated last year
- A simplified flash-attention implementation using CUTLASS, with educational value ☆54 · Updated last year