mag- / gpu_benchmark
GPU benchmark
☆74 · Updated last year
Alternatives and similar repositories for gpu_benchmark
Users interested in gpu_benchmark are comparing it to the libraries listed below.
- High-Performance FP32 GEMM on CUDA devices ☆117 · Updated last year
- ☆92 · Updated last year
- ☆71 · Updated 7 months ago
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 10 months ago
- Samples of good AI-generated CUDA kernels ☆99 · Updated 8 months ago
- Tree Attention: topology-aware decoding for long-context attention on GPU clusters ☆131 · Updated last year
- A collection of tricks and tools to speed up transformer models ☆194 · Updated last month
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆74 · Updated last year
- RWKV-7: Surpassing GPT ☆104 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆238 · Updated this week
- ring-attention experiments ☆165 · Updated last year
- ☆66 · Updated 10 months ago
- Experimental GPU language with meta-programming ☆24 · Updated last year
- train with kittens! ☆63 · Updated last year
- ☆18 · Updated last year
- Inference of Mamba and Mamba2 models in pure C ☆196 · Updated 2 weeks ago
- We aim to redefine data-parallel library portability, performance, programmability, and maintainability by using C++ standard features, i… ☆47 · Updated this week
- Make Triton easier ☆50 · Updated last year
- ☆79 · Updated last year
- Experiment of using Tangent to autodiff Triton ☆82 · Updated 2 years ago
- QuIP quantization ☆61 · Updated last year
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 11 months ago
- PyTorch half-precision GEMM library with fused optional bias + optional ReLU/GELU ☆78 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 ☆46 · Updated last year
- ☆92 · Updated last year
- Simple high-throughput inference library ☆155 · Updated 8 months ago
- Learning about CUDA by writing PTX code ☆152 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆75 · Updated last week
- RWKV in nanoGPT style ☆197 · Updated last year
- ☆178 · Updated 2 years ago