ScalingIntelligence / good-kernels
Samples of good AI-generated CUDA kernels
☆99 · Updated 8 months ago
Alternatives and similar repositories for good-kernels
Users interested in good-kernels are comparing it to the libraries listed below.
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- ☆118 · Updated last month
- Official implementation for Training LLMs with MXFP4 ☆118 · Updated 9 months ago
- ☆71 · Updated 7 months ago
- ☆163 · Updated 7 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆68 · Updated last week
- RWKV-7: Surpassing GPT ☆104 · Updated last year
- Ship correct and fast LLM kernels to PyTorch ☆140 · Updated 3 weeks ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆93 · Updated last year
- Simple high-throughput inference library ☆155 · Updated 8 months ago
- High-Performance FP32 GEMM on CUDA devices ☆117 · Updated last year
- ☆219 · Updated last year
- Block Diffusion for Ultra-Fast Speculative Decoding ☆459 · Updated this week
- CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning ☆417 · Updated last month
- Work in progress. ☆79 · Updated 2 months ago
- Our first fully AI-generated deep learning system ☆481 · Updated this week
- The evaluation framework for training-free sparse attention in LLMs ☆117 · Updated last week
- ring-attention experiments ☆165 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆252 · Updated last year
- QuIP quantization ☆61 · Updated last year
- LLM Inference on consumer devices ☆129 · Updated 10 months ago
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆260 · Updated last year
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆74 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆238 · Updated this week
- 👷 Build compute kernels ☆215 · Updated last week
- PTX-Tutorial Written Purely By AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 10 months ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆141 · Updated last year
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆194 · Updated this week
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆387 · Updated 9 months ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆141 · Updated 4 months ago