ScalingIntelligence / good-kernels
Samples of good AI-generated CUDA kernels
☆96 · Updated 7 months ago
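good-kernels is a collection of sample kernels rather than a tutorial. For orientation only, here is a minimal hand-written SAXPY kernel: a sketch of the basic structure (grid/block indexing, bounds check, host-side launch) that the collected kernels build on. It is not taken from the repository.

```cuda
// Illustrative sketch only: not from the good-kernels repository.
// A minimal SAXPY (y = a*x + y) kernel showing the usual structure:
// one thread per element, a bounds check, and a host-side launch + sync.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) y[i] = a * x[i] + y[i];              // guard against overrun
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // 256 threads per block
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expected: 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```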
Alternatives and similar repositories for good-kernels
Users interested in good-kernels are comparing it to the repositories listed below.
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU Clusters ☆131 · Updated last year
- ☆114 · Updated last month
- RWKV-7: Surpassing GPT ☆102 · Updated last year
- ☆68 · Updated 6 months ago
- Official implementation for Training LLMs with MXFP4 ☆116 · Updated 8 months ago
- ☆161 · Updated 6 months ago
- Simple high-throughput inference library ☆154 · Updated 7 months ago
- High-Performance SGEMM on CUDA devices ☆114 · Updated 11 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 11 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated last year
- QuIP quantization ☆61 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆104 · Updated 7 months ago
- LLM inference on consumer devices ☆128 · Updated 9 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆108 · Updated 2 months ago
- CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning ☆252 · Updated 2 weeks ago
- 👷 Build compute kernels ☆196 · Updated last week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆65 · Updated last week
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆90 · Updated last year
- Ship correct and fast LLM kernels to PyTorch ☆127 · Updated 2 weeks ago
- Work in progress. ☆76 · Updated last month
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated 11 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆122 · Updated 2 months ago
- Fast and memory-efficient exact attention ☆75 · Updated 9 months ago
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆174 · Updated 2 months ago
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- ☆219 · Updated 11 months ago
- Token Omission Via Attention ☆128 · Updated last year
- ring-attention experiments ☆160 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- Experimental GPU language with meta-programming ☆24 · Updated last year