huggingface / kernel-builder
👷 Build compute kernels
☆149 · Updated this week
Alternatives and similar repositories for kernel-builder
Users interested in kernel-builder are comparing it to the libraries listed below.
- Load compute kernels from the Hub ☆290 · Updated last week (see the usage sketch after this list)
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 4 months ago
- PTX-Tutorial written purely by AIs (Deep Research by OpenAI and Claude 3.7) ☆66 · Updated 6 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆280 · Updated last month
- ☆221 · Updated 7 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆57 · Updated last week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆268 · Updated 2 months ago
- PyTorch implementation of models from the Zamba2 series ☆185 · Updated 8 months ago
- Collection of autoregressive model implementations ☆86 · Updated 5 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆296 · Updated last month
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆246 · Updated 8 months ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆146 · Updated last year
- PyTorch Single Controller ☆425 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆164 · Updated this week
- Train, tune, and run inference with Bamba models ☆132 · Updated 4 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 10 months ago
- ☆217 · Updated 8 months ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆123 · Updated 3 weeks ago
- Google TPU optimizations for transformers models ☆120 · Updated 8 months ago
- Where GPUs get cooked 👩‍🍳🔥 ☆282 · Updated 2 weeks ago
- An extension of the nanoGPT repository for training small MoE models ☆195 · Updated 6 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆98 · Updated 3 months ago
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆189 · Updated 3 months ago
- ring-attention experiments ☆152 · Updated 11 months ago
- Official implementation for Training LLMs with MXFP4 ☆91 · Updated 5 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆219 · Updated this week
- Memory-optimized Mixture of Experts ☆67 · Updated 2 months ago
- Normalized Transformer (nGPT) ☆191 · Updated 10 months ago
- Work in progress ☆74 · Updated 3 months ago
- Learn CUDA with PyTorch ☆84 · Updated last week
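
The first entry in this list, the kernels library for loading compute kernels from the Hub, is the consumer side of kernel-builder: kernels compiled with kernel-builder and pushed to the Hub are fetched and called at runtime. A minimal sketch of that flow, assuming the `kernels` package is installed, a CUDA GPU is available, and that the `kernels-community/activation` kernel repository exists on the Hub:

```python
import torch
from kernels import get_kernel

# Fetch a pre-built compute kernel from the Hugging Face Hub.
# "kernels-community/activation" is an assumed example repo; any kernel
# built with kernel-builder and pushed to the Hub loads the same way.
activation = get_kernel("kernels-community/activation")

# Kernels execute on-device; this sketch assumes a CUDA GPU.
x = torch.randn(10, 10, dtype=torch.float16, device="cuda")
y = torch.empty_like(x)

# The loaded module exposes whatever functions the kernel was built with;
# here gelu_fast(out, in) writes the activation into a preallocated buffer.
activation.gelu_fast(y, x)
print(y)
```

The out-parameter style (preallocating `y` and passing it in) avoids an extra allocation per call, which is typical of hand-written CUDA kernels of this kind.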