huggingface / kernel-builder
Build compute kernels
★ 190 · Updated this week
Alternatives and similar repositories for kernel-builder
Users interested in kernel-builder are comparing it to the libraries listed below.
- Load compute kernels from the Hub · ★ 337 · Updated last week
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference · ★ 313 · Updated last month
- ★ 224 · Updated last week
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand · ★ 196 · Updated 6 months ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP · ★ 138 · Updated 2 months ago
- Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… · ★ 271 · Updated last week
- PTX tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) · ★ 66 · Updated 8 months ago
- Simple & Scalable Pretraining for Neural Architecture Research · ★ 302 · Updated last month
- A safetensors extension to efficiently store sparse quantized tensors on disk · ★ 210 · Updated 2 weeks ago
- PyTorch Distributed-native training library for LLMs/VLMs with out-of-the-box Hugging Face support · ★ 187 · Updated last week
- Google TPU optimizations for transformers models · ★ 123 · Updated 10 months ago
- ★ 219 · Updated 10 months ago
- ring-attention experiments · ★ 160 · Updated last year
- Ship correct and fast LLM kernels to PyTorch · ★ 124 · Updated 2 weeks ago
- Write a fast kernel and run it on Discord. See how you compare against the best! · ★ 61 · Updated this week
- ★ 110 · Updated 2 weeks ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) · ★ 254 · Updated last week
- vLLM adapter for a TGIS-compatible gRPC server · ★ 45 · Updated this week
- Quantized LLM training in pure CUDA/C++ · ★ 220 · Updated this week
- ★ 90 · Updated last year
- Memory-optimized Mixture of Experts · ★ 69 · Updated 4 months ago
- An extension of the nanoGPT repository for training small MoE models · ★ 215 · Updated 8 months ago
- Where GPUs get cooked · ★ 319 · Updated 2 months ago
- Official implementation for Training LLMs with MXFP4 · ★ 110 · Updated 7 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters · ★ 130 · Updated last year
- Utils for Unsloth https://github.com/unslothai/unsloth · ★ 177 · Updated this week
- Fast low-bit matmul kernels in Triton · ★ 401 · Updated last week
- Learn CUDA with PyTorch · ★ 117 · Updated last week
- Train, tune, and infer Bamba model · ★ 136 · Updated 6 months ago
- ★ 63 · Updated 5 months ago