huggingface / kernel-builder
👷 Build compute kernels
⭐ 136 · Updated this week
Alternatives and similar repositories for kernel-builder
Users interested in kernel-builder are comparing it to the repositories listed below.
- Load compute kernels from the Hub (⭐ 271, updated this week)
- PTX-Tutorial written purely by AIs (Deep Research from OpenAI and Claude 3.7) (⭐ 66, updated 5 months ago)
- A safetensors extension to efficiently store sparse quantized tensors on disk (⭐ 157, updated this week)
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand (⭐ 193, updated 3 months ago)
- Google TPU optimizations for transformers models (⭐ 120, updated 7 months ago)
- (no description) (⭐ 217, updated 7 months ago)
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP (⭐ 120, updated this week)
- Train, tune, and infer Bamba model (⭐ 131, updated 3 months ago)
- (no description) (⭐ 216, updated 7 months ago)
- The evaluation framework for training-free sparse attention in LLMs (⭐ 91, updated 2 months ago)
- Simple high-throughput inference library (⭐ 127, updated 4 months ago)
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters (⭐ 129, updated 9 months ago)
- PyTorch implementation of models from the Zamba2 series (⭐ 184, updated 7 months ago)
- PyTorch Single Controller (⭐ 414, updated this week)
- Inference server benchmarking tool (⭐ 98, updated 4 months ago)
- Write a fast kernel and run it on Discord. See how you compare against the best! (⭐ 55, updated this week)
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" (⭐ 155, updated 10 months ago)
- Simple & Scalable Pretraining for Neural Architecture Research (⭐ 291, updated 3 weeks ago)
- Collection of autoregressive model implementations (⭐ 86, updated 4 months ago)
- Official implementation for Training LLMs with MXFP4 (⭐ 87, updated 4 months ago)
- (no description) (⭐ 92, updated 3 weeks ago)
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" (⭐ 245, updated 7 months ago)
- vLLM adapter for a TGIS-compatible gRPC server (⭐ 39, updated this week)
- ring-attention experiments (⭐ 150, updated 10 months ago)
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… (⭐ 265, updated last month)
- Lightweight toolkit package to train and fine-tune 1.58-bit language models (⭐ 88, updated 3 months ago)
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference (⭐ 269, updated last month)
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … (⭐ 60, updated 11 months ago)
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) (⭐ 208, updated last week)
- train with kittens! (⭐ 62, updated 10 months ago)