huggingface / kernel-builder
👷 Build compute kernels
⭐ 87 · Updated this week
Alternatives and similar repositories for kernel-builder
Users interested in kernel-builder are comparing it to the libraries listed below.
- Load compute kernels from the Hub · ⭐ 220 · Updated this week
- Collection of autoregressive model implementations · ⭐ 86 · Updated 3 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters · ⭐ 127 · Updated 8 months ago
- train with kittens! · ⭐ 61 · Updated 9 months ago
- Train, tune, and infer Bamba model · ⭐ 130 · Updated 2 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … · ⭐ 61 · Updated 9 months ago
- Google TPU optimizations for transformers models · ⭐ 117 · Updated 6 months ago
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) · ⭐ 66 · Updated 4 months ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP · ⭐ 98 · Updated 2 weeks ago
- NanoGPT-speedrunning for the poor T4 enjoyers · ⭐ 68 · Updated 3 months ago
- ⭐ 60 · Updated 4 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models · ⭐ 81 · Updated 2 months ago
- DPO, but faster · ⭐ 43 · Updated 7 months ago
- PyTorch implementation of models from the Zamba2 series. · ⭐ 184 · Updated 6 months ago
- ⭐ 76 · Updated last month
- vLLM adapter for a TGIS-compatible gRPC server. · ⭐ 33 · Updated this week
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… · ⭐ 138 · Updated 11 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk · ⭐ 141 · Updated last week
- Storing long contexts in tiny caches with self-study · ⭐ 121 · Updated this week
- Memory-optimized Mixture of Experts · ⭐ 51 · Updated last week
- Make Triton easier · ⭐ 47 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs · ⭐ 86 · Updated last month
- ⭐ 12 · Updated 6 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! · ⭐ 48 · Updated this week
- Simple & Scalable Pretraining for Neural Architecture Research · ⭐ 277 · Updated last week
- RWKV-7: Surpassing GPT · ⭐ 94 · Updated 8 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" · ⭐ 244 · Updated 6 months ago
- ⭐ 114 · Updated last year
- ⭐ 83 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) · ⭐ 103 · Updated 4 months ago