huggingface / kernel-builder
Build compute kernels
☆195 · Updated this week
Alternatives and similar repositories for kernel-builder
Users interested in kernel-builder are comparing it to the libraries listed below.
- Load compute kernels from the Hub · ☆352 · Updated last week
- Google TPU optimizations for transformers models · ☆131 · Updated this week
- PTX-Tutorial written purely by AIs (Deep Research by OpenAI and Claude 3.7) · ☆66 · Updated 9 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference · ☆327 · Updated last month
- Write a fast kernel and run it on Discord. See how you compare against the best! · ☆64 · Updated this week
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP · ☆141 · Updated 3 months ago
- PyTorch-native distributed training library for LLMs/VLMs with out-of-the-box Hugging Face support · ☆209 · Updated this week
- ☆219 · Updated 11 months ago
- ☆225 · Updated last month
- Ship correct and fast LLM kernels to PyTorch · ☆126 · Updated this week
- Where GPUs get cooked · ☆339 · Updated 3 months ago
- Simple & Scalable Pretraining for Neural Architecture Research · ☆305 · Updated 2 weeks ago
- Memory-optimized Mixture of Experts · ☆72 · Updated 4 months ago
- MoE training for Me and You and maybe other people · ☆239 · Updated last week
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand · ☆195 · Updated 6 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk · ☆220 · Updated last week
- Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… · ☆275 · Updated last month
- ☆113 · Updated last month
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) · ☆261 · Updated this week
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" · ☆249 · Updated 10 months ago
- PyTorch implementation of models from the Zamba2 series · ☆186 · Updated 11 months ago
- vLLM adapter for a TGIS-compatible gRPC server · ☆45 · Updated this week
- Train, tune, and infer Bamba models · ☆137 · Updated 6 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models · ☆103 · Updated 7 months ago
- Ring-attention experiments · ☆160 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆267 · Updated 2 weeks ago
- An extension of the nanoGPT repository for training small MoE models · ☆219 · Updated 9 months ago
- TPU inference for vLLM, with unified JAX and PyTorch support · ☆199 · Updated this week
- LM engine is a library for pretraining/finetuning LLMs · ☆102 · Updated this week
- Utils for Unsloth: https://github.com/unslothai/unsloth · ☆183 · Updated this week