ELS-RD / kernl
Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
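To make the "single line of code" claim concrete, here is a minimal usage sketch assuming the `optimize_model` entry point and autocast pattern shown in the kernl README; exact import paths may differ between versions.

```python
# Hypothetical minimal kernl usage, assuming the optimize_model entry point
# documented in the project's README (import paths may vary across versions).
import torch
from transformers import AutoModel
from kernl.model_optimization import optimize_model

model = AutoModel.from_pretrained("bert-base-uncased").eval().cuda()
optimize_model(model)  # the advertised one-liner: swaps supported submodules for fused Triton kernels

inputs = {
    "input_ids": torch.ones((1, 128), dtype=torch.long, device="cuda"),
    "attention_mask": torch.ones((1, 128), dtype=torch.long, device="cuda"),
}
with torch.inference_mode(), torch.cuda.amp.autocast():
    outputs = model(**inputs)  # first call triggers warmup compilation, later calls are fast
```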
☆1,584 · Updated last year
Alternatives and similar repositories for kernl
Users interested in kernl often compare it to the libraries listed below.
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,063 · Updated last year
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,689 · Updated 11 months ago
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,349 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,063 · Updated 3 months ago
- Library for 8-bit optimizers and quantization routines (a usage sketch follows this list). ☆780 · Updated 3 years ago
- Pipeline Parallelism for PyTorch ☆781 · Updated last year
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,006 · Updated last year
- PyTorch extensions for high performance and large scale training. ☆3,376 · Updated 5 months ago
- An open-source efficient deep learning framework/compiler, written in Python. ☆731 · Updated last month
- maximal update parametrization (µP) ☆1,605 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Blackwell GPUs. ☆2,755 · Updated this week
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference. ☆4,679 · Updated 2 weeks ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,192 · Updated last year
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆873 · Updated last year
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,835 · Updated last month
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,512 · Updated last year
- A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries. ☆1,227 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,422 · Updated last year
- Tutel MoE: an optimized Mixture-of-Experts library supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 ☆926 · Updated 3 weeks ago
- Training and serving large-scale neural networks with auto parallelization. ☆3,157 · Updated last year
- Fast Inference Solutions for BLOOM ☆565 · Updated 11 months ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆790 · Updated 2 years ago
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training” ☆974 · Updated last year
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization tools ☆3,104 · Updated last week
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆1,010 · Updated last year
- Foundation Architecture for (M)LLMs ☆3,117 · Updated last year
- Automatically split your PyTorch models on multiple GPUs for training & inference ☆655 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,282 · Updated 7 months ago
- A CPU+GPU profiling library that provides access to timeline traces and hardware performance counters. ☆874 · Updated this week
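As referenced in the bitsandbytes entry above, here is a minimal sketch of the 8-bit optimizer swap, assuming `bnb.optim.Adam8bit` as the documented drop-in replacement for `torch.optim.Adam`; the model and hyperparameters are illustrative.

```python
# Minimal sketch: swapping a standard optimizer for bitsandbytes' 8-bit Adam.
# Assumes bnb.optim.Adam8bit, a drop-in replacement for torch.optim.Adam
# that stores optimizer state in 8 bits to cut training memory.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)  # 8-bit optimizer state

x = torch.randn(16, 1024, device="cuda")
loss = model(x).pow(2).mean()  # dummy objective for the sketch
loss.backward()
optimizer.step()
optimizer.zero_grad()
```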