thevasudevgupta / gpt-triton
Triton implementation of GPT/LLAMA
☆16 · Updated 5 months ago
Alternatives and similar repositories for gpt-triton:
Users interested in gpt-triton are comparing it to the repositories listed below.
- ☆125 · Updated last month
- ☆132 · Updated this week
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆117 · Updated last year
- Cataloging released Triton kernels. ☆164 · Updated last month
- Deep learning library implemented from scratch in NumPy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments. ☆51 · Updated 10 months ago
- The simplest implementation of recent sparse-attention patterns for efficient LLM inference. ☆56 · Updated 3 weeks ago
- ☆75 · Updated 7 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆17 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and an SDPA implementation of Flash… ☆221 · Updated this week
- ☆175 · Updated this week
- Ring-attention experiments ☆123 · Updated 3 months ago
- ☆141 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆181 · Updated last month
- Code for studying the super weight in LLMs ☆79 · Updated 2 months ago
- Prune transformer layers ☆67 · Updated 8 months ago
- Muon optimizer: ~+30% sample efficiency with <3% wall-clock overhead ☆251 · Updated last week
- Collection of autoregressive model implementations ☆81 · Updated this week
- Fast matrix multiplications for lookup-table-quantized LLMs ☆228 · Updated this week
- Mixed-precision training from scratch with tensors and CUDA ☆21 · Updated 9 months ago
- Applied AI experiments and examples for PyTorch ☆223 · Updated this week
- Learn CUDA with PyTorch ☆16 · Updated 2 weeks ago
- Fast low-bit matmul kernels in Triton ☆231 · Updated this week
- Collection of kernels written in the Triton language ☆97 · Updated this week
- Large-scale 4D-parallelism pre-training for 🤗 transformers with Mixture of Experts *(still a work in progress)* ☆81 · Updated last year
- Experimental PyTorch-native float8 training UX ☆221 · Updated 6 months ago
- Supporting PyTorch FSDP for optimizers ☆76 · Updated 2 months ago
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆121 · Updated 9 months ago
- Hydragen: High-Throughput LLM Inference with Shared Prefixes ☆33 · Updated 9 months ago
- ☆88 · Updated 8 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆152 · Updated last month