triton-lang / triton
Development repository for the Triton language and compiler
☆12,698 · Updated this week
Related projects:
- Fast and memory-efficient exact attention ☆13,401 · Updated this week
- Ongoing research training transformer models at scale ☆9,949 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆7,687 · Updated this week
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto… ☆11,519 · Updated this week
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator ☆14,117 · Updated this week
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆29,930 · Updated this week
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆19,845 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆26,822 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆19,545 · Updated 3 weeks ago
- Inference Llama 2 in one file of pure C ☆17,153 · Updated last month
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆8,037 · Updated last week
- Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes. ☆27,963 · Updated this week
- Open standard for machine learning interoperability ☆17,638 · Updated this week
- Transformer related optimization, including BERT, GPT ☆5,773 · Updated 5 months ago
- State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enter… ☆13,233 · Updated last month
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆10,327 · Updated last month
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆15,839 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆8,351 · Updated this week
- Train transformer language models with reinforcement learning. ☆9,288 · Updated this week
- Tensor library for machine learning ☆10,846 · Updated last week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,029 · Updated this week
- Open deep learning compiler stack for cpu, gpu and specialized accelerators ☆11,602 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆34,719 · Updated this week
- Flax is a neural network library for JAX that is designed for flexibility. ☆5,950 · Updated this week
- Repo for external large-scale work ☆6,452 · Updated 4 months ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆10,552 · Updated last week
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain… ☆8,186 · Updated last week
- AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,531 · Updated this week
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆12,397 · Updated 2 weeks ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆30,165 · Updated last week