facebookresearch / RLCompOpt
Learning Compiler Pass Orders using Coreset and Normalized Value Prediction. (ICML 2023)
☆20 · Updated 2 years ago
Alternatives and similar repositories for RLCompOpt
Users interested in RLCompOpt are comparing it to the libraries listed below.
- ☆160 · Updated 2 years ago
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- Implementation of a Transformer, but completely in Triton ☆279 · Updated 3 years ago
- ☆115 · Updated last year
- Memory Optimizations for Deep Learning (ICML 2023) ☆115 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆324 · Updated this week
- Collection of kernels written in the Triton language ☆178 · Updated last week
- ☆61 · Updated 2 years ago
- Ring-attention experiments ☆165 · Updated last year
- ☆178 · Updated 2 years ago
- Patch convolution to avoid large GPU memory usage of Conv2D ☆95 · Updated last year
- A block-oriented training approach for inference-time optimization. ☆34 · Updated last year
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆30 · Updated last week
- ☆157 · Updated 2 years ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆287 · Updated last year
- TritonParse: a compiler tracer, visualizer, and reproducer for Triton kernels ☆194 · Updated this week
- ☆77 · Updated last year
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆123 · Updated last year
- ☆222 · Updated last year
- Extensible collectives library in Triton ☆95 · Updated 10 months ago
- Get down and dirty with FlashAttention 2.0 in PyTorch; plug and play, no complex CUDA kernels ☆113 · Updated 2 years ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆125 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN ☆73 · Updated last year
- An open-source efficient deep-learning framework/compiler, written in Python ☆740 · Updated 5 months ago
- GPTQ inference Triton kernel ☆321 · Updated 2 years ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆276 · Updated 6 months ago
- ☆118 · Updated last month
- PyTorch bindings for CUTLASS grouped GEMM ☆142 · Updated 8 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts ☆263 · Updated 4 months ago