cli99 / flops-profiler
pytorch-profiler
☆50 · Updated 2 years ago
Alternatives and similar repositories for flops-profiler
Users interested in flops-profiler are comparing it to the libraries listed below.
- ☆160 · Updated 2 years ago
- ☆115 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆135 · Updated 7 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe. ☆123 · Updated last year
- A collection of memory-efficient attention operators implemented in the Triton language. ☆287 · Updated last year
- This repository contains integer operators on GPUs for PyTorch. ☆223 · Updated 2 years ago
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆103 · Updated 7 years ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆123 · Updated last year
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆306 · Updated this week
- ☆115 · Updated 7 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆277 · Updated 5 months ago
- ☆164 · Updated last year
- ☆145 · Updated 11 months ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆137 · Updated 3 years ago
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆133 · Updated 2 years ago
- ☆43 · Updated 3 years ago
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated last year
- ☆254 · Updated last year
- ☆83 · Updated 11 months ago
- ☆168 · Updated 2 years ago
- High Performance Grouped GEMM in PyTorch ☆32 · Updated 3 years ago
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆200 · Updated 3 years ago
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆57 · Updated 2 years ago
- llama INT4 CUDA inference with AWQ ☆55 · Updated 11 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆219 · Updated last year
- Autonomous GPU Kernel Generation via Deep Agents ☆192 · Updated last week
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- Patch convolution to avoid large GPU memory usage of Conv2D ☆93 · Updated 11 months ago
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆157 · Updated last week