cli99 / flops-profiler
pytorch-profiler
☆51 · Updated 2 years ago
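For context on what a FLOPs profiler reports, here is a minimal sketch of per-operator FLOP accounting using PyTorch's built-in `torch.profiler`. This is the stock PyTorch API, not flops-profiler's own interface, and the model and input are placeholder assumptions; it only illustrates the kind of measurement such tools provide.

```python
# Minimal sketch of FLOP accounting with PyTorch's built-in profiler.
# Stock torch.profiler API, not flops-profiler itself; it illustrates
# the per-operator FLOP estimates that FLOPs-profiling tools report.
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(1024, 1024)  # placeholder model
x = torch.randn(8, 1024)             # placeholder input batch

with profile(activities=[ProfilerActivity.CPU], with_flops=True) as prof:
    model(x)

# with_flops=True adds a FLOPs column for matmul/conv-style operators.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```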
Alternatives and similar repositories for flops-profiler
Users interested in flops-profiler are comparing it to the libraries listed below.
- ☆157 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆93 · Updated last week
- ☆73 · Updated 4 months ago
- ☆105 · Updated 9 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper (a reference sketch of the algorithm follows this list). ☆94 · Updated 6 years ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆109 · Updated 10 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch providing a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- ☆22 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆69 · Updated 11 months ago
- ☆42 · Updated 2 years ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆127 · Updated this week
- ☆49 · Updated 10 months ago
- This repository contains integer operators on GPUs for PyTorch. ☆205 · Updated last year
- ☆86 · Updated 5 months ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆64 · Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer. ☆92 · Updated last week
- ☆146 · Updated 10 months ago
- ☆208 · Updated 10 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆210 · Updated 9 months ago
- ☆74 · Updated 4 years ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆109 · Updated 8 months ago
- ☆96 · Updated 8 months ago
- This repository contains the experimental PyTorch native float8 training UX. ☆223 · Updated 10 months ago
- DeeperGEMM: crazy optimized version ☆69 · Updated last month
- High Performance Grouped GEMM in PyTorch ☆30 · Updated 3 years ago
- Code for the paper "A Statistical Framework for Low-bitwidth Training of Deep Neural Networks" ☆28 · Updated 4 years ago
- Code for ICML 2021 submission ☆34 · Updated 4 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- 16-fold memory access reduction with nearly no loss ☆94 · Updated 2 months ago
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆195 · Updated last year
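The "Online normalizer calculation for softmax" entry above refers to the one-pass normalizer of Milakov and Gimelshein (2018). That repository benchmarks fused CUDA kernels; the sketch below is only the scalar reference algorithm in NumPy, written out as an assumption of how the single pass works: whenever a new running maximum is found, the accumulated sum is rescaled by exp(m - m_new), so the maximum and the normalizer are computed together in one sweep.

```python
# Reference sketch of the online softmax normalizer
# (Milakov & Gimelshein, "Online normalizer calculation for softmax", 2018).
# Plain NumPy for clarity; the benchmarked kernels fuse this on the GPU.
import numpy as np

def online_softmax(x: np.ndarray) -> np.ndarray:
    m = -np.inf   # running maximum
    d = 0.0       # running normalizer sum
    for xi in x:  # single pass: update the max and rescale the sum
        m_new = max(m, xi)
        d = d * np.exp(m - m_new) + np.exp(xi - m_new)
        m = m_new
    return np.exp(x - m) / d

x = np.random.randn(16)
ref = np.exp(x - x.max()) / np.exp(x - x.max()).sum()  # two-pass baseline
assert np.allclose(online_softmax(x), ref)
```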