mag- / gpu_benchmark
GPU benchmark
☆52 · Updated 3 weeks ago
Alternatives and similar repositories for gpu_benchmark:
Users interested in gpu_benchmark are comparing it to the libraries listed below.
- Make Triton easier · ☆44 · Updated 8 months ago
- ☆59 · Updated last month
- High-Performance SGEMM on CUDA devices · ☆76 · Updated last month
- RWKV-7: Surpassing GPT · ☆79 · Updated 3 months ago
- ☆27 · Updated 7 months ago
- ☆75 · Updated 7 months ago
- ☆49 · Updated 11 months ago
- FlexAttention w/ FlashAttention3 Support · ☆26 · Updated 4 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. · ☆43 · Updated 7 months ago
- ☆86 · Updated 11 months ago
- Normalized Transformer (nGPT) · ☆152 · Updated 3 months ago
- Learning about CUDA by writing PTX code. · ☆35 · Updated 11 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters · ☆116 · Updated 2 months ago
- Train with kittens! · ☆53 · Updated 3 months ago
- Focused on fast experimentation and simplicity · ☆66 · Updated last month
- Experiment of using Tangent to autodiff Triton · ☆75 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. · ☆17 · Updated 2 weeks ago
- Ring-attention experiments · ☆123 · Updated 4 months ago
- Supporting PyTorch FSDP for optimizers · ☆76 · Updated 2 months ago
- JAX-like function transformation engine, but micro: microjax · ☆30 · Updated 3 months ago
- ☆53 · Updated last year
- PyTorch half-precision GEMM lib w/ fused optional bias + optional ReLU/GELU · ☆53 · Updated 2 months ago
- Train, tune, and infer the Bamba model · ☆84 · Updated last month
- Working implementation of DeepSeek MLA · ☆30 · Updated last month
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. · ☆57 · Updated 3 weeks ago
- A place to store reusable transformer components of my own creation or found on the interwebs · ☆46 · Updated last week