☆23 · Aug 14, 2024 · Updated last year
Alternatives and similar repositories for cute_gemm
Users interested in cute_gemm are comparing it to the libraries listed below.
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Mar 18, 2026 · Updated last week
- FP8 flash attention implemented with the cutlass library on the Ada architecture ☆82 · Aug 12, 2024 · Updated last year
- A simplified flash-attention implementation using cutlass, intended for teaching ☆59 · Aug 12, 2024 · Updated last year
- Multiple GEMM operators built with cutlass to support LLM inference ☆20 · Aug 3, 2025 · Updated 7 months ago
- Benchmark tests supporting the TiledCUDA library ☆18 · Nov 19, 2024 · Updated last year
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API ☆35 · Sep 15, 2023 · Updated 2 years ago
- Flash attention tutorial written in Python, Triton, CUDA, and cutlass ☆494 · Jan 20, 2026 · Updated 2 months ago
- ☆52 · May 19, 2025 · Updated 10 months ago
- ☆14 · Nov 3, 2025 · Updated 4 months ago
- Whisper in TensorRT-LLM ☆17 · Sep 21, 2023 · Updated 2 years ago
- ☆10 · Apr 24, 2023 · Updated 2 years ago
- ☆169 · Feb 5, 2026 · Updated last month
- Lightweight Python wrapper for OpenVINO, enabling LLM inference on NPUs ☆27 · Dec 17, 2024 · Updated last year
- ☆261 · Jul 11, 2024 · Updated last year
- ☆65 · Apr 26, 2025 · Updated 11 months ago
- Improved Secure 3-Party Neural Network Inference with Reducing Online Communication Costs ☆11 · Jan 27, 2023 · Updated 3 years ago
- A practical way of learning Swizzle ☆37 · Feb 3, 2025 · Updated last year
- Optimized softmax in Triton for many cases ☆23 · Sep 6, 2024 · Updated last year
- ☆16 · Jul 20, 2023 · Updated 2 years ago
- TensorRT encapsulation: learn, rewrite, practice ☆29 · Oct 19, 2022 · Updated 3 years ago
- Performance of the C++ interfaces of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆44 · Feb 27, 2025 · Updated last year
- Standalone Flash Attention v2 kernel without the libtorch dependency ☆113 · Sep 10, 2024 · Updated last year
- ☆119 · May 16, 2025 · Updated 10 months ago
- ☆26 · Feb 17, 2025 · Updated last year
- A direct convolution library targeting ARM multi-core CPUs ☆12 · Nov 27, 2024 · Updated last year
- CPU memory compiler and parallel programming ☆26 · Nov 18, 2024 · Updated last year
- Fast low-bit matmul kernels in Triton ☆438 · Feb 1, 2026 · Updated last month
- CUTLASS and CuTe examples ☆134 · Nov 30, 2025 · Updated 3 months ago
- ☆11 · Jul 20, 2023 · Updated 2 years ago
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Aug 9, 2025 · Updated 7 months ago
- MicroMix: Efficient Mixed-Precision Quantization with Microscaling Formats for Large Language Models ☆29 · Feb 12, 2026 · Updated last month
- ☆14 · Jul 24, 2022 · Updated 3 years ago
- A simple high-performance CUDA GEMM implementation ☆428 · Jan 4, 2024 · Updated 2 years ago
- A top-down profiler for GPU applications ☆22 · Feb 29, 2024 · Updated 2 years ago
- ☆33 · Mar 31, 2025 · Updated 11 months ago
- A simple neural network in C++17 using the Eigen library, supporting both forward and backward propagation ☆11 · Jul 27, 2024 · Updated last year
- A simple and efficient memory pool implemented in C++11 ☆10 · Jun 2, 2022 · Updated 3 years ago
- LLM-powered Python ☆15 · Mar 19, 2026 · Updated last week
- PyTorch implementation of Chinese multi-turn dialogue (Cdial) based on GPT + NEZHA ☆11 · Oct 22, 2022 · Updated 3 years ago