Aleph-Alpha / Alpha-MoE
☆47 · Updated 2 months ago
Alternatives and similar repositories for Alpha-MoE
Users interested in Alpha-MoE are comparing it to the libraries listed below.
- ☆104 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆324 · Updated this week
- Applied AI experiments and examples for PyTorch. ☆315 · Updated 5 months ago
- Extensible collectives library in Triton. ☆95 · Updated 10 months ago
- FlashInfer Bench @ MLSys 2026: Building AI agents to write high-performance GPU kernels. ☆84 · Updated 2 weeks ago
- ☆159 · Updated last year
- Fast low-bit matmul kernels in Triton. ☆427 · Updated last week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS. ☆251 · Updated 9 months ago
- This repository contains companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts". ☆103 · Updated 4 months ago
- Cataloging released Triton kernels. ☆292 · Updated 5 months ago
- AMD RAD's Triton-based framework for seamless multi-GPU programming. ☆168 · Updated this week
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning. ☆165 · Updated 2 months ago
- Collection of kernels written in the Triton language. ☆178 · Updated 2 weeks ago
- Automatic differentiation for Triton kernels. ☆29 · Updated 5 months ago
- GitHub mirror of the triton-lang/triton repo. ☆128 · Updated this week
- ☆286 · Updated last week
- ☆259 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments. ☆96 · Updated 4 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆276 · Updated 6 months ago
- ☆60 · Updated last week
- DeeperGEMM: crazy optimized version. ☆73 · Updated 9 months ago
- Fastest kernels written from scratch. ☆532 · Updated 4 months ago
- Ship correct and fast LLM kernels to PyTorch. ☆140 · Updated 3 weeks ago
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆228 · Updated this week
- CUTLASS and CuTe Examples. ☆127 · Updated 2 months ago
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆462 · Updated last month
- A bunch of kernels that might make stuff slower 😉 ☆75 · Updated this week
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆739 · Updated this week
- Tile-based language built for AI computation across all scales. ☆120 · Updated this week
- Accelerating MoE with IO and Tile-aware Optimizations. ☆569 · Updated 3 weeks ago