Aleph-Alpha / Alpha-MoE
☆44 · Updated last month
Alternatives and similar repositories for Alpha-MoE
Users interested in Alpha-MoE are comparing it to the libraries listed below.
- AMD RAD's Triton-based framework for seamless multi-GPU programming ☆148 · Updated this week
- Extensible collectives library in Triton ☆91 · Updated 9 months ago
- Applied AI experiments and examples for PyTorch ☆312 · Updated 4 months ago
- GitHub mirror of the triton-lang/triton repo. ☆119 · Updated this week
- Collection of kernels written in the Triton language (for a minimal example of the style, see the sketch after this list) ☆174 · Updated 9 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆308 · Updated this week
- Helpful kernel tutorials and examples for tile-based GPU programming ☆554 · Updated this week
- Cataloging released Triton kernels. ☆282 · Updated 4 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆246 · Updated 8 months ago
- Fast low-bit matmul kernels in Triton ☆418 · Updated 3 weeks ago
- Fastest kernels written from scratch ☆517 · Updated 3 months ago
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆433 · Updated last week
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆706 · Updated this week
- This repository contains companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts". ☆83 · Updated 3 months ago
- An experimental CPU backend for Triton ☆168 · Updated 2 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆276 · Updated 5 months ago
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆36 · Updated 4 months ago
- torchcomms: a modern PyTorch communications API
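Several of the entries above are collections of Triton kernels. For orientation, here is a minimal sketch of what such a kernel looks like (a vector add, assuming the standard `triton` / `triton.language` Python API); it is illustrative only and is not taken from any of the listed repositories.

```python
# Minimal Triton kernel sketch (vector add) -- illustrative, not from any listed repo.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide tile of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements          # guard the ragged final tile
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)       # one program per 1024-element tile
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```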