DeepLink-org / DeepLinkExt
☆13 · Updated 3 weeks ago
Alternatives and similar repositories for DeepLinkExt
Users interested in DeepLinkExt are comparing it to the libraries listed below.
- ☆67 · Updated 7 months ago
- ☆70 · Updated 7 months ago
- A benchmark suite designed especially for deep learning operators. ☆42 · Updated 2 years ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆791 · Updated 2 weeks ago
- FlagScale is a large model toolkit based on open-source projects. ☆301 · Updated this week
- FlagPerf is an open-source software platform for benchmarking AI chips. ☆338 · Updated this week
- ☆334 · Updated 5 months ago
- ☆139 · Updated last year
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆55 · Updated 10 months ago
- ☆148 · Updated 5 months ago
- ☆21 · Updated 5 months ago
- ☆122 · Updated 6 months ago
- Theoretical performance analysis tools for LLMs, supporting parameter, FLOPs, memory, and latency analysis (a worked estimate sketch follows this list). ☆96 · Updated last week
- FlagGems is an operator library for large language models implemented in the Triton language (a minimal Triton kernel sketch follows this list). ☆573 · Updated this week
- InternEvo is an open-source lightweight training framework that aims to support model pre-training without the need for extensive dependencie… ☆392 · Updated last week
- ☆50 · Updated last week
- Examples of CUDA implementations with Cutlass CuTe. ☆195 · Updated 4 months ago
- Development repository for the Triton-Linalg conversion. ☆188 · Updated 4 months ago
- Learning how CUDA works. ☆269 · Updated 3 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆87 · Updated last month
- ☆146 · Updated 5 months ago
- ☆30 · Updated 2 years ago
- GLake: optimizing GPU memory management and IO transmission. ☆467 · Updated 2 months ago
- High performance Transformer implementation in C++. ☆125 · Updated 5 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆617 · Updated 2 months ago
- ☆97 · Updated 2 months ago
- ☆127 · Updated 5 months ago
- 📚 200+ Tensor/CUDA Cores Kernels, ⚡️flash-attn-mma, ⚡️hgemm with WMMA, MMA and CuTe (98%~100% TFLOPS of cuBLAS/FA2 🎉🎉). ☆25 · Updated last month
- A collection of memory-efficient attention operators implemented in the Triton language. ☆272 · Updated last year
- GEMM by WMMA (tensor core). ☆13 · Updated 2 years ago
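
To give a flavor of the kind of estimate the theoretical-performance analysis tooling above produces, here is a minimal sketch in Python. It uses the common rule of thumb that a decoder-only transformer costs roughly 2 × parameters FLOPs per token for inference (≈6 × for training, counting the backward pass), plus a KV-cache memory term. The model sizes and function names are illustrative assumptions, not code or formulas taken from any repository in this list.

```python
# Minimal sketch of a theoretical LLM performance estimate (illustrative only).
# Rule of thumb: forward pass ~ 2 * params FLOPs per token;
# training (forward + backward) ~ 6 * params FLOPs per token.

def llm_flops_per_token(n_params: float, training: bool = False) -> float:
    """Approximate FLOPs needed to process one token."""
    return (6.0 if training else 2.0) * n_params

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2) -> int:
    """KV-cache size for one sequence: 2 (K and V) * layers * heads * dim * len."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

if __name__ == "__main__":
    # Hypothetical 7B-parameter model with FP16 weights and activations.
    params = 7e9
    print(f"inference FLOPs/token: {llm_flops_per_token(params):.2e}")
    print(f"training  FLOPs/token: {llm_flops_per_token(params, training=True):.2e}")
    # Hypothetical architecture: 32 layers, 8 KV heads, head_dim 128, 4K context.
    print(f"KV cache @ 4K tokens: {kv_cache_bytes(32, 8, 128, 4096) / 2**20:.1f} MiB")
```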
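
Several entries above (FlagGems, the memory-efficient attention collection) implement their operators in the Triton language. For readers unfamiliar with Triton, the following is the canonical vector-add kernel from Triton's own tutorials, included here only as a generic sketch of what such an operator looks like; it is not taken from those repositories, and it assumes `torch` and `triton` are installed with a CUDA-capable GPU available.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the final, possibly partial block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)  # one program per 1024-element block
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

if __name__ == "__main__":
    a = torch.randn(10_000, device="cuda")
    b = torch.randn(10_000, device="cuda")
    assert torch.allclose(add(a, b), a + b)
```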