array2d / deepx
Large-scale Auto-Distributed Training/Inference Unified Framework | Memory-Compute-Control Decoupled Architecture | Multi-language SDK & Heterogeneous Hardware Support
☆52Updated this week
Alternatives and similar repositories for deepx
Users that are interested in deepx are comparing it to the libraries listed below
- easy cuda code☆78Updated 6 months ago
- A layered, decoupled deep learning inference engine☆73Updated 5 months ago
- Triton Documentation in Simplified Chinese / Triton 中文文档☆75Updated 3 months ago
- A lightweight llama-like LLM inference framework built on Triton kernels.☆135Updated this week
- ☆80Updated this week
- ☆240Updated last month
- ☆118Updated this week
- Homepage of the Advanced Compilation Lab☆113Updated 3 months ago
- Implement custom operators in PyTorch with CUDA/C++☆65Updated 2 years ago
- A tutorial for CUDA & PyTorch☆149Updated 6 months ago
- ⚡️FFPA: Extend FlashAttention-2 with Split-D, achieve ~O(1) SRAM complexity for large headdim, 1.8x~3x↑ vs SDPA.🎉☆192Updated 2 months ago
- ☆25Updated last month
- Codes & examples for "CUDA - From Correctness to Performance"☆102Updated 8 months ago
- A llama model inference framework implemented in CUDA C++☆58Updated 8 months ago
- Operator library☆16Updated last week
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation.☆102Updated 2 months ago
- How to learn PyTorch and OneFlow☆441Updated last year
- FlagTree is a unified compiler for multiple AI chips, which is forked from triton-lang/triton.☆64Updated this week
- ☆70Updated 2 years ago
- ☆31Updated 2 months ago
- A great project for campus recruiting (fall/spring hiring) and internships: build, from scratch, an LLM inference framework supporting LLama2/3 and Qwen2.5.☆383Updated 2 weeks ago
- A PyTorch-like deep learning framework. Just for fun.☆155Updated last year
- Courses on Bilibili (b站)☆75Updated last year
- LLM theoretical performance analysis tools, supporting params, FLOPs, memory, and latency analysis.☆98Updated last week
- DGEMM on KNL, achieving 75% of MKL performance☆18Updated 3 years ago
- ☆137Updated last year
- Tutorials for writing high-performance GPU operators in AI frameworks.☆129Updated last year
- ☆27Updated last year
- ☆149Updated 6 months ago
- A cross-chip collection of operators and a unified neural network library.☆17Updated last year
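Several entries above concern theoretical LLM performance analysis (params, FLOPs, memory, latency). As a rough illustration of what such a tool computes, here is a minimal sketch for a standard decoder-only transformer, using the common approximations of ~12·d² parameters per block and ~2 FLOPs per parameter per generated token; the function name and exact constants are this sketch's own assumptions, not taken from any repository listed here.

```python
def estimate_llm(n_layers, d_model, vocab_size, bytes_per_param=2):
    """Back-of-envelope size/compute estimates for a decoder-only transformer."""
    # Per layer: attention projections (~4*d^2) + MLP with 4x hidden (~8*d^2).
    block_params = n_layers * 12 * d_model ** 2
    # Token embedding table (output head often tied to it).
    embed_params = vocab_size * d_model
    params = block_params + embed_params
    # Forward pass costs roughly 2 FLOPs per parameter per token.
    flops_per_token = 2 * params
    # Weight memory in GB (fp16/bf16 -> 2 bytes per parameter by default).
    weight_mem_gb = params * bytes_per_param / 1e9
    return params, flops_per_token, weight_mem_gb

# Example: a LLaMA-7B-like configuration.
params, flops, mem_gb = estimate_llm(n_layers=32, d_model=4096, vocab_size=32000)
print(f"{params/1e9:.2f}B params, {flops/1e9:.1f} GFLOPs/token, {mem_gb:.1f} GB weights")
```

Real analyzers refine this with KV-cache size, attention FLOPs that grow with sequence length, and hardware roofline numbers, but the scaling intuition is the same.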