InfiniTensor / InfiniLM-Rust
☆126 · Updated 2 weeks ago
Alternatives and similar repositories for InfiniLM-Rust
Users interested in InfiniLM-Rust are comparing it to the libraries listed below.
- ☆67 · Updated last year
- Operator library ☆17 · Updated 6 months ago
- ☆285 · Updated last week
- Notes ☆50 · Updated 5 months ago
- A domain-specific language (DSL) based on Triton but providing higher-level abstractions. ☆41 · Updated this week
- ☆42 · Updated last year
- Easy CUDA code ☆95 · Updated last year
- Operator library (Rust) ☆14 · Updated 6 months ago
- ☆69 · Updated last year
- ☆30 · Updated 3 weeks ago
- Experiment: llama2 inference implemented in Rust ☆17 · Updated last year
- A layered, decoupled deep learning inference engine ☆79 · Updated 11 months ago
- Code & examples for "CUDA - From Correctness to Performance" ☆121 · Updated last year
- RustSBI Specialized Domain Knowledge Quiz LLM ☆104 · Updated 3 months ago
- A PyTorch-like deep learning framework. Just for fun. ☆157 · Updated 2 years ago
- Large-scale Auto-Distributed Training/Inference Unified Framework | Memory-Compute-Control Decoupled Architecture | Multi-language SDK & … ☆55 · Updated last week
- ☆13 · Updated last year
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆102 · Updated last month
- "Write Your Own AI Compiler" (《自己动手写AI编译器》) ☆33 · Updated last year
- Open ABI and FFI for Machine Learning Systems ☆313 · Updated last week
- Wiki for HPC ☆130 · Updated 6 months ago
- Flash Attention from Scratch on CUDA Ampere ☆129 · Updated 5 months ago
- Training camp lecture notes ☆21 · Updated last year
- An annotated nano_vllm repository, with MiniCPM4 adaptation and support for registering new models ☆155 · Updated 5 months ago
- ☆69 · Updated 2 years ago
- ☆116 · Updated 3 weeks ago
- From Minimal GEMM to Everything ☆101 · Updated last month
- A llama model inference framework implemented in CUDA C++ ☆64 · Updated last year
- CUDA SGEMM optimization note ☆15 · Updated 2 years ago
- LLM Inference via Triton (Flexible & Modular): Focused on Kernel Optimization using CUBIN binaries, Starting from gpt-oss Model ☆64 · Updated 3 months ago