PFCCLab / Starter
[HACKATHON Prep Camp] PaddlePaddle Starter Program (启航计划) training camp
☆16 · Updated last week
Alternatives and similar repositories for Starter:
Users interested in Starter are comparing it to the repositories listed below.
- PaddlePaddle Escort Program (护航计划) training camp ☆18 · Updated this week
- Activities for the PaddlePaddle framework study group ☆25 · Updated last year
- My CS notes ☆44 · Updated 6 months ago
- A layered, decoupled deep learning inference engine ☆72 · Updated 2 months ago
- Triton Documentation in Simplified Chinese / Triton 中文文档 ☆66 · Updated this week
- A lightweight llama-like LLM inference framework built on Triton kernels ☆108 · Updated last week
- A llama model inference framework implemented in CUDA C++ ☆49 · Updated 5 months ago
- PaddlePaddle custom device implementation ☆82 · Updated this week
- PaddlePaddle Developer Community ☆102 · Updated last week
- PFCC community blog ☆11 · Updated this week
- Courses on Bilibili ☆74 · Updated last year
- ☆7 · Updated 6 months ago
- ☆235 · Updated 2 months ago
- ☆26 · Updated 3 months ago
- Easy CUDA code ☆70 · Updated 3 months ago
- 📚 FFPA (Split-D): yet another faster Flash Attention with O(1) GPU SRAM complexity for large headdim, 1.8x~3x faster than SDPA EA ☆168 · Updated 2 weeks ago
- Hand-written CUDA kernels and an interview guide ☆306 · Updated 3 months ago
- LLM theoretical performance analysis tool supporting parameter, FLOPs, memory, and latency analysis ☆82 · Updated 3 months ago
- ☆25 · Updated this week
- ☆121 · Updated last year
- ☆104 · Updated last month
- Summary of the specs of commonly used GPUs for LLM training and inference ☆40 · Updated last month
- Flash Attention implemented using CuTe ☆74 · Updated 4 months ago
- Tutorials for writing high-performance GPU operators in AI frameworks ☆130 · Updated last year
- A beginner's tutorial on model compression ☆268 · Updated 5 months ago
- ☆263 · Updated 6 months ago
- Decoding Attention, specially optimized for MHA, MQA, GQA, and MLA using CUDA cores in the decoding stage of LLM inference ☆36 · Updated 2 weeks ago
- Code & examples for "CUDA - From Correctness to Performance" ☆96 · Updated 5 months ago
- Some HPC projects for learning ☆21 · Updated 7 months ago
- A template for quickly creating a Python library ☆8 · Updated last month