harleyszhang / lite_llama
A lightweight llama-style LLM inference framework built on Triton kernels.
☆122 · Updated this week
Alternatives and similar repositories for lite_llama
Users interested in lite_llama are comparing it to the libraries listed below.
- ☆23 · Updated 3 weeks ago
- A llama model inference framework implemented in CUDA C++ ☆57 · Updated 6 months ago
- Learning how CUDA works ☆264 · Updated 3 months ago
- LLM theoretical performance analysis tools supporting params, FLOPs, memory, and latency analysis ☆92 · Updated this week
- A great project for campus and internship recruiting: build, from scratch, an LLM inference framework supporting LLama2/3 and Qwen2.5 ☆361 · Updated 2 months ago
- A CUDA tutorial for learning CUDA programming from scratch ☆233 · Updated 10 months ago
- ☆134 · Updated last year
- 📚 FFPA (Split-D): extends FlashAttention with Split-D for large headdim; O(1) GPU SRAM complexity, 1.8x–3x faster than SDPA EA 🎉 ☆184 · Updated 3 weeks ago
- ☆276 · Updated 7 months ago
- Examples of CUDA implementations using CUTLASS CuTe ☆188 · Updated 4 months ago
- A hands-on guide to writing CUDA operators and interview preparation ☆400 · Updated 4 months ago
- A tutorial for CUDA & PyTorch ☆142 · Updated 4 months ago
- ☆58 · Updated 6 months ago
- A flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆370 · Updated 3 weeks ago
- How to learn PyTorch and OneFlow ☆433 · Updated last year
- ☆148 · Updated 4 months ago
- ☆139 · Updated last year
- A minimalist and extensible PyTorch extension for implementing custom backend operators in PyTorch ☆33 · Updated last year
- Courses on Bilibili ☆75 · Updated last year
- Triton Documentation in Simplified Chinese / Triton 中文文档 ☆71 · Updated last month
- Implements Flash Attention using CuTe ☆85 · Updated 5 months ago
- Tutorials for writing high-performance GPU operators in AI frameworks ☆130 · Updated last year
- ☆63 · Updated this week
- Optimized softmax in Triton for many cases ☆20 · Updated 8 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆37 · Updated 3 months ago
- ☆36 · Updated 7 months ago
- Easy CUDA code ☆73 · Updated 5 months ago
- ☆121 · Updated 5 months ago
- A layered, decoupled deep learning inference engine ☆73 · Updated 3 months ago
- ☢️ TensorRT 2023 competition final round: Llama model inference acceleration based on TensorRT-LLM ☆48 · Updated last year
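Several of the repos above (the Triton softmax optimizations and the flash attention tutorials) center on the numerically stable softmax used in attention. As a hedged reference point, here is a minimal NumPy sketch of that computation; it is not taken from any listed repo, just the standard max-subtraction trick that fused GPU kernels rely on:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax: subtract the row max before exponentiating
    so large logits do not overflow float range. Fused GPU kernels (e.g. a
    Triton softmax, or FlashAttention's online softmax) compute the same
    result while keeping the row resident in SRAM/registers."""
    x_max = np.max(x, axis=axis, keepdims=True)   # per-row maximum
    e = np.exp(x - x_max)                         # safe exponentiation
    return e / np.sum(e, axis=axis, keepdims=True)

# Without the max subtraction, the second row would overflow to inf/NaN.
logits = np.array([[1.0, 2.0, 3.0],
                   [1000.0, 1000.0, 1000.0]])
probs = softmax(logits)
```

Each row of `probs` sums to 1, and the all-equal row yields a uniform distribution instead of NaNs, which is the whole point of the stabilization.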