harleyszhang / lite_llama
A lightweight LLaMA-like LLM inference framework based on Triton kernels.
☆128 · Updated last week
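The tagline refers to GPU kernels written in OpenAI's Triton, a Python DSL for authoring GPU code. As a purely illustrative sketch of what such a kernel looks like (this is a generic example, not code from the lite_llama repository), here is a minimal Triton vector-add kernel and its launch wrapper:

```python
# Illustrative only: a minimal Triton kernel of the kind Triton-based
# inference frameworks compose into larger ops (matmul, attention, norms).
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the tail of the vector
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # x and y must be CUDA tensors of the same shape.
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)  # one program per 1024-element block
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```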
Alternatives and similar repositories for lite_llama
Users interested in lite_llama are comparing it to the libraries listed below.
- ☆28 · Updated last month
- Learning how CUDA works ☆271 · Updated 3 months ago
- A llama model inference framework implemented in CUDA C++ ☆57 · Updated 7 months ago
- An LLM theoretical performance analysis tool, supporting analysis of parameter counts, FLOPs, memory, and latency. ☆96 · Updated last week
- A great project for campus recruitment (autumn/spring hiring) and internships, walking you through implementing, from scratch, an LLM inference framework that supports LLama2/3 and Qwen2.5. ☆369 · Updated this week
- A CUDA tutorial for learning CUDA programming from scratch ☆234 · Updated 11 months ago
- ☆135 · Updated last year
- ☆278 · Updated 8 months ago
- A tutorial for CUDA & PyTorch ☆146 · Updated 5 months ago
- ☆58 · Updated 7 months ago
- A layered, decoupled deep learning inference engine ☆73 · Updated 4 months ago
- Examples of CUDA implementations using CUTLASS CuTe ☆197 · Updated 4 months ago
- Hand-written CUDA operators and an interview guide ☆426 · Updated 5 months ago
- How to learn PyTorch and OneFlow ☆435 · Updated last year
- ☆22 · Updated 3 months ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆377 · Updated last month
- A minimalist and extensible PyTorch extension for implementing custom backend operators in PyTorch. ☆33 · Updated last year
- ⚡️FFPA: Extends FlashAttention-2 with Split-D, achieving ~O(1) SRAM complexity for large head dims, 1.8x~3x↑ vs SDPA. ☆186 · Updated last month
- Course materials from Bilibili (b站) ☆76 · Updated last year
- ☆36 · Updated 8 months ago
- ☆148 · Updated 5 months ago
- ☆123 · Updated 6 months ago
- ☆139 · Updated last year
- Implements Flash Attention using CuTe. ☆87 · Updated 6 months ago
- A high-performance computing course, CUDA programming examples, and a deep learning inference framework ☆49 · Updated last year
- EasyNN is a neural network inference framework built for teaching, aiming to let anyone write an inference framework on their own, even with zero prior background! ☆31 · Updated 10 months ago
- Code implementations for "Professional CUDA C Programming" (CUDA C 编程权威指南), covering most of the code from chapters 2 through 8 along with the author's notes, all implemented by hand by the author; errors are inevitable, so please refer to it with care, and corrections are very welcome. If it helps, please give it a star; it means a lot to the author, thanks! ☆345 · Updated 2 years ago
- 📚 200+ Tensor/CUDA Cores kernels, ⚡️flash-attn-mma, ⚡️hgemm with WMMA, MMA and CuTe (98%~100% TFLOPS of cuBLAS/FA2 🎉🎉). ☆26 · Updated 2 months ago
- ☆69 · Updated this week
- Performance of the C++ interfaces of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆38 · Updated 3 months ago