LDLINGLINGLING / nano_vllm_note
An annotated nano_vllm repository, with a completed MiniCPM4 adaptation and support for registering new models
☆91 · Updated 3 months ago
Alternatives and similar repositories for nano_vllm_note
Users interested in nano_vllm_note are comparing it to the libraries listed below:
- LLM theoretical performance analysis tools, supporting params, FLOPs, memory, and latency analysis. ☆112 · Updated 4 months ago
- A llama model inference framework implemented in CUDA C++. ☆62 · Updated last year
- A light llama-like LLM inference framework based on the Triton kernel. ☆161 · Updated last month
- From Minimal GEMM to Everything ☆73 · Updated this week
- Examples of CUDA implementations with CUTLASS CuTe ☆247 · Updated 4 months ago
- Implements Flash Attention using CuTe. ☆96 · Updated 10 months ago
- Summary of some awesome work for optimizing LLM inference ☆135 · Updated last week
- ☆110 · Updated 5 months ago
- ☆143 · Updated last year
- Hands-on model tuning with TVM, profiled on a Mac M1, an x86 CPU, and a GTX-1080 GPU. ☆50 · Updated 2 years ago
- Codes & examples for "CUDA - From Correctness to Performance"☆115Updated last year
- A PyTorch-like deep learning framework. Just for fun.☆156Updated 2 years ago
- ☆149Updated 8 months ago
- High performance Transformer implementation in C++.☆141Updated 9 months ago
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA.☆226Updated 3 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆286 · Updated 5 months ago
- ☆148 · Updated 4 months ago
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆87 · Updated 5 months ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆78 · Updated this week
- Learning how CUDA works ☆338 · Updated 8 months ago
- A stripped-down flash-attention implementation using CUTLASS, intended as a teaching example ☆50 · Updated last year
- LLM Inference via Triton (Flexible & Modular): Focused on Kernel Optimization using CUBIN binaries, Starting from gpt-oss Model ☆56 · Updated 3 weeks ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS (a minimal sketch of the underlying online-softmax trick follows this list). ☆443 · Updated 6 months ago
- ☆268 · Updated 2 weeks ago
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆43 · Updated last month
- Triton Documentation in Chinese Simplified / Triton 中文文档 ☆90 · Updated 7 months ago
- Optimized softmax in Triton for many cases (a Triton softmax kernel sketch also follows this list). ☆21 · Updated last year
- Puzzles for learning Triton, play it with minimal environment configuration! ☆561 · Updated last month
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of pap… ☆278 · Updated 8 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation.☆114Updated 5 months ago