A lightweight LLaMA-like LLM inference framework built on Triton kernels.
☆184 · Jan 5, 2026 · Updated 4 months ago
Alternatives and similar repositories for lite_llama
Users interested in lite_llama are comparing it to the libraries listed below.
- ☆49 · Mar 4, 2026 · Updated 2 months ago
- LLM theoretical performance analysis tools, supporting params, FLOPs, memory, and latency analysis. ☆117 · Jul 11, 2025 · Updated 9 months ago
- A great project for campus recruiting, autumn/spring hiring, and internships: build, from scratch, an LLM inference framework supporting LLaMA 2/3 and Qwen2.5. ☆532 · Oct 28, 2025 · Updated 6 months ago
- LLM notes, covering model inference, Transformer model structure, and LLM framework code analysis. ☆880 · Apr 16, 2026 · Updated 3 weeks ago
- Several projects built with the OpenCV 3 image-processing library, including face-position detection, face effects, and overlaying a logo above the head. ☆11 · Oct 31, 2022 · Updated 3 years ago
- A lightweight large-model inference framework. ☆23 · May 26, 2025 · Updated 11 months ago
- A great project for campus recruiting, autumn/spring hiring, and internships! Build a high-performance deep learning inference library from scratch, supporting inference for large models such as LLaMA 2, UNet, YOLOv5, and ResNet. ☆3,415 · Jun 22, 2025 · Updated 10 months ago
- 68th-place solution in the Kaggle Humpback Whale Identification competition. ☆11 · Jul 6, 2023 · Updated 2 years ago
- FlagGems is an operator library for large language models implemented in the Triton language. ☆981 · Updated this week
- 📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 ☆10,865 · Updated this week
- A LLaMA model inference framework implemented in CUDA C++. ☆65 · Nov 8, 2024 · Updated last year
- Deploys the NanoDet detection algorithm under the OpenVINO inference framework, with rewritten pre- and post-processing for very high performance, making detection fly on Intel CPU platforms! The model is also quantized (PTQ) to int8 with NNCF and PPQ for even faster inference. ☆16 · Jun 14, 2023 · Updated 2 years ago
- Deep learning systems notes, covering mathematical foundations of deep learning, detailed explanations of basic neural network components, training strategies, and model compression algorithms. ☆517 · Dec 11, 2025 · Updated 4 months ago
- ☆144 · Mar 5, 2026 · Updated 2 months ago
- An easy-to-use and high-performance AI deployment framework. ☆1,801 · Apr 25, 2026 · Updated 2 weeks ago
- A single-file educational implementation for understanding vLLM's core concepts and running LLM inference. ☆43 · Apr 7, 2026 · Updated last month
- A CUDA tutorial for learning CUDA programming from scratch. ☆279 · Jul 9, 2024 · Updated last year
- FFPA: Extends FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large head dims, 1.8x~3x↑🎉 vs SDPA. ☆277 · Updated this week
- How to learn PyTorch and OneFlow. ☆497 · Mar 22, 2024 · Updated 2 years ago
- LLM Inference Engine: a high-performance CUDA-accelerated framework for large language model inference. A cutting-edge, open-source impleme… ☆11 · Sep 29, 2024 · Updated last year
- From Minimal GEMM to Everything. ☆202 · Feb 10, 2026 · Updated 2 months ago
- How to optimize algorithms in CUDA.