LLM theoretical performance analysis tools, supporting params, FLOPs, memory, and latency analysis.
☆115 · Jul 11, 2025 · Updated 7 months ago
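The headline metrics (params, FLOPs, memory, latency) follow standard back-of-the-envelope rules. A minimal sketch of the idea, assuming a decoder-only transformer with a 4x MLP expansion and the common ~2-FLOPs-per-parameter-per-token rule of thumb; this is illustrative only, not the llm_counts API:

```python
def transformer_params(n_layers: int, d_model: int, vocab: int) -> int:
    """Rough decoder-only parameter count: attention projections (4*d^2 for
    Q, K, V, O) plus MLP (8*d^2, assuming a 4x hidden expansion) per layer,
    plus token embeddings. Ignores norms and biases."""
    per_layer = 12 * d_model * d_model
    return n_layers * per_layer + vocab * d_model

def forward_flops_per_token(params: int) -> int:
    """Common rule of thumb: ~2 FLOPs per parameter per token
    (one multiply and one add in each matmul)."""
    return 2 * params

# Example with hypothetical 7B-class shapes (32 layers, d_model=4096, 32k vocab):
p = transformer_params(32, 4096, 32000)   # roughly 6.6e9 parameters
f = forward_flops_per_token(p)
```

Estimates like this ignore attention-score FLOPs and KV-cache traffic, which matter at long sequence lengths; dedicated analyzers account for them separately.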
Alternatives and similar repositories for llm_counts
Users interested in llm_counts are comparing it to the libraries listed below.
- A light llama-like LLM inference framework based on Triton kernels. ☆172 · Jan 5, 2026 · Updated 2 months ago
- How to learn PyTorch and OneFlow. ☆487 · Mar 22, 2024 · Updated last year
- A great project for campus and internship recruiting: build, from scratch, an LLM inference framework supporting LLaMA 2/3 and Qwen2.5. ☆507 · Oct 28, 2025 · Updated 4 months ago
- Latency and memory analysis of Transformer models for training and inference. ☆477 · Apr 19, 2025 · Updated 10 months ago
- A CUDA runtime environment built on the CUDA Driver API. ☆15 · Jul 30, 2025 · Updated 7 months ago
- A LLaMA model inference framework implemented in CUDA C++. ☆64 · Nov 8, 2024 · Updated last year
- Analyze the inference of large language models (LLMs): computation, storage, transmission, and hardware roofline mod… ☆620 · Sep 11, 2024 · Updated last year
- End-to-end YOLOv12 TensorRT accelerated inference and INT8 quantization. ☆13 · Mar 5, 2025 · Updated last year
- ☆13 · Jan 7, 2025 · Updated last year
- A general-purpose CNN accelerator for Xilinx FPGAs; designed for the KV260 board and portable to other MPSoC architectures. ☆18 · Dec 13, 2024 · Updated last year
- The original reference implementation of a llama.cpp backend for the Qualcomm Hexagon NPU on Android phones, https://github.com/ggml… ☆38 · Jul 14, 2025 · Updated 7 months ago
- ☆13 · Sep 19, 2024 · Updated last year
- LLM notes, including model inference, Transformer model structure, and LLM framework code analysis. ☆867 · Dec 10, 2025 · Updated 2 months ago
- ☆15 · Jun 22, 2025 · Updated 8 months ago
- 🤖FFPA: extends FlashAttention-2 with Split-D and ~O(1) SRAM complexity for large head dims; 1.8x–3x↑🎉 vs SDPA EA. ☆255 · Feb 13, 2026 · Updated 3 weeks ago
- Targets OpenCL GEMM performance optimization; compares several libraries: clBLAS, clBLAST, MIOpenGemm, Inte… ☆17 · Mar 28, 2019 · Updated 6 years ago
- JAX bindings for the flash-attention3 kernels. ☆22 · Jan 2, 2026 · Updated 2 months ago
- Several optimization methods for half-precision general matrix–vector multiplication (HGEMV) using CUDA cores. ☆73 · Sep 8, 2024 · Updated last year
- FlagGems is an operator library for large language models implemented in the Triton language. ☆909 · Updated this week
- Learning how CUDA works. ☆377 · Mar 3, 2025 · Updated last year
- 📚A curated list of awesome LLM/VLM inference papers with code: Flash-Attention, Paged-Attention, WINT8/4, parallelism, etc.🎉 ☆5,040 · Feb 27, 2026 · Updated last week
- Deep learning systems notes, covering mathematical foundations for deep learning, detailed explanations of basic neural network components, model-training strategies, and model compression algorithms. ☆513 · Dec 11, 2025 · Updated 2 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak⚡️ performance. ☆150 · May 10, 2025 · Updated 9 months ago
- How to optimize some algorithms in CUDA. ☆2,841 · Feb 28, 2026 · Updated last week
- ☆131 · Nov 11, 2024 · Updated last year
- ☆123 · Updated this week
- [TRETS 2025][FPGA 2024] An FPGA accelerator for imbalanced SpMV using HLS. ☆20 · Aug 24, 2025 · Updated 6 months ago
- NVIDIA TensorRT Hackathon 2023 semifinal-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM. ☆42 · Oct 20, 2023 · Updated 2 years ago
- ☆315 · Oct 9, 2024 · Updated last year
- An LLM serving cluster simulator. ☆135 · Apr 25, 2024 · Updated last year
- The SORT multi-object tracking algorithm, based on Hungarian matching and Kalman filtering. ☆19 · Mar 10, 2023 · Updated 2 years ago
- A layered, decoupled deep learning inference engine. ☆79 · Feb 17, 2025 · Updated last year
- Open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity". ☆30 · Nov 12, 2024 · Updated last year
- 📚LeetCUDA: modern CUDA learning notes with PyTorch for beginners🐑; 200+ CUDA kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 ☆9,815 · Feb 25, 2026 · Updated last week
- A highly optimized LLM inference acceleration engine for Llama and its variants. ☆906 · Updated this week
- ☆49 · Sep 5, 2020 · Updated 5 years ago
- PointPillars, TensorRT version, pretrained on MMDetection3d with the Waymo Open Dataset. ☆21 · Aug 11, 2022 · Updated 3 years ago
- LLM inference with a deep learning accelerator. ☆59 · Jan 23, 2025 · Updated last year
- A great project for campus and internship recruiting! Build from scratch a high-performance deep learning inference library supporting inference for LLaMA 2, UNet, YOLOv5, ResNet, and other models. ☆3,345 · Jun 22, 2025 · Updated 8 months ago
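Several of the analyzers listed above lean on the hardware roofline model to decide whether an LLM workload is compute-bound or bandwidth-bound. A minimal sketch of that idea; the device numbers in the example are illustrative assumptions, not measurements:

```python
def roofline_attainable_flops(peak_flops: float, mem_bw_bytes: float,
                              arithmetic_intensity: float) -> float:
    """Roofline model: attainable throughput is the lower of the compute roof
    and the memory roof (bandwidth * arithmetic intensity, in FLOPs/byte)."""
    return min(peak_flops, mem_bw_bytes * arithmetic_intensity)

# Example with hypothetical A100-like numbers: 312 TFLOP/s FP16 peak, 2 TB/s HBM.
# Decode-phase GEMV has an arithmetic intensity near 1 FLOP/byte, so it sits on
# the bandwidth roof (2e12 FLOP/s), far below the 312e12 compute roof:
decode = roofline_attainable_flops(312e12, 2.0e12, 1.0)
# A large-batch prefill GEMM with high intensity instead hits the compute roof:
prefill = roofline_attainable_flops(312e12, 2.0e12, 1000.0)
```

This single `min` explains why batching and quantization help decode latency: both raise effective arithmetic intensity, moving the workload toward the compute roof.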