LLM Inference with Deep Learning Accelerator.
☆60 · Jan 23, 2025 · Updated last year
Alternatives and similar repositories for LLM-Inference-Acceleration
Users interested in LLM-Inference-Acceleration are comparing it to the libraries listed below.
- Introduces AI infrastructure knowledge (an AI systems infrastructure knowledge base) ☆16 · Jun 4, 2023 · Updated 2 years ago
- The Unified TileLink Memory Subsystem Tester for XiangShan ☆14 · Apr 9, 2026 · Updated last week
- GPU-accelerated LLM Training Simulator ☆18 · Jun 26, 2025 · Updated 9 months ago
- Multiple GEMM operators built with CUTLASS to support LLM inference ☆20 · Aug 3, 2025 · Updated 8 months ago
- Southeast University Computer Organization and Architecture II course project ☆13 · May 18, 2020 · Updated 5 years ago
- SEU COA Experimental Course CPU Simulation Code (Southeast University Computer Organization and Architecture II course project) ☆19 · May 19, 2021 · Updated 4 years ago
- ☆13 · May 12, 2025 · Updated 11 months ago
- SEU computer architecture project: CPU & POC simulation with Verilog HDL (Southeast University School of Information Science, Computer Organization Principles course design) ☆44 · Feb 17, 2024 · Updated 2 years ago
- Runs the tokenizer in parallel to achieve substantial acceleration ☆20 · Mar 21, 2024 · Updated 2 years ago
- Awesome code, projects, books, etc. related to CUDA ☆32 · Mar 30, 2026 · Updated 2 weeks ago
- Reproducing R1 for Code with Reliable Rewards ☆12 · Apr 9, 2025 · Updated last year
- A fork of XiangShan for AI ☆40 · Apr 8, 2026 · Updated last week
- ☆13 · Feb 1, 2024 · Updated 2 years ago
- A throughput-oriented, high-performance serving framework for LLMs ☆952 · Mar 29, 2026 · Updated 3 weeks ago
- Compares different hardware platforms via the Roofline Model for LLM inference tasks ☆118 · Mar 13, 2024 · Updated 2 years ago
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of papers… ☆282 · Mar 6, 2025 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆273 · Aug 6, 2025 · Updated 8 months ago
- Artifact for the paper "PIM is All You Need: A CXL-Enabled GPU-Free System for LLM Inference", ASPLOS 2025 ☆130 · May 3, 2025 · Updated 11 months ago
- Course projects for Stanford CS142 Web Applications ☆10 · Oct 15, 2016 · Updated 9 years ago
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API ☆36 · Sep 15, 2023 · Updated 2 years ago
- Solutions for Programming Massively Parallel Processors, 2nd Edition ☆36 · Jun 4, 2022 · Updated 3 years ago
- An agent for CUDA compute-communication kernel co-design ☆34 · Mar 24, 2026 · Updated 3 weeks ago
- Cyclone Jet Rocket is a DDoS tool for a System Security Technology course ☆11 · Jun 5, 2017 · Updated 8 years ago
- A simple neural network in C++17 using the Eigen library, supporting both forward and backward propagation ☆11 · Jul 27, 2024 · Updated last year
- Deduplication over disaggregated memory for serverless computing ☆14 · Mar 21, 2022 · Updated 4 years ago
- Source code analysis of LangGraph's deepagent ☆16 · Jan 1, 2026 · Updated 3 months ago
- LLM theoretical performance analysis tools supporting parameter, FLOPs, memory, and latency analysis ☆117 · Jul 11, 2025 · Updated 9 months ago
- Secure and scalable federated learning using serverless computing ☆12 · Jan 31, 2024 · Updated 2 years ago
- An NCCL extension library designed to efficiently offload GPU memory allocated by the NCCL communication library ☆105 · Dec 17, 2025 · Updated 4 months ago
- [ICML 2025] Efficiently Serving Large Multimodal Models Using EPD Disaggregation ☆24 · May 29, 2025 · Updated 10 months ago
- SIGCOMM 2021 artifact ☆12 · Jul 27, 2024 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference ☆46 · Jun 11, 2025 · Updated 10 months ago
- 📚 A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. 🎉 ☆5,144 · Apr 9, 2026 · Updated last week
- Mathematical expression evaluator with just-in-time code generation ☆12 · Apr 7, 2013 · Updated 13 years ago
- ☆10 · Jul 5, 2023 · Updated 2 years ago
- ☆31 · May 28, 2024 · Updated last year
- Kubernetes device plugin for Biren GPU ☆11 · Oct 17, 2024 · Updated last year
- ☆31 · Dec 31, 2025 · Updated 3 months ago
- ☆14 · Apr 8, 2025 · Updated last year
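Several entries above (the roofline-model comparison and the theoretical performance analysis tools) revolve around the same idea: attainable throughput is the minimum of peak compute and memory bandwidth times arithmetic intensity. A minimal sketch of that calculation, with hypothetical hardware numbers (not measurements of any real accelerator):

```python
def roofline_flops(peak_flops: float, mem_bw: float, intensity: float) -> float:
    """Attainable FLOP/s = min(peak compute, memory bandwidth * arithmetic intensity)."""
    return min(peak_flops, mem_bw * intensity)

# Hypothetical accelerator: 100 TFLOP/s peak compute, 1 TB/s memory bandwidth.
PEAK, BW = 100e12, 1e12

# LLM decode at batch 1 is a GEMV: each weight byte is read once for ~1 FLOP,
# so arithmetic intensity is around 1 FLOP/byte -> bandwidth-bound.
decode = roofline_flops(PEAK, BW, intensity=1.0)

# Prefill GEMMs reuse each weight across many tokens, so intensity is far
# higher (300 FLOP/byte here is an illustrative value) -> compute-bound.
prefill = roofline_flops(PEAK, BW, intensity=300.0)

print(f"decode:  {decode:.1e} FLOP/s")   # limited by bandwidth
print(f"prefill: {prefill:.1e} FLOP/s")  # limited by peak compute
```

This is why the decode stage of LLM inference is usually bandwidth-bound while prefill can saturate the compute units, and why the kernels listed above target the two stages differently.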
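The "tokenizer in parallel" entry exploits the fact that documents tokenize independently. A hedged sketch of that pattern; the whitespace tokenizer below is a hypothetical stand-in for a real subword tokenizer, and all names here are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def tokenize(doc: str) -> list:
    # Placeholder tokenizer; a real one would perform BPE/WordPiece segmentation.
    return doc.lower().split()

def tokenize_corpus(docs, workers: int = 4):
    # Documents are independent, so they can be tokenized concurrently.
    # Threads pay off when the underlying tokenizer releases the GIL
    # (e.g., a native-code implementation); otherwise use a process pool.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(tokenize, docs))

print(tokenize_corpus(["Hello World", "LLM inference"]))
# → [['hello', 'world'], ['llm', 'inference']]
```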