Transformer related optimization, including BERT, GPT
☆17 · Jul 29, 2023 · Updated 2 years ago
Alternatives and similar repositories for FasterTransformer
Users who are interested in FasterTransformer are comparing it to the libraries listed below.
- Transformer related optimization, including BERT, GPT ☆39 · Feb 10, 2023 · Updated 3 years ago
- Whisper in TensorRT-LLM ☆17 · Sep 21, 2023 · Updated 2 years ago
- APAR: LLMs Can Do Auto-Parallel Auto-Regressive Decoding ☆14 · Jul 22, 2024 · Updated last year
- ☆15 · Nov 11, 2020 · Updated 5 years ago
- qwen2 and llama3 cpp implementation ☆49 · Jun 7, 2024 · Updated last year
- ☆32 · Apr 19, 2025 · Updated 11 months ago
- DataFountain epidemic government-affairs Q&A assistant: solution write-up ☆16 · May 2, 2020 · Updated 5 years ago
- DataFountain epidemic government-affairs Q&A assistant competition. We divided this task into two parts: document retrieval and answer e… ☆14 · Aug 21, 2022 · Updated 3 years ago
- Study notes and materials for natural language processing (NLP) interview preparation, compiled by the authors from their own interviews and experience; currently a collection of interview questions across NLP subfields. ☆15 · Mar 9, 2021 · Updated 5 years ago
- ☆29 · Mar 27, 2023 · Updated 2 years ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆57 · Jul 23, 2024 · Updated last year
- ☆13 · May 25, 2023 · Updated 2 years ago
- A fast and customizable CUDA int4 tensor core GEMM ☆15 · Aug 2, 2024 · Updated last year
- ☆70 · Dec 9, 2022 · Updated 3 years ago
- Lets coding agents use ncu (Nsight Compute) skills to analyze CUDA programs automatically! ☆61 · Feb 5, 2026 · Updated last month
- High-performance RMSNorm implemented using SM core storage (registers and shared memory) ☆29 · Jan 22, 2026 · Updated last month
- TensorRT ☆11 · Sep 22, 2020 · Updated 5 years ago
- Tianchi NVIDIA TensorRT Hackathon 2023: third-place preliminary-round solution in the generative AI model optimization track ☆49 · Aug 16, 2023 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆477 · Mar 15, 2024 · Updated 2 years ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,070 · Updated this week
- ☆413 · Nov 11, 2023 · Updated 2 years ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API. ☆141 · Updated this week
- Companion code for the bilibili video series "Introduction to CUDA 12.x Parallel Programming (C++ Edition)" ☆33 · Aug 12, 2024 · Updated last year
- [ICML 2025] RocketKV: Accelerating Long-Context LLM Inference via Two-Stage KV Cache Compression ☆34 · Aug 7, 2025 · Updated 7 months ago
- ASR: end-to-end (end2end) speech recognition ☆12 · Oct 25, 2020 · Updated 5 years ago
- Repository for the MICCAI 2022 AutoPET challenge ☆14 · Sep 19, 2022 · Updated 3 years ago
- ☆128 · Dec 24, 2024 · Updated last year
- Code repository for the NeurIPS 2024 paper "Toward Efficient Inference for Mixture of Experts". ☆19 · Oct 30, 2024 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆13 · Oct 10, 2025 · Updated 5 months ago
- Code repo for "CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs". ☆16 · Sep 15, 2024 · Updated last year
- C# library for decoding K2 transducer models, used in speech recognition (ASR) ☆13 · Aug 20, 2025 · Updated 7 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆155 · Aug 21, 2025 · Updated 7 months ago
- Multi-cluster monitoring and alerting with Thanos sidecar + MinIO ☆15 · Feb 20, 2023 · Updated 3 years ago
- YOLOv9 TensorRT inference, Python backend ☆35 · Mar 16, 2024 · Updated 2 years ago
- A parallel VAE that avoids OOM in high-resolution image generation ☆89 · Mar 12, 2026 · Updated last week
- MICCAI 2022 paper reading notes: tasks and datasets ☆12 · Feb 6, 2023 · Updated 3 years ago
- ☆12 · May 20, 2020 · Updated 5 years ago
- Source code of the paper "KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing" ☆31 · Oct 24, 2024 · Updated last year
- PyTorch named-entity recognition with LSTM and LSTM_CRF ☆25 · Aug 16, 2019 · Updated 6 years ago
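
A recurring theme in the list above is optimized normalization kernels, such as the RMSNorm-in-registers/shared-memory entry. As background only (not code from any repository listed here), a minimal NumPy reference of RMSNorm, which those CUDA kernels accelerate, looks like:

```python
import numpy as np

def rmsnorm(x, weight, eps=1e-6):
    # RMSNorm scales each row by the reciprocal root-mean-square of its
    # features, then applies a learned per-feature gain. Unlike LayerNorm,
    # there is no mean subtraction and no bias term.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return (x / rms) * weight

x = np.array([[3.0, 4.0]])
w = np.ones(2)
out = rmsnorm(x, w)
# mean of squares is 12.5, RMS ≈ 3.5355, so out ≈ [[0.8485, 1.1314]]
```

The fused GPU versions keep the row in registers or shared memory so the squared sum, the rescale, and the gain multiply happen in a single kernel pass rather than three round trips to global memory.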