Transformer-related optimization, including BERT, GPT (☆59, updated Sep 20, 2023)
Alternatives and similar repositories for FasterTransformer
Users interested in FasterTransformer are comparing it to the libraries listed below.
- (no description) (☆22, updated Jul 11, 2023)
- Transformer-related optimization, including BERT, GPT (☆14, updated Jun 27, 2023)
- (no description) (☆413, updated Nov 11, 2023)
- QQQ, a hardware-optimized W4A8 quantization solution for LLMs (☆155, updated Aug 21, 2025)
- Transformer-related optimization, including BERT, GPT (☆39, updated Feb 10, 2023)
- [NeurIPS 2025 @ FoRLM] R1-Compress: Long Chain-of-Thought Compression via Chunk Compression and Search (☆17, updated Jan 24, 2026)
- Transformer-related optimization, including BERT, GPT (☆6,397, updated Mar 27, 2024)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆17, updated Jun 3, 2024)
- [NeurIPS 2025] Multipole Attention for Efficient Long Context Reasoning (☆22, updated Dec 5, 2025)
- Flexible-GEMM convolution from deepcore (☆17, updated Dec 2, 2019)
- (no description) (☆16, updated Mar 30, 2024)
- A tiny project for ASR model training and deployment (☆26, updated Oct 14, 2022)
- (no description) (☆128, updated Dec 24, 2024)
- Several classic case studies in AI, ML, data analysis, and data visualization, including traffic-jam simulation (Nagel–Schreckenberg), a Monte Carlo queuing problem, face recognition (RecognitionFace), and genetic-algorithm image approximation (IconGenetic) (☆10, updated Oct 14, 2018)
- (no description) (☆12, updated Oct 9, 2023)
- Transformer-related optimization, including BERT, GPT (☆17, updated Jul 29, 2023)
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 (☆477, updated Mar 15, 2024)
- Vortex particles for simulating smoke in 2D (☆16, updated Dec 13, 2021)
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo (☆17, updated Mar 13, 2023)
- High-speed GEMV kernels with up to 2.7× speedup over the PyTorch baseline (☆128, updated Jul 13, 2024)
- (no description) (☆150, updated Jan 9, 2025)
- Yet another academic homepage builder (☆25, updated Jul 1, 2020)
- [NeurIPS 2024] Search for Efficient LLMs (☆16, updated Jan 16, 2025)
- (no description) (☆42, updated Nov 29, 2022)
- Memory-footprint reduction for transformer models (☆11, updated Jan 24, 2023)
- CUDA project for a university course (☆26, updated Oct 26, 2020)
- Official repo for "Differentiable Model Scaling using Differentiable Topk" (☆12, updated May 16, 2024)
- PyTorch implementation of the ICML 2023 paper "Bi-directional Masks for Efficient N:M Sparse Training" (☆13, updated Jun 7, 2023)
- Common source, scripts, and utilities for creating Triton backends (☆369, updated Mar 10, 2026)
- Implementation of LLaVA using Candle (☆15, updated Jun 9, 2024)
- Implementation of algorithms for memory-optimized deep neural network training (☆10, updated Jul 23, 2020)
- Unofficial C++ implementation of MEC: Memory-efficient Convolution for Deep Neural Network (ICML 2017) (☆17, updated Apr 9, 2019)
- Making AI & LLM app components reusable, replaceable, portable, and flexible (☆24, updated Apr 28, 2024)
- LMDeploy, a toolkit for compressing, deploying, and serving LLMs (☆7,694, updated Mar 13, 2026)
- New-word discovery and mining based on degree-of-freedom and cohesion measures, in Python 3 (☆10, updated May 28, 2019)
- Official implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration (☆30, updated Nov 22, 2025)
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… (☆197, updated Feb 27, 2026)
- (no description) (☆169, updated Feb 5, 2026)
- (no description) (☆120, updated Apr 11, 2024)