Rayrtfr / FasterTransformer
Transformer-related optimization, including BERT and GPT
☆17 · Updated 2 years ago
Alternatives and similar repositories for FasterTransformer
Users interested in FasterTransformer are comparing it to the libraries listed below.
- ☆128 · Updated 8 months ago
- ☆140 · Updated last year
- Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency. ☆89 · Updated this week
- ☆79 · Updated last year
- ☢️ TensorRT Hackathon 2023 second round: inference acceleration and optimization for the Llama model based on TensorRT-LLM ☆50 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆39 · Updated 2 years ago
- ☆54 · Updated last week
- Simplify large (>2 GB) ONNX models ☆63 · Updated 8 months ago
- Transformer-related optimization, including BERT and GPT ☆59 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆263 · Updated 3 weeks ago
- OneFlow Serving ☆20 · Updated 4 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆104 · Updated 4 months ago
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- Models and examples built with OneFlow ☆98 · Updated 10 months ago
- ☆24 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Updated last year
- ☆90 · Updated 2 years ago
- ☆195 · Updated 3 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆40 · Updated 6 months ago
- ☆59 · Updated 9 months ago
- An inference framework for the llama model, implemented in CUDA C++ ☆60 · Updated 9 months ago
- ☆31 · Updated 6 months ago
- OneFlow->ONNX ☆43 · Updated 2 years ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆111 · Updated last year
- Export llama to ONNX ☆132 · Updated 7 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to ONNX/ONNX Runtime. ☆177 · Updated 4 months ago
- ☆15 · Updated last year
- A minimal flash-attention implementation written with cutlass, intended as a teaching reference ☆46 · Updated last year
- NVIDIA TensorRT Hackathon 2023 second-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆42 · Updated last year
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆138 · Updated this week