OpenBMB / cpm_kernels
☆24 · Updated last year
Alternatives and similar repositories for cpm_kernels
Users interested in cpm_kernels are comparing it to the libraries listed below.
- ☆79 · Updated last year
- Transformer related optimization, including BERT, GPT ☆39 · Updated 2 years ago
- A more efficient GLM implementation! ☆55 · Updated 2 years ago
- Transformer related optimization, including BERT, GPT ☆17 · Updated 2 years ago
- ☆128 · Updated 7 months ago
- ☆19 · Updated last year
- Transformer related optimization, including BERT, GPT ☆59 · Updated last year
- Models and examples built with OneFlow ☆98 · Updated 9 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Updated last year
- Another ChatGLM2 implementation for GPTQ quantization ☆54 · Updated last year
- Inference framework for MoE layers based on TensorRT with Python binding ☆41 · Updated 4 years ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆263 · Updated last week
- ☆23 · Updated 6 months ago
- This is a text generation method which returns a generator, streaming out each token in real-time during inference, based on Huggingface/… ☆95 · Updated last year
- Manages vllm-nccl dependency ☆17 · Updated last year
- ☢️ TensorRT 2023 competition, second round: inference acceleration and optimization of the Llama model based on TensorRT-LLM ☆50 · Updated last year
- ☆92 · Updated 4 months ago
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆57 · Updated last year
- Implement BERT in pure C++ ☆35 · Updated 5 years ago
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- ☆124 · Updated last year
- Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency. ☆31 · Updated last week
- ☆16 · Updated last week
- ☆195 · Updated 3 months ago
- ☆120 · Updated last year
- ☆52 · Updated last week
- A unified tokenization tool for Images, Chinese and English. ☆151 · Updated 2 years ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆11 · Updated last year
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆246 · Updated last year
- A memory efficient DLRM training solution using ColossalAI ☆105 · Updated 2 years ago