OpenBMB / cpm_kernels
☆23 · Updated last year
Alternatives and similar repositories for cpm_kernels:
Users interested in cpm_kernels are comparing it to the libraries listed below.
- Manages vllm-nccl dependency ☆17 · Updated 8 months ago
- ☆18 · Updated 9 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆16 · Updated 8 months ago
- ☆76 · Updated last year
- Transformer related optimization, including BERT, GPT ☆39 · Updated 2 years ago
- A more efficient GLM implementation! ☆55 · Updated last year
- ☆14 · Updated 10 months ago
- Models and examples built with OneFlow ☆96 · Updated 4 months ago
- Inference framework for MoE layers based on TensorRT with Python binding ☆41 · Updated 3 years ago
- OneFlow Serving ☆20 · Updated last month
- ☆127 · Updated last month
- Another ChatGLM2 implementation for GPTQ quantization ☆54 · Updated last year
- ☆59 · Updated last week
- Transformer related optimization, including BERT, GPT ☆17 · Updated last year
- Transformer related optimization, including BERT, GPT ☆59 · Updated last year
- Whisper in TensorRT-LLM ☆15 · Updated last year
- ☆65 · Updated 2 months ago
- GPTQ inference TVM kernel ☆38 · Updated 9 months ago
- A llama model inference framework implemented in CUDA C++ ☆44 · Updated 3 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks ☆93 · Updated 11 months ago
- ☆64 · Updated 2 months ago
- A text generation method that returns a generator, streaming out each token in real time during inference; based on Huggingface/… ☆96 · Updated 11 months ago
- ☆140 · Updated 9 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆64 · Updated 7 months ago
- ☆172 · Updated 4 months ago
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆56 · Updated last year
- ☆12 · Updated last year
- implement bert in pure c++ ☆36 · Updated 4 years ago
- ☢️ TensorRT Hackathon 2023 final round: Llama model inference acceleration and optimization based on TensorRT-LLM ☆44 · Updated last year
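One of the repositories above streams generated text by returning a Python generator that yields each token as it is decoded, instead of blocking until the full sequence is ready. A minimal, model-free sketch of that pattern is below; `toy_decode_step` is a hypothetical stand-in for a real per-token model forward pass and is not part of any of the listed projects:

```python
from typing import Iterator, List

def toy_decode_step(prompt_tokens: List[str], generated: List[str]) -> str:
    # Hypothetical stand-in for one model forward pass: a real
    # implementation would run the LLM and sample the next token.
    vocab = ["Hello", ",", " world", "!"]
    return vocab[len(generated)] if len(generated) < len(vocab) else "<eos>"

def stream_generate(prompt: str, max_new_tokens: int = 16) -> Iterator[str]:
    """Yield tokens one at a time so callers can render them in real time."""
    prompt_tokens = prompt.split()
    generated: List[str] = []
    for _ in range(max_new_tokens):
        token = toy_decode_step(prompt_tokens, generated)
        if token == "<eos>":  # stop token ends generation early
            break
        generated.append(token)
        yield token  # the caller sees this token immediately

# Consume the generator as tokens arrive:
pieces = list(stream_generate("Say hi"))
print("".join(pieces))  # → Hello, world!
```

Because `stream_generate` is a generator, a UI or server loop can iterate over it and flush each token to the client as soon as it is produced, which is what gives the real-time streaming effect.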