Oneflow-Inc / diffusers
☆15 · Updated 7 months ago
Related projects:
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆15 · Updated 3 months ago
- ☆32 · Updated 6 months ago
- ☆23 · Updated 11 months ago
- NVIDIA TensorRT Hackathon 2023 semifinal topic: building and optimizing Tongyi Qianwen Qwen-7B with TensorRT-LLM ☆39 · Updated 11 months ago
- zero: training and tuning LLMs from scratch ☆30 · Updated last year
- ☆123 · Updated 9 months ago
- OneFlow Serving ☆20 · Updated 7 months ago
- ☆25 · Updated 4 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆121 · Updated 3 months ago
- OneFlow->ONNX ☆41 · Updated last year
- Datasets, Transforms and Models specific to Computer Vision ☆82 · Updated 10 months ago
- Hands-on large-model deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆25 · Updated 6 months ago
- Decoding Attention: specially optimized for multi-head attention (MHA) using CUDA cores in the decoding stage of LLM inference ☆14 · Updated this week
- Tianchi NVIDIA TensorRT Hackathon 2023, Generative AI Model Optimization track: third-place solution in the preliminary round ☆47 · Updated last year
- ☆33 · Updated last year
- A light proxy solution for the HuggingFace hub ☆43 · Updated 10 months ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆35 · Updated 4 months ago
- Run ChatGLM2-6B on BM1684X ☆48 · Updated 6 months ago
- The first fully commercially usable character role-play large model ☆31 · Updated last month
- mllm-npu: training multimodal large language models on Ascend NPUs ☆77 · Updated 3 weeks ago
- Qwen2 and Llama 3 C++ implementation ☆34 · Updated 3 months ago
- A simple MLLM that surpasses QwenVL-Max using only open-source data, built on a 14B LLM ☆35 · Updated last week
- Whisper in TensorRT-LLM ☆14 · Updated last year
- Simplify ONNX models larger than 2 GB ☆41 · Updated 6 months ago
- ☆79 · Updated this week
- ☆105 · Updated last week
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆185 · Updated 3 weeks ago
- ☆70 · Updated 9 months ago
- ☆18 · Updated 8 months ago