sunkx109 / llama.cpp
Llama 2 inference
☆43 · Updated 2 years ago
Alternatives and similar repositories for llama.cpp
Users interested in llama.cpp are comparing it to the libraries listed below.
- ☆21 · Updated 4 years ago
- ☆152 · Updated last year
- A layered and decoupled deep learning inference engine ☆79 · Updated 11 months ago
- A tutorial for CUDA & PyTorch ☆227 · Updated last week
- ☆26 · Updated 5 months ago
- A llama model inference framework implemented in CUDA C++ ☆64 · Updated last year
- ☆60 · Updated last year
- A simple Transformer model implemented in C++ (Attention Is All You Need) ☆53 · Updated 4 years ago
- ☆145 · Updated last year
- ☆98 · Updated 4 years ago
- ☆27 · Updated last year
- Code and notes for the six major CUDA parallel computing patterns ☆61 · Updated 5 years ago
- ☆38 · Updated last year
- Efficient operator implementations based on the Cambricon Machine Learning Unit (MLU). ☆150 · Updated 2 weeks ago
- Course materials from Bilibili ☆82 · Updated 2 years ago
- ☆141 · Updated last year
- ☆130 · Updated last year
- ☆21 · Updated last year
- ☆19 · Updated last year
- ☆26 · Updated 2 years ago
- ☆120 · Updated last year
- Triton Documentation in Simplified Chinese / Triton 中文文档 ☆102 · Updated last month
- Solutions for Programming Massively Parallel Processors, 2nd Edition ☆35 · Updated 3 years ago
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆136 · Updated 2 years ago
- ☆118 · Updated 10 months ago
- mperf is an operator performance tuning toolbox for mobile/embedded platforms ☆193 · Updated 2 years ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · Updated 4 months ago
- LLM theoretical performance analysis tools supporting parameter, FLOPs, memory, and latency analysis ☆115 · Updated 6 months ago
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library ☆78 · Updated last year
- A simplified flash-attention implementation built with CUTLASS, intended for teaching ☆54 · Updated last year