abetlen / ggml-python
Python bindings for ggml
☆146 · Updated 11 months ago
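For a sense of what the library provides: the bindings expose the upstream ggml C API roughly one-to-one, so a small compute graph can be built and evaluated directly from Python. The snippet below is a minimal illustrative sketch, not the project's documented example; the exact graph-building calls (e.g. `ggml_new_graph` / `ggml_build_forward_expand`) depend on the ggml version the bindings are pinned to.

```python
import ggml

# Allocate a ggml context backed by a fixed-size scratch buffer (16 MB here).
params = ggml.ggml_init_params(mem_size=16 * 1024 * 1024, mem_buffer=None)
ctx = ggml.ggml_init(params)

# Build a tiny graph computing f = a * x^2 + b on 1-element f32 tensors.
x = ggml.ggml_new_tensor_1d(ctx, ggml.GGML_TYPE_F32, 1)
a = ggml.ggml_new_tensor_1d(ctx, ggml.GGML_TYPE_F32, 1)
b = ggml.ggml_new_tensor_1d(ctx, ggml.GGML_TYPE_F32, 1)
f = ggml.ggml_add(ctx, ggml.ggml_mul(ctx, a, ggml.ggml_mul(ctx, x, x)), b)

# Register the output in a compute graph (API names may vary by ggml version).
gf = ggml.ggml_new_graph(ctx)
ggml.ggml_build_forward_expand(gf, f)

# Set inputs, run the graph on a single thread, and read the result back.
ggml.ggml_set_f32(x, 2.0)
ggml.ggml_set_f32(a, 3.0)
ggml.ggml_set_f32(b, 4.0)
ggml.ggml_graph_compute_with_ctx(ctx, gf, 1)
print(ggml.ggml_get_f32_1d(f, 0))  # expect 3 * 2^2 + 4 = 16.0

# Contexts are manually managed, mirroring the C API.
ggml.ggml_free(ctx)
```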
Alternatives and similar repositories for ggml-python
Users interested in ggml-python are comparing it to the libraries listed below.
- Inference of Mamba models in pure C · ☆190 · Updated last year
- LLM-based code completion engine · ☆194 · Updated 7 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" · ☆154 · Updated 10 months ago
- RWKV in nanoGPT style · ☆192 · Updated last year
- Inference Vision Transformer (ViT) in plain C/C++ with ggml · ☆292 · Updated last year
- SoTA Transformers with C-backend for fast inference on your CPU · ☆309 · Updated last year
- CLIP inference in plain C/C++ with no extra dependencies · ☆516 · Updated 2 months ago
- Python bindings for llama.cpp · ☆199 · Updated 2 years ago
- LLaVA server (llama.cpp) · ☆181 · Updated last year
- GGML implementation of BERT model with Python bindings and quantization · ☆57 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" · ☆277 · Updated last year
- FlashAttention (Metal Port) · ☆520 · Updated 11 months ago
- ☆552 · Updated 9 months ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies · ☆313 · Updated last year
- GGUF implementation in C as a library and a tools CLI program · ☆283 · Updated 7 months ago
- ggml implementation of BERT · ☆493 · Updated last year
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code · ☆73 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆266 · Updated 10 months ago
- An innovative library for efficient LLM inference via low-bit quantization · ☆349 · Updated 11 months ago
- Inference code for mixtral-8x7b-32kseqlen · ☆101 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" · ☆377 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support · ☆247 · Updated last year
- Embed arbitrary modalities (images, audio, documents, etc.) into large language models · ☆186 · Updated last year
- llama.cpp to PyTorch Converter · ☆34 · Updated last year
- Official implementation of Half-Quadratic Quantization (HQQ) · ☆864 · Updated last week
- Experiments with BitNet inference on CPU · ☆54 · Updated last year
- 1.58-bit LLaMa model · ☆82 · Updated last year
- Merge Transformers language models by using gradient parameters · ☆206 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget · ☆157 · Updated 2 weeks ago
- GPTQ inference Triton kernel · ☆306 · Updated 2 years ago