abetlen / ggml-python
Python bindings for ggml
☆141 · Updated 9 months ago
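For context, this is roughly what driving ggml through these bindings looks like: a minimal sketch, assuming the low-level ctypes-style API that mirrors ggml's C functions (names such as `ggml_new_graph` and `ggml_graph_compute_with_ctx` may differ between ggml-python versions). It builds and evaluates the graph f = a·x² + b for scalar inputs.

```python
import ggml

# Allocate a ggml context with a fixed scratch buffer (16 MB here).
params = ggml.ggml_init_params(mem_size=16 * 1024 * 1024, mem_buffer=None)
ctx = ggml.ggml_init(params)

# Declare scalar f32 tensors for the inputs.
x = ggml.ggml_new_tensor_1d(ctx, ggml.GGML_TYPE_F32, 1)
a = ggml.ggml_new_tensor_1d(ctx, ggml.GGML_TYPE_F32, 1)
b = ggml.ggml_new_tensor_1d(ctx, ggml.GGML_TYPE_F32, 1)

# Build the computation graph for f = a * x^2 + b.
x2 = ggml.ggml_mul(ctx, x, x)
f = ggml.ggml_add(ctx, ggml.ggml_mul(ctx, a, x2), b)
gf = ggml.ggml_new_graph(ctx)
ggml.ggml_build_forward_expand(gf, f)

# Set input values, evaluate the graph on one thread, and read the result.
ggml.ggml_set_f32(x, 2.0)
ggml.ggml_set_f32(a, 3.0)
ggml.ggml_set_f32(b, 4.0)
ggml.ggml_graph_compute_with_ctx(ctx, gf, 1)
print(ggml.ggml_get_f32_1d(f, 0))  # 3 * 2^2 + 4 = 16.0

# Free the context memory.
ggml.ggml_free(ctx)
```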
Alternatives and similar repositories for ggml-python
Users interested in ggml-python are also comparing it to the libraries listed below:
- Inference of Mamba models in pure C ☆187 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆277 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 8 months ago
- GPTQ inference Triton kernel ☆302 · Updated 2 years ago
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆288 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆370 · Updated last year
- RWKV in nanoGPT style ☆191 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆152 · Updated last year
- Train your own small BitNet model ☆72 · Updated 8 months ago
- LLM-based code completion engine ☆194 · Updated 5 months ago
- Python bindings for llama.cpp ☆197 · Updated 2 years ago
- LLaVA server (llama.cpp). ☆180 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 · Updated 8 months ago
- 1.58-bit LLaMa model ☆81 · Updated last year
- ☆118 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆100 · Updated last year
- ☆541 · Updated 7 months ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies ☆312 · Updated last year
- SoTA Transformers with a C backend for fast inference on your CPU. ☆309 · Updated last year
- ☆213 · Updated 5 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 9 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ and easy export to onnx/onnx-runtime ☆172 · Updated 2 months ago
- Google TPU optimizations for transformers models ☆113 · Updated 5 months ago
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- tinygrad port of the RWKV large language model. ☆46 · Updated 3 months ago
- Easy and Efficient Quantization for Transformers ☆199 · Updated 4 months ago
- Simple high-throughput inference library ☆119 · Updated last month
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆123 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆246 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year