99991 / pygguf
GGUF parser in Python
☆28 · Updated last year
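For context on what a GGUF parser deals with: every GGUF file opens with a small fixed header (magic bytes, format version, tensor count, metadata key-value count), followed by the metadata key-value pairs and tensor info. The sketch below reads just that fixed header using only the standard library; `read_gguf_header` is a hypothetical helper for illustration, not pygguf's actual API.

```python
import struct

def read_gguf_header(path):
    """Read only the fixed-size GGUF header fields (layout per the public GGUF spec)."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        (version,) = struct.unpack("<I", f.read(4))   # uint32, little-endian
        if version >= 2:
            # v2+ stores the counts as uint64
            counts = struct.unpack("<QQ", f.read(16))
        else:
            # v1 used uint32 counts
            counts = struct.unpack("<II", f.read(8))
    return (version, *counts)  # -> (version, tensor_count, metadata_kv_count)
```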
Alternatives and similar repositories for pygguf
Users interested in pygguf are comparing it to the libraries listed below.
- Inference of Mamba models in pure C ☆196 · Updated last year
- Python bindings for ggml ☆146 · Updated last year
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 11 months ago
- llama.cpp to PyTorch Converter ☆34 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆268 · Updated last month
- Simple high-throughput inference library ☆155 · Updated 7 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated last year
- Experiments with BitNet inference on CPU ☆55 · Updated last year
- QuIP quantization ☆61 · Updated last year
- GGML implementation of BERT model with Python bindings and quantization. ☆58 · Updated last year
- RWKV-7: Surpassing GPT ☆103 · Updated last year
- ☆51 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆279 · Updated 2 years ago
- ☆120 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆104 · Updated 7 months ago
- RWKV in nanoGPT style ☆197 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆150 · Updated this week
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- Visualize expert firing frequencies across sentences in the Mixtral MoE model ☆18 · Updated 2 years ago
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Updated last year
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- ☆50 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆27 · Updated 2 years ago
- Inference of RWKV v7 in pure C. ☆43 · Updated 2 months ago
- PB-LLM: Partially Binarized Large Language Models ☆157 · Updated 2 years ago
- ☆114 · Updated this week
- A pipeline for LLM knowledge distillation ☆112 · Updated 9 months ago
- GPU benchmark ☆73 · Updated 11 months ago
- Lightweight continuous batching with OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper. ☆29 · Updated 9 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆228 · Updated this week