ngxson / ggml-easy
Thin wrapper around GGML to make life easier
☆40 · Updated last month
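For context, the sketch below shows the kind of raw GGML boilerplate (context allocation, graph construction, compute) that a thin wrapper such as ggml-easy aims to hide. It uses ggml's public C API rather than ggml-easy's own interface; the tensor sizes and memory budget are illustrative, and exact signatures can vary between ggml versions.

```c
// Minimal raw-GGML example: allocate a context, build a tiny graph
// (c = a + b), and compute it. Values and sizes are illustrative.
#include "ggml.h"
#include <stdio.h>

int main(void) {
    // Reserve a fixed arena for tensor data and graph metadata.
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // Two 1-D float tensors of length 4.
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    for (int i = 0; i < 4; i++) {
        ggml_set_f32_1d(a, i, (float) i);
        ggml_set_f32_1d(b, i, 10.0f);
    }

    // Define the op, build the forward graph, and run it.
    struct ggml_tensor * c  = ggml_add(ctx, a, b);
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/ 1);

    for (int i = 0; i < 4; i++) {
        printf("c[%d] = %.1f\n", i, ggml_get_f32_1d(c, i));
    }

    ggml_free(ctx);
    return 0;
}
```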
Alternatives and similar repositories for ggml-easy
Users interested in ggml-easy are comparing it to the libraries listed below.
- Python bindings for ggml ☆146 · Updated last year
- Use safetensors with ONNX 🤗 ☆76 · Updated 2 months ago
- GGML implementation of the BERT model with Python bindings and quantization. ☆58 · Updated last year
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- TTS support with GGML ☆197 · Updated 2 months ago
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆148 · Updated 5 months ago
- Efficient non-uniform quantization with GPTQ for GGUF ☆53 · Updated 2 months ago
- Video+code lecture on building nanoGPT from scratch ☆68 · Updated last year
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆200 · Updated 2 months ago
- 🤗 Optimum ONNX: Export your model to ONNX and run inference with ONNX Runtime ☆95 · Updated last week
- ☆62 · Updated 4 months ago
- Simple high-throughput inference library ☆150 · Updated 6 months ago
- Inference of Mamba models in pure C ☆194 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 10 months ago
- Yet Another (LLM) Web UI, made with Gemini ☆12 · Updated 11 months ago
- High-throughput tensor loading for PyTorch ☆209 · Updated this week
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- A fast RWKV tokenizer written in Rust ☆54 · Updated 3 months ago
- ☆34 · Updated 8 months ago
- AirLLM 70B inference with a single 4GB GPU ☆14 · Updated 5 months ago
- cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server a… ☆41 · Updated 5 months ago
- A ggml (C++) re-implementation of tortoise-tts ☆191 · Updated last year
- C API for MLX ☆154 · Updated last week
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆47 · Updated last month
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX. ☆99 · Updated 5 months ago
- 👷 Build compute kernels ☆192 · Updated this week
- Browse, search, and visualize ONNX models. ☆34 · Updated 7 months ago
- ☆64 · Updated 5 months ago
- ☆43 · Updated last month
- instinct.cpp provides ready-to-use alternatives to the OpenAI Assistant API and built-in utilities for developing AI Agent applications (RAG,… ☆54 · Updated last year