ggml-org / ci
CI for ggml and related projects
☆31 · Updated 2 months ago
Alternatives and similar repositories for ci
Users interested in ci are comparing it to the libraries listed below.
- Tool to download models from the Hugging Face Hub and convert them to GGML/GGUF for llama.cpp ☆163 · Updated 7 months ago
- LLM-based code completion engine ☆190 · Updated 10 months ago
- Public reports detailing responses to sets of prompts by Large Language Models ☆32 · Updated 11 months ago
- An endpoint server for efficiently serving quantized open-source LLMs for code ☆58 · Updated 2 years ago
- ☆164 · Updated 4 months ago
- Transformer GPU VRAM estimator ☆67 · Updated last year
- LLM inference in C/C++ ☆103 · Updated last week
- AirLLM 70B inference with a single 4GB GPU ☆14 · Updated 5 months ago
- Utility library for working with character cards and roleplay AI in general ☆45 · Updated 2 years ago
- C API for MLX ☆155 · Updated last week
- Granite 3.1 Language Models ☆131 · Updated 5 months ago
- Deploy your GGML models to Hugging Face Spaces with Docker and Gradio ☆38 · Updated 2 years ago
- Command-line tool for the Deep Infra cloud ML inference service ☆33 · Updated last year
- 1.58-bit LLM on Apple Silicon using MLX ☆226 · Updated last year
- 🔊 We believe in a future where developers are amplified, not automated ☆115 · Updated 2 months ago
- Gemma 2 optimized for your local machine ☆378 · Updated last year
- Run embeddings in MLX ☆96 · Updated last year
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆200 · Updated 2 months ago
- A simple UI / web frontend for MLX mlx-lm using Streamlit ☆260 · Updated last month
- ☆117 · Updated 11 months ago
- Run GGML models with Kubernetes ☆175 · Updated last year
- ☆166 · Updated last year
- Tcurtsni: Reverse Instruction Chat, ever wonder what your LLM wants to ask you? ☆23 · Updated last year
- ☆68 · Updated last year
- A super simple web interface for performing blind tests on LLM outputs ☆29 · Updated last year
- 1.58-bit LLaMa model ☆83 · Updated last year
- Deno build of the official TypeScript library for the OpenAI API ☆142 · Updated last year
- For inferring and serving local LLMs using the MLX framework ☆108 · Updated last year
- LLM-powered development for IntelliJ ☆84 · Updated last year
- General-purpose GPU compute framework built on Vulkan to support thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends) … ☆52 · Updated 9 months ago