mzbac / gptq-cuda-api
☆21 · Updated 2 years ago
Alternatives and similar repositories for gptq-cuda-api
Users interested in gptq-cuda-api are comparing it to the libraries listed below.
- ☆74 · Updated 2 years ago
- Simple and fast server for GPTQ-quantized LLaMA inference ☆24 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆124 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- A small standalone Flask Python server for llama.cpp that acts like a KoboldAI API. ☆14 · Updated 2 years ago
- A guidance compatibility layer for llama-cpp-python ☆36 · Updated 2 years ago
- Experimental sampler to make LLMs more creative ☆31 · Updated 2 years ago
- ☆16 · Updated 2 years ago
- Harnessing the Memory Power of the Camelids ☆147 · Updated 2 years ago
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- Prompt Jinja2 templates for LLMs ☆35 · Updated 6 months ago
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago
- Roy: A lightweight, model-agnostic framework for crafting advanced multi-agent systems using large language models. ☆78 · Updated 2 years ago
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆161 · Updated 2 years ago
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- ☆40 · Updated last year
- ☆168 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆109 · Updated 2 years ago
- LLM family chart ☆52 · Updated 2 years ago
- An OpenAI API compatible LLM inference server based on ExLlamaV2. ☆25 · Updated last year
- A repository to store helpful information and emerging insights in regard to LLMs ☆21 · Updated 2 years ago
- A fast batching API to serve LLM models ☆189 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support ☆249 · Updated last year
- Local LLaMAs/Models in VSCode ☆54 · Updated 2 years ago
- Who needs o1 anyway? Add CoT to any OpenAI compatible endpoint. ☆44 · Updated last year
- LIVA - Local Intelligent Voice Assistant ☆61 · Updated last year
- 🚀 Scale your RAG pipeline using Ragswift: A scalable centralized embeddings management platform ☆38 · Updated last year
- Host a GPTQ model using AutoGPTQ as an API compatible with the text generation UI API. ☆90 · Updated 2 years ago
- An OpenAI-like LLaMA inference API ☆113 · Updated 2 years ago