A fast batching API for serving LLMs
☆189 · Updated Apr 26, 2024
Alternatives and similar repositories for EricLLM
Users interested in EricLLM are comparing it to the libraries listed below.
- An OpenAI API-compatible LLM inference server based on ExLlamaV2. ☆25 · Updated Feb 9, 2024
- A web app to explore topics using LLMs (less typing and more clicks) ☆68 · Updated Mar 15, 2026
- Let's create synthetic textbooks together :) ☆76 · Updated Jan 29, 2024
- The official API server for Exllama. OAI-compatible, lightweight, and fast. ☆1,158 · Updated this week
- ☆31 · Updated Jan 6, 2024
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,468 · Updated Mar 4, 2026
- Function-calling-based LLM agents ☆291 · Updated Sep 16, 2024
- Controllable Language Model Interactions in TypeScript ☆10 · Updated May 17, 2024
- Yet another frontend for LLMs, written using .NET and WinUI 3 ☆10 · Updated Sep 14, 2025
- Reimagining the C-3PO droid's voice synthesizer and multilingual translation and communication capabilities with the latest… ☆12 · Updated Mar 6, 2024
- Scrape Reddit posts into a single Markdown file ☆12 · Updated Jul 28, 2024
- Low-rank adapter extraction for fine-tuned transformer models ☆181 · Updated May 2, 2024
- AI stack for interacting with LLMs, Stable Diffusion, Whisper, xTTS, and many other AI models ☆168 · Updated May 1, 2024
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite. ☆34 · Updated Mar 2, 2024
- QLoRA finetuning, 5X faster with 60% less memory ☆21 · Updated May 28, 2024
- Large-scale LLM inference engine ☆1,677 · Updated Mar 12, 2026
- Native GUI for several AI services plus local llama.cpp AIs ☆115 · Updated Jan 14, 2024
- ☆13 · Updated Apr 25, 2025
- ☆12 · Updated Sep 22, 2024
- Experimental sampler to make LLMs more creative ☆31 · Updated Aug 2, 2023
- Your Trusty Memory-enabled AI Companion - Simple RAG chatbot optimized for local LLMs | 12 Languages Supported | OpenAI API Compatible ☆350 · Updated Feb 28, 2025
- A stable, fast, and easy-to-use inference library with a focus on a sync-to-async API ☆48 · Updated Sep 26, 2024
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆39 · Updated Jan 4, 2024
- ☆135 · Updated Nov 24, 2023
- Web UI for ExLlamaV2 ☆510 · Updated Feb 5, 2025
- ☆229 · Updated May 7, 2025
- API for backtesting and trading automation ☆16 · Updated Mar 13, 2026
- Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs). ☆106 · Updated Oct 31, 2024
- Docker Compose setup to run vLLM on Windows ☆116 · Updated Jan 1, 2024
- Physics Master is a model fine-tuned from llama3-8B-Instruct. It can answer your physics questions! ☆16 · Updated Aug 24, 2024
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆686 · Updated Mar 17, 2026
- Blue-text Bot AI. Uses Ollama + AppleScript. ☆50 · Updated May 19, 2024
- Simple Node proxy for llama-server that enables MCP use ☆19 · Updated May 10, 2025
- Automated LLM novelist ☆46 · Updated Apr 11, 2024
- ☆56 · Updated Jun 26, 2025
- Implementation of the Mamba SSM with hf_integration. ☆55 · Updated Aug 31, 2024
- Smart proxy for LLM APIs that enables model-specific parameter control, automatic mode switching (like Qwen3's /think and /no_think), and… ☆51 · Updated May 19, 2025
- Simple Tool Caller for llama.cpp ☆11 · Updated Aug 12, 2024
- BabyAGI adapted to run with locally hosted models using the API from https://github.com/oobabooga/text-generation-webui ☆86 · Updated May 6, 2023