samuel-vitorino / lm.rs-webui
Light WebUI for lm.rs
☆24 · Updated 9 months ago
Alternatives and similar repositories for lm.rs-webui
Users interested in lm.rs-webui are comparing it to the libraries listed below.
- Super-simple, fully Rust powered "memory" (doc store + semantic search) for LLM projects, semantic search, etc. ☆62 · Updated last year
- 33B Chinese LLM, DPO QLORA, 100K context, AirLLM 70B inference with single 4GB GPU ☆13 · Updated last year
- ☆22 · Updated 6 months ago
- Run AI models anywhere. https://muna.ai/explore ☆63 · Updated this week
- A Multi-Agentic AI Assistant/Builder ☆23 · Updated last week
- A Python library to orchestrate LLMs in a neural network-inspired structure ☆49 · Updated 9 months ago
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min… ☆26 · Updated 8 months ago
- The hearth of The Pulsar App, fast, secure and shared inference with modern UI ☆55 · Updated 8 months ago
- Editor with LLM generation tree exploration ☆73 · Updated 5 months ago
- Spotlight-like client for Ollama on Windows. ☆28 · Updated last year
- Run Vision LLMs, TTS and STT APIs. Website and API for https://text-generator.io ☆37 · Updated this week
- Run multiple resource-heavy Large Models (LM) on the same machine with limited amount of VRAM/other resources by exposing them on differe… ☆67 · Updated last month
- The fastest CLI tool for prompting LLMs. Including support for prompting several LLMs at once! ☆88 · Updated 2 months ago
- George is an API leveraging AI to make it easy to control a computer with natural language. ☆48 · Updated 7 months ago
- Lightweight C inference for Qwen3 GGUF with the smallest (0.6B) at the fullest (FP32) ☆12 · Updated this week
- AI Assistant ☆20 · Updated 3 months ago
- ☆24 · Updated 6 months ago
- Like system requirements lab but for LLMs ☆30 · Updated 2 years ago
- AirLLM 70B inference with single 4GB GPU ☆14 · Updated last month
- ☆24 · Updated 4 months ago
- fast state-of-the-art speech models and a runtime that runs anywhere 💥 ☆55 · Updated last month
- Locally running LLM with internet access ☆96 · Updated last month
- A python package for serving LLM on OpenAI-compatible API endpoints with prompt caching using MLX. ☆90 · Updated last month
- a lightweight, open-source blueprint for building powerful and scalable LLM chat applications ☆28 · Updated last year
- Local LLM inference & management server with built-in OpenAI API ☆31 · Updated last year
- A lightweight code assistant with tool-using capabilities built on HuggingFace's smolagents. ☆36 · Updated last month
- A CLI in Rust to generate synthetic data for MLX friendly training ☆24 · Updated last year
- ☆49 · Updated last year
- LLM Divergent Thinking Creativity Benchmark. LLMs generate 25 unique words that start with a given letter with no connections to each oth… ☆31 · Updated 4 months ago
- This small API downloads and exposes access to NeuML's txtai-wikipedia and full wikipedia datasets, taking in a query and returning full … ☆98 · Updated 3 weeks ago