LibreTranslate / LTEngine
Local AI Machine Translation. Powered by LLMs. LibreTranslate compatible.
★52 · Updated last month
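Since LTEngine advertises LibreTranslate compatibility, clients written against LibreTranslate's `POST /translate` endpoint should work against it unchanged. A minimal stdlib-only sketch, assuming a server running at `http://localhost:5000` (LibreTranslate's default port; the endpoint path and the `q`/`source`/`target`/`format`/`translatedText` field names come from the LibreTranslate API — whether LTEngine's defaults match is an assumption here):

```python
import json
import urllib.request


def build_payload(text, source="en", target="de"):
    """Build the JSON body expected by LibreTranslate's /translate endpoint."""
    return {"q": text, "source": source, "target": target, "format": "text"}


def translate(text, source="en", target="de",
              url="http://localhost:5000/translate"):
    """Send a translation request to a LibreTranslate-compatible server.

    The URL assumes a local server on port 5000 (hypothetical default).
    """
    data = json.dumps(build_payload(text, source, target)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # A LibreTranslate-compatible server returns {"translatedText": "..."}
        return json.loads(resp.read())["translatedText"]


# Requires a running server, e.g.:
# print(translate("Hello, world!", target="de"))
```

Because the wire format is shared, the same snippet can be pointed at a stock LibreTranslate instance or at LTEngine by changing only the `url` argument.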
Alternatives and similar repositories for LTEngine
Users who are interested in LTEngine are comparing it to the libraries listed below.
- Toolkit for training/converting LibreTranslate compatible language models · ★63 · Updated 2 months ago
- Use an appropriate mix of LLMs, based on research at https://nuenki.app/blog, to translate languages better than any single tool. · ★27 · Updated 2 months ago
- No Language Left Unlocked: scalable backtranslation of NLLB models · ★13 · Updated last month
- Pushing the boundaries of AI storytelling · ★94 · Updated this week
- Input text from speech in any Linux window, the lean, fast and accurate way, using whisper.cpp OFFLINE. Speak with local LLMs via llama.c… · ★129 · Updated last month
- ★154 · Updated last year
- A CLI app for experimenting with Kokoro voice creation and mixing, using the available voices to interpolate new ones · ★30 · Updated 7 months ago
- stable-diffusion.cpp bindings for Python · ★60 · Updated 2 weeks ago
- A tiny version of GPT fully implemented in Python with zero dependencies · ★72 · Updated 8 months ago
- A random-walk voice style cloning application for Kokoro text-to-speech · ★124 · Updated 2 months ago
- ★48 · Updated 5 months ago
- Automatically convert EPUBs to audiobooks · ★252 · Updated 5 months ago
- Lightweight inference server for OpenVINO · ★206 · Updated this week
- Run LLMs locally with as little friction as possible. · ★120 · Updated 4 months ago
- A LibreOffice Writer extension that adds local-inference generative AI features. · ★130 · Updated this week
- Automated speech dataset creator · ★194 · Updated 2 months ago
- Translate HTML using Argos Translate · ★53 · Updated 2 years ago
- Free and open-source pre-trained translation models, including Kurdish, Samoan, Xhosa, Lao, Corsican, Cebuano, Galician, Russian, Belarus… · ★86 · Updated 3 weeks ago
- Convert your PDFs and EPUBs into audiobooks effortlessly. Features intelligent text extraction, customizable text-to-speech settings, and… · ★112 · Updated 5 months ago
- Load and run Llama from safetensors files in C · ★11 · Updated 10 months ago
- Generate a llama-quantize command to copy the quantization parameters of any GGUF · ★24 · Updated last month
- Run LLMs on AMD Ryzen™ AI NPUs. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. · ★160 · Updated this week
- Lightweight and fast AI inference proxy for self-hosted LLM backends like Ollama, LM Studio, and others. Designed for speed, simplicity, and… · ★82 · Updated last week
- Context-aware LLM Translator (CALT) · ★39 · Updated 7 months ago
- A web application that converts speech to speech, 100% private · ★75 · Updated 3 months ago
- Sparse inferencing for transformer-based LLMs · ★197 · Updated 3 weeks ago
- An F/OSS solution combining AI with Wikipedia knowledge via a RAG pipeline · ★57 · Updated 7 months ago
- MyOllama: an Ollama-based LLM mobile client · ★165 · Updated last month
- A local front-end for open-weight LLMs with memory, RAG, TTS/STT, Elo ratings, and dynamic research tools. Built with React and FastAPI. · ★36 · Updated last month
- Python package wrapping llama.cpp for on-device LLM inference · ★87 · Updated last month