rahulunair / tiny_llm_finetuner
LLM finetuning on Intel XPUs - LoRA on Intel discrete GPUs
☆20Updated last year
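For context, LoRA finetuning on an Intel discrete GPU is usually done with Hugging Face `peft` together with `intel_extension_for_pytorch`, which exposes the `xpu` device. Below is a minimal sketch under those assumptions; the model name, hyperparameters, and toy batch are illustrative placeholders and are not taken from the repository itself.

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401 - registers the "xpu" device
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Placeholder model; any small causal LM works for this sketch.
model_name = "facebook/opt-350m"
device = "xpu" if torch.xpu.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# Attach LoRA adapters so only the low-rank update matrices are trained.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)

# One illustrative training step on a toy batch.
batch = tokenizer("Hello from an Intel Arc GPU!", return_tensors="pt").to(device)
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
```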
Alternatives and similar repositories for tiny_llm_finetuner:
Users that are interested in tiny_llm_finetuner are comparing it to the libraries listed below
- Stable Diffusion inference on Intel Arc dGPUs☆72Updated 10 months ago
- Use a local Llama LLM or OpenAI to chat about, discuss, or summarize your documents, YouTube videos, and so on.☆152Updated last month
- AI stack for interacting with LLMs, Stable Diffusion, Whisper, xTTS and many other AI models☆146Updated 8 months ago
- Native GUI for several AI services plus local llama.cpp AIs.☆108Updated last year
- An Extension for oobabooga/text-generation-webui☆36Updated last year
- Like System Requirements Lab, but for LLMs☆30Updated last year
- Creates a LangChain agent that uses the WebUI's API and Wikipedia to work☆72Updated last year
- Flexible Python package for managing and extending LLM-based agents☆25Updated last year
- An autonomous AI agent extension for Oobabooga's web UI☆176Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts☆111Updated last year
- Midori AI's Mono Repo! Check out our site below!☆115Updated this week
- Add Large Language Models to Discord. Add DeepSeek-R1, Llama 3.3, Gemini, and other models.☆67Updated this week
- Easily create LLM automation/agent workflows☆58Updated 11 months ago
- ☆28Updated last year
- A front-end for self-hosted LLMs based on the LocalAI API☆68Updated 9 months ago
- Deploy your GGML models to Hugging Face Spaces with Docker and Gradio☆36Updated last year
- This plugin forces models to output JSON of a specified schema using JSONFormer☆26Updated 2 months ago
- Porting BabyAGI to Oobabooga.☆33Updated last year
- Simple and fast server for GPTQ-quantized LLaMA inference☆24Updated last year
- GPU Power and Performance Manager☆52Updated 3 months ago
- Text WebUI extension to add clever Notebooks to Chat mode☆139Updated last year
- No-messing-around sh client for llama.cpp's server☆29Updated 5 months ago
- ☆55Updated last year
- ☆32Updated this week
- An endpoint server for efficiently serving quantized open-source LLMs for code.☆54Updated last year
- "Pacha" TUI (Text User Interface) is a JavaScript application that utilizes the "blessed" library. It serves as a frontend for llama.cpp …☆35Updated last year
- Embeddings-focused small version of the Llama NLP model☆104Updated last year
- A KoboldAI-like memory extension for oobabooga's text-generation-webui☆107Updated 3 months ago
- Host a GPTQ model using AutoGPTQ as an API compatible with the text-generation-webui API.☆91Updated last year
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit☆31Updated last year