Troyanovsky / Local-LLM-Comparison-Colab-UI
Compare the performance of different LLMs that can be deployed locally on consumer hardware. Run it yourself with the Colab WebUI.
☆1,068 · Updated last week
Alternatives and similar repositories for Local-LLM-Comparison-Colab-UI
Users interested in Local-LLM-Comparison-Colab-UI are comparing it to the libraries listed below
- Python bindings for Transformer models implemented in C/C++ using the GGML library (a minimal usage sketch of bindings like these follows this list). ☆1,868 · Updated last year
- Self-evaluating interview for AI coders ☆589 · Updated 3 weeks ago
- TheBloke's Dockerfiles ☆305 · Updated last year
- A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI… ☆598 · Updated 2 years ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,887 · Updated last year
- UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J and more. One-click run on Google Colab. + A Gradio ChatGPT… ☆472 · Updated 2 years ago
- Large-scale LLM inference engine ☆1,477 · Updated this week
- Querying local documents, powered by LLMs ☆612 · Updated this week
- Software to implement GoT with a Weaviate vector database ☆671 · Updated 3 months ago
- Customizable implementation of the self-instruct paper. ☆1,047 · Updated last year
- ☆168 · Updated 2 years ago
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM… ☆578 · Updated 5 months ago
- LLM that combines the principles of WizardLM and VicunaLM ☆716 · Updated 2 years ago
- Web UI for ExLlamaV2 ☆504 · Updated 5 months ago
- ☆643 · Updated 3 weeks ago
- Tune any FALCON in 4-bit ☆467 · Updated last year
- Convenience scripts to finetune (chat-)LLaMa3 and other models for any language ☆310 · Updated last year
- Curated list of useful LLM / Analytics / Data Science resources ☆2,412 · Updated 2 months ago
- Function calling-based LLM agents ☆287 · Updated 10 months ago
- ☆275 · Updated 2 years ago
- Ship RAG-based LLM web apps in seconds. ☆995 · Updated last year
- An autonomous LLM agent that runs on Wizcoder-15B ☆334 · Updated 8 months ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,236 · Updated this week
- Officially supported Python bindings for llama.cpp + gpt4all ☆1,018 · Updated 2 years ago
- ☆1,489 · Updated last year
- An AI code interpreter for sensitive data, powered by GPT-4 or Code Llama / Llama 2. ☆445 · Updated last year
- Fine-tuning LLMs using QLoRA ☆258 · Updated last year
- Plugin that lets you ask questions about your documents, including audio and video files. ☆342 · Updated last week
- The "vicuna-installation-guide" provides step-by-step instructions for installing and configuring Vicuna 13B and 7B ☆285 · Updated last year
- This repo showcases how you can run a model locally and offline, free of OpenAI dependencies. ☆282 · Updated last year
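Several of the entries above wrap local llama.cpp-style GGML/GGUF models behind a simple Python API (the GGML bindings at the top of the list and the llama.cpp + gpt4all bindings near the bottom). As a rough illustration of that pattern, here is a minimal sketch using the widely available llama-cpp-python package as a stand-in; the model path, context size, and sampling parameters are placeholder assumptions, not values taken from any of the listed repositories.

```python
# Minimal sketch: prompting a locally stored GGUF model through llama-cpp-python.
# The model path and sampling parameters below are placeholders; point them at
# weights you have downloaded yourself. This illustrates the general usage
# pattern of llama.cpp-style Python bindings, not any one specific repository.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # placeholder path to local weights
    n_ctx=2048,     # context window size
    n_threads=8,    # CPU threads to use for inference
)

output = llm(
    "Q: Name three benefits of running an LLM locally. A:",
    max_tokens=128,
    stop=["Q:"],    # stop before the model starts a new question
    echo=False,     # do not repeat the prompt in the output
)
print(output["choices"][0]["text"])
```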