bofenghuang / vigogne
French instruction-following and chat models
☆505 Updated 7 months ago
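To get oriented, here is a minimal sketch of loading one of the vigogne instruction models with Hugging Face transformers. The model ID is an assumption (check the bofenghuang namespace on the Hub for current releases), and `device_map="auto"` requires the accelerate package; this is only an illustrative starting point, not the repository's documented workflow.

```python
# Minimal sketch (not taken from the vigogne repo itself): loading a vigogne
# instruction model with Hugging Face transformers. The model ID below is an
# assumption; check the bofenghuang namespace on the Hub for the release you want.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bofenghuang/vigogne-2-7b-instruct"  # assumed model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)

prompt = "Explique brièvement ce qu'est un modèle de langue."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```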
Alternatives and similar repositories for vigogne
Users interested in vigogne are comparing it to the libraries listed below
- fastLLaMa: An experimental high-performance framework for running decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe…☆409 Updated 2 years ago
- Backend resources for Albert. Albert is a conversational agent that uses official French data sources to answer administrative agents' qu…☆121 Updated 3 months ago
- ✒️ Cedille is a large French language model (6B), released under an open-source license☆203 Updated 3 years ago
- Tune any FALCON in 4-bit☆467 Updated last year
- LLM that combines the principles of wizardLM and vicunaLM☆716 Updated 2 years ago
- C++ implementation for BLOOM☆810 Updated 2 years ago
- Officially supported Python bindings for llama.cpp + gpt4all☆1,018 Updated 2 years ago
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models☆176 Updated this week
- Falcon LLM ggml framework with CPU and GPU support☆246 Updated last year
- Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Tra…☆1,298 Updated last year
- Awesome list of resources about NLP applied to French☆58 Updated 5 years ago
- UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J and more. One-click run on Google Colab. + A Gradio ChatGPT…☆472 Updated 2 years ago
- Preconfiguration page for the OpenLLM-France community☆47 Updated last year
- Python bindings for llama.cpp☆197 Updated 2 years ago
- ☆405 Updated 2 years ago
- Python bindings for the Transformer models implemented in C/C++ using GGML library.☆1,868 Updated last year
- An easy way to host your own AI API and expose alternative models, while being compatible with "open" AI clients.☆331 Updated last year
- C++ implementation for 💫StarCoder☆455 Updated last year
- Simple UI for LLM Model Finetuning☆2,062 Updated last year
- A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI…☆598 Updated 2 years ago
- Customizable implementation of the self-instruct paper.☆1,047 Updated last year
- LLaMa retrieval plugin script using OpenAI's retrieval plugin☆324 Updated 2 years ago
- ☆168 Updated 2 years ago
- Fine-tune mistral-7B on 3090s, a100s, h100s☆715 Updated last year
- ☆535 Updated last year
- SoTA Transformers with C-backend for fast inference on your CPU.☆309 Updated last year
- Repository for the EM German Model☆110 Updated last year
- TheBloke's Dockerfiles☆305 Updated last year
- 💬 Chatbot web app + HTTP and Websocket endpoints for LLM inference with the Petals client☆313 Updated last year
- Locally run an Assistant-Tuned Chat-Style LLM☆500 Updated 2 years ago