bjoernpl / llama_gradio_interface
Inference code for LLaMA models with Gradio Interface and rolling generation like ChatGPT
☆48 · Updated 2 years ago
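For orientation, here is a minimal sketch (not taken from this repository) of what "rolling generation" behind a Gradio chat interface typically looks like: tokens are streamed into the UI as they are produced via `transformers.TextIteratorStreamer`. The checkpoint path, prompt format, and generation settings below are placeholder assumptions.

```python
# Minimal sketch (not from this repository) of token-by-token "rolling"
# generation in a Gradio chat UI. Assumes a Hugging Face-format causal LM
# checkpoint; MODEL_ID, prompt format, and settings are placeholders.
from threading import Thread

import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

MODEL_ID = "path/to/llama-checkpoint"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def chat(message, history):
    # For brevity only the latest user message is used; a real chat app
    # would also fold `history` into the prompt.
    inputs = tokenizer(f"User: {message}\nAssistant:", return_tensors="pt").to(model.device)

    # The streamer yields decoded text as soon as tokens are generated,
    # which is what makes the reply "roll" in the UI like ChatGPT.
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    Thread(
        target=model.generate,
        kwargs=dict(**inputs, streamer=streamer, max_new_tokens=256),
    ).start()

    partial = ""
    for new_text in streamer:
        partial += new_text
        yield partial  # Gradio re-renders the chat bubble on every yield

gr.ChatInterface(chat).launch()
```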
Alternatives and similar repositories for llama_gradio_interface
Users interested in llama_gradio_interface are comparing it to the libraries listed below.
- Instruct-tuning LLaMA on consumer hardware ☆65 · Updated 2 years ago
- Inference code for facebook LLaMA models with Wrapyfi support ☆129 · Updated 2 years ago
- Yet Another LLaMA/ALPACA Discord Bot ☆69 · Updated 2 years ago
- Framework agnostic python runtime for RWKV models ☆146 · Updated 2 years ago
- Conversational Language model toolkit for training against human preferences. ☆41 · Updated last year
- 4 bits quantization of SantaCoder using GPTQ ☆50 · Updated 2 years ago
- Tune MPTs ☆84 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated 2 years ago
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER).… ☆121 · Updated 2 years ago
- Inference code for LLaMA models ☆46 · Updated 2 years ago
- Embeddings focused small version of Llama NLP model ☆105 · Updated 2 years ago
- Merge LLMs that are split into parts ☆26 · Updated 3 months ago
- Instruct-tune LLaMA on consumer hardware ☆73 · Updated 2 years ago
- OpenAI API webserver ☆189 · Updated 3 years ago
- ☆131 · Updated 3 years ago
- 4 bits quantization of LLaMa using GPTQ ☆130 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆108 · Updated 2 years ago
- Image Diffusion block merging technique applied to transformers based Language Models. ☆55 · Updated 2 years ago
- Extend the original llama.cpp repo to support redpajama model. ☆118 · Updated last year
- SoTA Transformers with C-backend for fast inference on your CPU. ☆308 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 5 months ago
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… ☆412 · Updated 2 years ago
- This project aims to make RWKV Accessible to everyone using a Hugging Face like interface, while keeping it close to the R and D RWKV bra… ☆64 · Updated 2 years ago
- Conversion script adapting vicuna dataset into alpaca format for use with oobabooga's trainer ☆12 · Updated 2 years ago
- ☆534 · Updated last year
- 📖 — Notebooks related to RWKV ☆58 · Updated 2 years ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆63 · Updated 2 years ago
- llama-4bit-colab ☆62 · Updated 2 years ago
- Our data munging code. ☆33 · Updated last week
- Command-line script for inferencing from models such as MPT-7B-Chat ☆99 · Updated 2 years ago