bjoernpl / llama_gradio_interface
Inference code for LLaMA models with Gradio Interface and rolling generation like ChatGPT
☆48 · Updated 2 years ago
Alternatives and similar repositories for llama_gradio_interface
Users interested in llama_gradio_interface are comparing it to the libraries listed below.
- Image diffusion block-merging technique applied to transformer-based language models. ☆54 · Updated 2 years ago
- Conversational language model toolkit for training against human preferences. ☆41 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER).… ☆121 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- Tune MPTs ☆84 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated last year
- Inference code for LLaMA 2 models ☆30 · Updated 11 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆123 · Updated 2 years ago
- 4-bit quantization of LLaMA using GPTQ ☆129 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated last month
- Reimplementation of the task-generation part of the Alpaca paper ☆119 · Updated 2 years ago
- Command-line script for inference from models such as MPT-7B-Chat ☆101 · Updated last year
- ☆32 · Updated 2 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- ☆27 · Updated last year
- ☆37 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- k_diffusion wrapper included for k_lms sampling; fixed for notebook. ☆20 · Updated 2 years ago
- Merge LLMs that are split into parts ☆26 · Updated last year
- ☆99 · Updated 2 years ago
- QLoRA with enhanced multi-GPU support ☆37 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆74 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of large language models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆147 · Updated last year
- Inference code for Facebook LLaMA models with Wrapyfi support ☆129 · Updated 2 years ago
- 8-bit CUDA functions for PyTorch ☆44 · Updated 2 years ago
- Let's try to finetune the OpenAI consistency decoder to work for SDXL ☆24 · Updated last year
- 🎨 Imagine what Picasso could have done with AI. Self-host your StableDiffusion API. ☆50 · Updated 2 years ago
- Manage histories of LLM-based applications ☆90 · Updated last year