bjoernpl / llama_gradio_interface
Inference code for LLaMA models with a Gradio interface and ChatGPT-style rolling generation
☆48 · Updated 2 years ago
Alternatives and similar repositories for llama_gradio_interface
Users interested in llama_gradio_interface are comparing it to the repositories listed below.
- 4-bit quantization of SantaCoder using GPTQ ☆50 · Updated 2 years ago
- Inference code for Facebook LLaMA models with Wrapyfi support ☆129 · Updated 2 years ago
- Image-diffusion block merging technique applied to transformers-based language models. ☆54 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆122 · Updated last year
- Inference code for LLaMA models ☆46 · Updated 2 years ago
- Manage histories of LLM-based applications ☆87 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆104 · Updated last year
- QLoRA with enhanced multi-GPU support ☆37 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆65 · Updated 2 years ago
- ☆27 · Updated last year
- Inference code for LLaMA 2 models ☆30 · Updated 10 months ago
- Conversion script adapting the Vicuna dataset into Alpaca format for use with oobabooga's trainer ☆12 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated last year
- ☆31 · Updated last year
- Tune MPTs ☆84 · Updated last year
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆114 · Updated 2 years ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆103 · Updated 2 years ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆63 · Updated last year
- Merge LLMs that are split into parts ☆25 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆129 · Updated 2 years ago
- Model REVOLVER, a human-in-the-loop model mixing system. ☆32 · Updated last year
- Inference code for LLaMA models ☆35 · Updated 2 years ago
- Reimplementation of the task-generation part of the Alpaca paper ☆119 · Updated 2 years ago
- 4-bit quantization of LLMs using GPTQ ☆49 · Updated last year
- Modified Stanford-Alpaca trainer for training Replit's code model ☆40 · Updated 2 years ago
- 8-bit CUDA functions for PyTorch ☆43 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes…) ☆147 · Updated last year