jllllll / exllama
A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
☆66 · Updated 11 months ago
Related projects:
- An unsupervised model merging algorithm for Transformers-based language models. ☆96 · Updated 4 months ago
- Low-Rank adapter extraction for fine-tuned transformer models ☆154 · Updated 4 months ago
- Model REVOLVER, a human-in-the-loop model-mixing system. ☆33 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated last year
- ☆71 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆139 · Updated 11 months ago
- Merge Transformers language models using gradient parameters. ☆193 · Updated last month
- Falcon LLM ggml framework with CPU and GPU support ☆245 · Updated 7 months ago
- ☆26 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆54 · Updated last year
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆30 · Updated last year
- Train Llama LoRAs easily ☆29 · Updated last year
- ☆28 · Updated this week
- Wheels for llama-cpp-python compiled with cuBLAS support ☆91 · Updated 7 months ago
- A collection of prompts to challenge the reasoning abilities of large language models in the presence of misleading information ☆51 · Updated this week
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆154 · Updated 11 months ago
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts ☆109 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆74 · Updated 5 months ago
- Python bindings for llama.cpp ☆62 · Updated 6 months ago
- ☆48 · Updated this week
- Simple and fast server for GPTQ-quantized LLaMA inference ☆24 · Updated last year
- This is our own implementation of "Layer-Selective Rank Reduction" ☆229 · Updated 3 months ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm ☆37 · Updated 10 months ago
- Training PRO extension for the oobabooga WebUI - recent dev version ☆44 · Updated 2 weeks ago
- 4-bit quantization of LLaMA using GPTQ ☆129 · Updated last year
- Easily view and modify JSON datasets for large language models ☆55 · Updated this week
- Spherical merge of PyTorch/HF-format language models with minimal feature loss. ☆107 · Updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… ☆40 · Updated 5 months ago
- An endpoint server for efficiently serving quantized open-source LLMs for code. ☆52 · Updated 11 months ago
- A fast batching API to serve LLM models ☆172 · Updated 4 months ago
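Several of the projects above merge model weights, and one specifically mentions spherical merging. The core idea there is spherical linear interpolation (SLERP): interpolating along the arc between two weight tensors rather than along the straight line, which better preserves each model's feature directions. As a rough illustration only (the actual projects operate per-layer on full checkpoints; the function name and NumPy-based setup here are my own), a minimal SLERP over flattened weight vectors might look like:

```python
import numpy as np

def slerp(w_a: np.ndarray, w_b: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns w_a, t=1 returns w_b; intermediate t follows the arc
    between the two directions instead of the straight chord.
    """
    # Angle between the two weight directions
    a = w_a / np.linalg.norm(w_a)
    b = w_b / np.linalg.norm(w_b)
    dot = np.clip(np.dot(a, b), -1.0, 1.0)
    theta = np.arccos(dot)

    # Nearly parallel tensors: fall back to plain linear interpolation
    if np.isclose(theta, 0.0):
        return (1.0 - t) * w_a + t * w_b

    sin_theta = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / sin_theta) * w_a \
         + (np.sin(t * theta) / sin_theta) * w_b
```

A real merge tool would apply this tensor-by-tensor across two checkpoints with matching shapes, often with a per-layer interpolation factor; this sketch only shows the interpolation itself.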