eugenepentland / landmark-attention-qlora
Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA
☆124 · Updated 2 years ago
Alternatives and similar repositories for landmark-attention-qlora
Users interested in landmark-attention-qlora are comparing it to the libraries listed below.
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆109 · Updated 2 years ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆178 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last week
- An unsupervised model merging algorithm for Transformers-based language models. ☆107 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆161 · Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆71 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- ☆74 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆248 · Updated last year
- Merge Transformers language models by use of gradient parameters. ☆208 · Updated last year
- A prompt/context management system ☆170 · Updated 2 years ago
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- Model REVOLVER, a human in the loop model mixing system. ☆33 · Updated 2 years ago
- Simple and fast server for GPTQ-quantized LLaMA inference ☆24 · Updated 2 years ago
- Harnessing the Memory Power of the Camelids ☆146 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- An Autonomous LLM Agent that runs on Wizcoder-15B ☆334 · Updated 11 months ago
- 4 bits quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago
- ☆168 · Updated 2 years ago
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… ☆413 · Updated 2 years ago
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆32 · Updated 2 years ago
- A fast batching API to serve LLM models ☆187 · Updated last year
- ☆40 · Updated 2 years ago
- SoTA Transformers with C-backend for fast inference on your CPU. ☆309 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated 2 years ago
- Experimental LLM Inference UX to aid in creative writing ☆123 · Updated 9 months ago