devbrones / llama-prompts
A collection of prompts for Llama
☆96 · Updated last year
Related projects
Alternatives and complementary repositories for llama-prompts
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆124 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆111 · Updated last year
- Merge Transformers language models by use of gradient parameters. ☆201 · Updated 3 months ago
- Harnessing the Memory Power of the Camelids ☆145 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support ☆245 · Updated 10 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆145 · Updated last year
- Small finetuned LLMs for a diverse set of useful tasks ☆123 · Updated last year
- Python examples using the bigcode/tiny_starcoder_py 159M model to generate code ☆44 · Updated last year
- GPT-2 small trained on phi-like data ☆65 · Updated 9 months ago
- Text WebUI extension to add clever Notebooks to Chat mode ☆133 · Updated 10 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆66 · Updated last year
- The code we currently use to fine-tune models. ☆109 · Updated 6 months ago
- An all-new language model that processes ultra-long sequences of 100,000+ tokens, ultra-fast ☆137 · Updated 2 months ago
- Load local LLMs effortlessly in a Jupyter notebook for testing purposes alongside Langchain or other agents. Contains Oobabooga and Kobol… ☆212 · Updated last year
- A prompt/context management system ☆165 · Updated last year
- An Extension for oobabooga/text-generation-webui ☆36 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆155 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆53 · Updated last year
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆110 · Updated last year
- An OpenAI-like LLaMA inference API ☆111 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆70 · Updated last year
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights to work as Stanford Alpaca. ☆50 · Updated last year
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆31 · Updated last year
- Experimental sampler to make LLMs more creative ☆30 · Updated last year
- Command-line script for running inference with models such as falcon-7b-instruct ☆75 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆232 · Updated 5 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 7 months ago
- Python bindings for llama.cpp (a minimal usage sketch follows this list) ☆63 · Updated 8 months ago
- Command-line script for running inference with models such as MPT-7B-Chat ☆102 · Updated last year
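
Several of the repositories above (the llama.cpp Python bindings, the notebook-based local loaders, the command-line inference scripts) boil down to the same workflow: load a local quantized model and feed it a prompt template such as the ones collected in llama-prompts. As a rough sketch only — assuming an interface along the lines of the widely used llama-cpp-python package, and a hypothetical local GGUF model path — running a single Alpaca-style prompt might look like this:

```python
# Minimal sketch: run one prompt against a local Llama-family model.
# Assumes bindings with the llama-cpp-python interface ("pip install llama-cpp-python");
# the model path below is a placeholder, not a file shipped with any of the repos above.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-7b.Q4_K_M.gguf",  # hypothetical local quantized model file
    n_ctx=2048,                                  # context window to allocate
)

# Alpaca-style instruction template (used by several fine-tuning projects listed above).
prompt = "### Instruction:\nSummarize the following text.\n\n### Response:\n"

output = llm(
    prompt,
    max_tokens=128,    # cap on generated tokens
    stop=["###"],      # stop when the next section marker appears
    temperature=0.7,
)

print(output["choices"][0]["text"])
```

Other models expect different prompt templates, so check each repository's documentation for the format its weights were trained on.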