modular-ml / wrapyfi-examples_llama
Inference code for Facebook LLaMA models with Wrapyfi support
☆130 · Updated last year
Related projects
Alternatives and complementary repositories for wrapyfi-examples_llama
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆416 · Updated 11 months ago
- Tune MPTs ☆84 · Updated last year
- Inference code for LLaMA models with a Gradio interface and rolling generation like ChatGPT ☆48 · Updated last year
- ☆454 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆124 · Updated last year
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights to work like Stanford Alpaca ☆50 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆111 · Updated last year
- Inference script for Meta's LLaMA models using Hugging Face wrapper ☆111 · Updated last year
- ☆534 · Updated 11 months ago
- Inference code for LLaMA models ☆45 · Updated last year
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆31 · Updated last year
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆220 · Updated last year
- Official repository for LongChat and LongEval ☆512 · Updated 5 months ago
- 4-bit quantization of LLaMA using GPTQ ☆130 · Updated last year
- ☆344 · Updated last year
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆349 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 7 months ago
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models ☆154 · Updated this week
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights ☆66 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆178 · Updated 3 months ago
- 4-bit quantization of SantaCoder using GPTQ ☆53 · Updated last year
- Plain PyTorch implementation of LLaMA ☆189 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated last year
- React app implementing OpenAI and Google APIs to re-create the behavior of the Toolformer paper ☆233 · Updated last year
- Tune any FALCON in 4-bit ☆468 · Updated last year
- The data processing pipeline for the Koala chatbot language model ☆117 · Updated last year
- A finetuning pipeline for instruct-tuning Raven 14B using QLoRA 4-bit and the Ditty finetuning library ☆28 · Updated 5 months ago
- Reverse Instructions to generate instruction tuning data with corpus examples ☆207 · Updated 8 months ago
- Harnessing the Memory Power of the Camelids ☆146 · Updated last year