modular-ml / wrapyfi-examples_llama
Inference code for Facebook LLaMA models with Wrapyfi support
☆130 · Updated 2 years ago
Alternatives and similar repositories for wrapyfi-examples_llama:
Users interested in wrapyfi-examples_llama are comparing it to the libraries listed below.
- ☆458 · Updated last year
- ☆535 · Updated last year
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆110 · Updated 2 years ago
- Train LLaMA with LoRA on a single RTX 4090 and merge the LoRA weights so the model works like Stanford Alpaca ☆50 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆421 · Updated last year
- ☆405 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆123 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- Plain PyTorch implementation of LLaMA ☆188 · Updated last year
- React app implementing OpenAI and Google APIs to re-create the behavior of the Toolformer paper ☆233 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts ☆111 · Updated last year
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆351 · Updated last year
- A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick ☆289 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated last year
- Reimplementation of the task-generation part of the Alpaca paper ☆119 · Updated last year
- Official repository for LongChat and LongEval ☆516 · Updated 10 months ago
- Fast Inference Solutions for BLOOM ☆562 · Updated 5 months ago
- Inference code for LLaMA models ☆46 · Updated 2 years ago
- ☆355 · Updated 2 years ago
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆301 · Updated last year
- Quantized inference code for LLaMA models ☆1,052 · Updated 2 years ago
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆31 · Updated last year
- Tune MPTs ☆84 · Updated last year
- Inference code for LLaMA models with a Gradio interface and rolling generation like ChatGPT ☆48 · Updated 2 years ago
- Tune any FALCON in 4-bit ☆466 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆195 · Updated last year
- A dataset featuring diverse dialogues between two ChatGPT (gpt-3.5-turbo) instances with system messages written by GPT-4. Covering vario… ☆166 · Updated last year
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, and Pythia ☆41 · Updated 2 years ago
- Merge Transformers language models by using gradient parameters ☆205 · Updated 7 months ago
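Several entries above mention merging LoRA adapter weights back into a base model (e.g. to reproduce Stanford-Alpaca-style checkpoints). The core operation is simple: the adapter's low-rank factors B and A are scaled by alpha/r and added to the frozen base weight. A minimal NumPy sketch with toy shapes, not the actual peft API (the function name and dimensions here are illustrative, not from any of the listed repos):

```python
import numpy as np

def merge_lora(W, A, B, alpha, r):
    """Fold a LoRA adapter into a base weight matrix.

    W: (out, in) frozen base weight
    A: (r, in)   LoRA down-projection
    B: (out, r)  LoRA up-projection
    After merging, W_merged @ x equals W @ x + (alpha/r) * B @ (A @ x),
    so the adapter can be discarded at inference time.
    """
    return W + (alpha / r) * (B @ A)

# Toy example: an 8x8 weight with a rank-2 adapter.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
A = rng.standard_normal((2, 8))
B = rng.standard_normal((8, 2))
W_merged = merge_lora(W, A, B, alpha=16, r=2)

x = rng.standard_normal(8)
# Merged forward pass matches base path + scaled adapter path.
assert np.allclose(W_merged @ x, W @ x + (16 / 2) * (B @ (A @ x)))
```

This is why merged checkpoints run at the base model's full inference speed: the low-rank update is absorbed into the existing weight matrices rather than computed as a separate branch.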