modular-ml / wrapyfi-examples_llama
Inference code for Facebook's LLaMA models with Wrapyfi support
☆130 · Updated last year
Alternatives and similar repositories for wrapyfi-examples_llama:
Users interested in wrapyfi-examples_llama are comparing it to the libraries listed below.
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆111 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆421 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated last year
- Inference code for LLaMA models ☆46 · Updated last year
- minichatgpt - To Train ChatGPT In 5 Minutes ☆167 · Updated last year
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models ☆163 · Updated last week
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated last year
- Plain PyTorch implementation of LLaMA ☆189 · Updated last year
- Train LLaMA with LoRA on a single RTX 4090 and merge the LoRA weights to obtain a Stanford Alpaca-style model ☆50 · Updated last year
- Due to LLaMA's license restrictions, a reimplementation of BLOOM-LoRA (under the much less restrictive BLOOM license: https://huggingface.co/spaces/bigs…) ☆185 · Updated last year
- CPU inference code for LLaMA models ☆137 · Updated last year
- Official repository for LongChat and LongEval ☆519 · Updated 8 months ago
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆301 · Updated last year
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated last year
- Tune MPTs ☆84 · Updated last year
- Embeddings-focused small version of the LLaMA NLP model ☆103 · Updated last year
- The data processing pipeline for the Koala chatbot language model ☆117 · Updated last year
- Reverse Instructions: generate instruction-tuning data from corpus examples ☆208 · Updated 11 months ago
- Tune any Falcon in 4-bit ☆466 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆72 · Updated last year
- Quantized inference code for LLaMA models ☆1,052 · Updated last year
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆226 · Updated last year
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, Pythia ☆41 · Updated last year
- Simple and fast server for GPTQ-quantized LLaMA inference ☆24 · Updated last year
- Inference code for LLaMA models ☆188 · Updated last year