NolanoOrg / llama-int4-quant
☆26 · Updated 2 years ago
Alternatives and similar repositories for llama-int4-quant
Users interested in llama-int4-quant are comparing it to the libraries listed below.
- ☆39 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- Rust bindings for CTranslate2 ☆14 · Updated last year
- Modified Stanford-Alpaca Trainer for Training Replit's Code Model ☆40 · Updated last year
- GGML implementation of the BERT model with Python bindings and quantization ☆56 · Updated last year
- Command-line script for running inference with models such as MPT-7B-Chat ☆101 · Updated last year
- Fast inference of instruct-tuned LLaMA on your personal devices ☆22 · Updated 2 years ago
- Inference code for mixtral-8x7b-32kseqlen ☆100 · Updated last year
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER)… ☆122 · Updated 2 years ago
- Finetune any model on HF in less than 30 seconds ☆58 · Updated last month
- The Next Generation Multi-Modality Superintelligence ☆71 · Updated 8 months ago
- RWKV model implementation ☆37 · Updated last year
- Tools for content data mining and NLP at scale ☆43 · Updated 10 months ago
- Merge LLMs that are split into parts ☆26 · Updated last year
- tinygrad port of the RWKV large language model ☆44 · Updated 2 months ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆33 · Updated last year
- Latent Large Language Models ☆18 · Updated 8 months ago
- Instruct-tune LLaMA on consumer hardware ☆74 · Updated last year
- ☆32 · Updated 2 years ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆19 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights to work like Stanford Alpaca ☆51 · Updated last year
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated 2 years ago
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆43 · Updated last year
- An EXA-Scale repository of Multi-Modality AI resources, from papers and models to foundational libraries! ☆42 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated last year
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, and Pythia ☆41 · Updated 2 years ago
- LLMs as Collaboratively Edited Knowledge Bases ☆45 · Updated last year