NolanoOrg / llama-int4-quant
☆26 · Updated last year
Alternatives and similar repositories for llama-int4-quant:
Users interested in llama-int4-quant are comparing it to the libraries listed below.
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit · ☆63 · Updated last year
- Command-line script for running inference with models such as MPT-7B-Chat · ☆101 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca that aims to be the trainer for all large language models · ☆69 · Updated last year
- Reimplementation of the task-generation part of the Alpaca paper · ☆119 · Updated last year
- Tools for content datamining and NLP at scale · ☆42 · Updated 7 months ago
- tinygrad port of the RWKV large language model · ☆44 · Updated 7 months ago
- Latent Large Language Models · ☆17 · Updated 5 months ago
- The GeoV model is a large language model designed by Georges Harik that uses Rotary Positional Embeddings with Relative distances (RoPER)… · ☆121 · Updated last year
- ☆40 · Updated last year
- Rust bindings for CTranslate2 · ☆14 · Updated last year
- ☆36 · Updated 2 years ago
- GGML implementation of the BERT model with Python bindings and quantization · ☆53 · Updated 11 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite · ☆33 · Updated 10 months ago
- [WIP] Transformer to embed Danbooru label sets · ☆13 · Updated 9 months ago
- ☆54 · Updated last year
- Fast inference of instruct-tuned LLaMA on your personal devices · ☆22 · Updated last year
- Trying to deconstruct RWKV in understandable terms · ☆14 · Updated last year
- Instruct-tuning LLaMA on consumer hardware