NolanoOrg / llama-int4-quant
☆26 · Updated last year
Alternatives and similar repositories for llama-int4-quant:
Users interested in llama-int4-quant are comparing it to the libraries listed below.
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated last year
- ☆37 · Updated 2 years ago
- Command-line script for running inference on models such as MPT-7B-Chat ☆101 · Updated last year
- ☆27 · Updated last year
- ☆16 · Updated 11 months ago
- Merge LLMs that are split into parts ☆26 · Updated last year
- Modified Stanford-Alpaca Trainer for Training Replit's Code Model ☆40 · Updated last year
- Rust bindings for CTranslate2 ☆14 · Updated last year
- Fast inference of instruct-tuned LLaMA on your personal devices ☆22 · Updated last year
- ☆32 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆99 · Updated last year
- ☆14 · Updated last year
- A library for squeakily cleaning and filtering language datasets ☆46 · Updated last year
- GGML implementation of the BERT model with Python bindings and quantization ☆54 · Updated last year
- A library for simplifying fine-tuning with multi-GPU setups in the Hugging Face ecosystem ☆16 · Updated 4 months ago
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated last year
- ☆54 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated 11 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆33 · Updated last year
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆29 · Updated 5 months ago
- The Next Generation Multi-Modality Superintelligence ☆71 · Updated 6 months ago
- Latent Large Language Models ☆17 · Updated 6 months ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated last year
- ☆49 · Updated 11 months ago
- Tune MPTs ☆84 · Updated last year
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER)… ☆121 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- This repository contains code for removing benchmark data from your training data, to help combat data snooping ☆25 · Updated last year
- RWKV model implementation ☆37 · Updated last year