NolanoOrg / llama-int4-quant
☆26 · Updated 2 years ago
Alternatives and similar repositories for llama-int4-quant
Users interested in llama-int4-quant are comparing it to the libraries listed below.
- ☆39 · Updated 3 years ago
- Demonstration that finetuning a RoPE model on sequences longer than the pre-training length adapts the model's context limit ☆63 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca, aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- The GeoV model is a large language model designed by Georges Harik that uses Rotary Positional Embeddings with Relative distances (RoPER).… ☆121 · Updated 2 years ago
- Command-line script for running inference with models such as MPT-7B-Chat ☆100 · Updated 2 years ago
- Smol but mighty language model ☆63 · Updated 2 years ago
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆44 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆103 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago
- Rust bindings for CTranslate2 ☆14 · Updated 2 years ago
- A library for simplifying fine-tuning with multi-GPU setups in the Hugging Face ecosystem ☆16 · Updated last year
- A sample pattern for running CI tests on Modal ☆18 · Updated 7 months ago
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆24 · Updated last week
- A library for squeakily cleaning and filtering language datasets ☆49 · Updated 2 years ago
- The Next Generation Multi-Modality Superintelligence ☆70 · Updated last year
- Merge LLMs that are split into parts ☆27 · Updated 4 months ago
- Framework-agnostic Python runtime for RWKV models ☆147 · Updated 2 years ago
- Finetune any model on HF in less than 30 seconds ☆56 · Updated last month
- Multi-Domain Expert Learning ☆67 · Updated last year
- ☆74 · Updated 2 years ago
- QLoRA with Enhanced Multi-GPU Support ☆37 · Updated 2 years ago
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite ☆35 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆170 · Updated last year
- ☆63 · Updated last year
- Tune MPTs ☆84 · Updated 2 years ago
- Pre-training code for the CrystalCoder 7B LLM ☆55 · Updated last year
- Code base for internal reward models and PPO training ☆24 · Updated 2 years ago
- tinygrad port of the RWKV large language model ☆45 · Updated 8 months ago
- ☆18 · Updated 2 years ago
- ☆85 · Updated 2 years ago