kuleshov/minillm
MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs
☆897 · Updated last year
Alternatives and similar repositories for minillm:
Users interested in minillm are comparing it to the libraries listed below.
- ☆1,455 · Updated last year
- Simple UI for LLM Model Finetuning ☆2,059 · Updated last year
- Quantized inference code for LLaMA models ☆1,051 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆42 · Updated last year
- Alpaca dataset from Stanford, cleaned and curated ☆1,537 · Updated last year
- C++ implementation for BLOOM ☆810 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆819 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆3,042 · Updated 7 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆718 · Updated 9 months ago
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆2,457 · Updated 6 months ago
- ☆536 · Updated last year
- LLM as a Chatbot Service ☆3,305 · Updated last year
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆448 · Updated 11 months ago
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer ☆1,626 · Updated last year
- Fork of Facebook's LLaMA model to run on CPU ☆772 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,038 · Updated 11 months ago
- Salesforce open-source LLMs with 8k sequence length. ☆716 · Updated last month
- ☆457 · Updated last year
- Inference code for Persimmon-8B ☆416 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,058 · Updated 11 months ago
- Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-sour… ☆2,638 · Updated 5 months ago
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆706 · Updated last year
- C++ implementation for 💫StarCoder ☆451 · Updated last year
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. ☆1,843 · Updated last year
- Tune any FALCON in 4-bit ☆466 · Updated last year
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers" ☆2,044 · Updated 11 months ago
- Run LLaMA (and Stanford-Alpaca) inference on Apple Silicon GPUs. ☆585 · Updated last year
- Code for fine-tuning Platypus fam LLMs using LoRA ☆628 · Updated last year
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,391 · Updated 2 months ago
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,663 · Updated 2 months ago