kuleshov / minillm
MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs
☆866 · Updated last year
Related projects
Alternatives and complementary repositories for minillm
- Quantized inference code for LLaMA models ☆1,051 · Updated last year
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Flax ☆2,402 · Updated 2 months ago
- Simple UI for LLM Model Finetuning ☆2,046 · Updated 10 months ago
- 4-bit quantization of LLaMA using GPTQ ☆2,994 · Updated 3 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆706 · Updated 5 months ago
- Alpaca dataset from Stanford, cleaned and curated ☆1,515 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,019 · Updated 8 months ago
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM ☆1,456 · Updated 5 months ago
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer ☆1,618 · Updated last year
- The complete training code for an open-source, high-performance Llama model, covering the full process from pre-training to RLHF. ☆26 · Updated last year
- C++ implementation for BLOOM ☆811 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆811 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,759 · Updated last year
- Tune any FALCON in 4-bit ☆468 · Updated last year
- Fork of Facebook's LLaMA model to run on CPU ☆771 · Updated last year
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆1,923 · Updated 7 months ago
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,267 · Updated last week
- A tiny library for coding with large language models. ☆1,212 · Updated 4 months ago
- A school for camelids ☆1,208 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,041 · Updated 9 months ago
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆702 · Updated last year
- Python bindings for Transformer models implemented in C/C++ using the GGML library. ☆1,811 · Updated 9 months ago
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆968 · Updated 2 months ago
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,053 · Updated 8 months ago
- C++ implementation for 💫StarCoder ☆445 · Updated last year
- UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click run on Google Colab. + A Gradio ChatGPT… ☆444 · Updated last year