kuleshov-group / llmtools
Finetuning Large Language Models on One Consumer GPU in 2 Bits
☆723 · Updated 11 months ago
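The recipe llmtools packages — keep the base model's weights frozen in very low precision and train only small LoRA adapters — is what lets a multi-billion-parameter model fit on a single consumer GPU. Below is a minimal sketch of that general pattern using the Hugging Face transformers/peft/bitsandbytes stack (4-bit NF4 here, since those libraries do not expose a 2-bit mode); this illustrates the approach, not llmtools' own API, and the model id is a placeholder:

```python
# Hedged sketch of the low-bit-base + LoRA-adapter finetuning pattern.
# Uses the standard Hugging Face stack at 4-bit NF4; llmtools itself
# works at 2 bits via its own quantizer and has a different API.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder causal LM checkpoint

# Quantize the frozen base weights so the model fits on one consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Train only small low-rank adapters on top of the quantized weights.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

Only the adapter weights receive gradients, so optimizer state is measured in megabytes rather than the tens of gigabytes full finetuning would need.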
Alternatives and similar repositories for llmtools
Users who are interested in llmtools are comparing it to the libraries listed below.
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆820 · Updated 2 years ago
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆629 · Updated last year
- Tune any FALCON in 4-bit ☆467 · Updated last year
- LOMO: LOw-Memory Optimization ☆986 · Updated 10 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆422 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆698 · Updated last year
- ☆535 · Updated last year
- ☆543 · Updated 5 months ago
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… ☆587 · Updated last year
- ☆458 · Updated last year
- batched loras ☆342 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆711 · Updated last year
- A bagel, with everything. ☆320 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,484 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆578 · Updated 10 months ago
- ☆412 · Updated last year
- Inference code for Mistral and Mixtral hacked up into the original Llama implementation ☆371 · Updated last year
- Official repository for LongChat and LongEval ☆519 · Updated 11 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆653 · Updated 11 months ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆687 · Updated 9 months ago
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆1,003 · Updated 8 months ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆352 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆3,051 · Updated 10 months ago
- ☆357 · Updated 2 years ago
- Alpaca dataset from Stanford, cleaned and curated ☆1,553 · Updated 2 years ago
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes (see the sketch after this list). https://arxiv.org/abs/2305.17333 ☆1,106 · Updated last year
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,104 · Updated last year
- ☆534 · Updated 8 months ago
- Customizable implementation of the self-instruct paper. ☆1,043 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,060 · Updated last year
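Several of the entries above cut finetuning memory by changing what is stored during training; MeZO is the extreme case, estimating gradients from two forward passes so no activations or backward graph are kept at all. Below is a minimal sketch of the zeroth-order (SPSA) step at the heart of that idea, in plain PyTorch; the `model`/`loss_fn`/`batch` interface and the hyperparameters are placeholder assumptions for illustration, not the MeZO repo's API:

```python
# Minimal sketch of a MeZO-style zeroth-order step: perturb all
# parameters with one shared random direction z, measure the loss at
# theta + eps*z and theta - eps*z, and step along z. z is regenerated
# from a stored seed, so it is never materialized in memory.
import torch

def mezo_step(model, loss_fn, batch, eps=1e-3, lr=1e-6):
    """One zeroth-order SGD step: two forward passes, no backward pass."""
    seed = torch.randint(0, 2**31 - 1, (1,)).item()

    def perturb(scale):
        # Regenerate the same direction z from the seed instead of
        # storing a full copy of the parameters or of z itself.
        gen = torch.Generator().manual_seed(seed)
        for p in model.parameters():
            z = torch.randn(p.shape, generator=gen).to(p.device)
            p.data.add_(scale * eps * z)

    with torch.no_grad():
        perturb(+1.0)                      # theta + eps * z
        loss_plus = loss_fn(model, batch)
        perturb(-2.0)                      # now at theta - eps * z
        loss_minus = loss_fn(model, batch)
        perturb(+1.0)                      # restore theta

        # Scalar projection of the gradient onto z, then an SGD update
        # along z, regenerated once more from the same seed.
        grad_proj = (loss_plus - loss_minus) / (2 * eps)
        gen = torch.Generator().manual_seed(seed)
        for p in model.parameters():
            z = torch.randn(p.shape, generator=gen).to(p.device)
            p.data.add_(-lr * float(grad_proj) * z)

    return float(loss_plus)
```

Because only a seed and two loss scalars are kept, peak memory stays close to inference cost, which is what lets forward-pass-only finetuning reach model sizes that backprop on the same hardware cannot.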