rasbt / gradient-accumulation-blog
Finetuning BLOOM on a single GPU using gradient-accumulation
☆30 · Updated 2 years ago
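For context, gradient accumulation simulates a large effective batch size on a single GPU by accumulating gradients over several micro-batches before each optimizer step. Below is a minimal PyTorch sketch of the idea; the model, data, and `accumulation_steps` value are generic placeholders, not code from this repository:

```python
import torch

accumulation_steps = 4  # effective batch size = micro-batch size * 4 (placeholder value)

model = torch.nn.Linear(10, 2)  # stand-in for a real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()
# Dummy dataloader: 16 micro-batches of 8 samples each
loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(16)]

optimizer.zero_grad()
for step, (inputs, labels) in enumerate(loader):
    loss = loss_fn(model(inputs), labels)
    # Scale the loss so accumulated gradients average over micro-batches
    (loss / accumulation_steps).backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()       # one weight update per N micro-batches
        optimizer.zero_grad()
```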
Alternatives and similar repositories for gradient-accumulation-blog
Users interested in gradient-accumulation-blog are comparing it to the repositories listed below.
- Tools for merging pretrained large language models. ☆19 · Updated last year
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆33 · Updated last month
- ☆23 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models. ☆69 · Updated last year
- Code for the NeurIPS LLM Efficiency Challenge. ☆59 · Updated last year
- Minimal scripts for 24GB VRAM GPUs: training, inference, whatever. ☆40 · Updated last week
- ☆16 · Updated last year
- Training and inference notebooks for the RedPajama (OpenLlama) models. ☆18 · Updated 2 years ago
- ☆14 · Updated 8 months ago
- Tools for content datamining and NLP at scale. ☆43 · Updated last year
- Minimal zero-shot intent classifier for arbitrary intent slot filling, via LLM prompting with LangChain. ☆33 · Updated 2 years ago
- ☆37 · Updated 2 years ago
- Official repository for "BLEUBERI: BLEU is a surprisingly effective reward for instruction following". ☆23 · Updated 3 weeks ago
- Simple GRPO scripts and configurations. ☆58 · Updated 4 months ago
- Evaluation of the BM42 sparse indexing algorithm. ☆68 · Updated 11 months ago
- ☆47 · Updated 4 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- Finetune any model on HF in less than 30 seconds. ☆57 · Updated 2 months ago
- Repository containing awesome resources regarding Hugging Face tooling. ☆47 · Updated last year
- ☆29 · Updated 5 months ago
- Fast approximate inference on a single GPU with sparsity-aware offloading. ☆38 · Updated last year
- QLoRA with enhanced multi-GPU support. ☆37 · Updated last year
- A gzip-based text-classification system. ☆33 · Updated last year
- ☆32 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pretrained on extends the model's context limit. ☆63 · Updated 2 years ago
- Set of scripts to finetune LLMs. ☆37 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Large-scale distributed model training strategy with Colossal AI and Lightning AI. ☆57 · Updated last year
- ☆28 · Updated 2 years ago
- ☆20 · Updated 3 years ago