JulesGM / peft_qlora
☆14 · Updated 2 years ago
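The repo covers QLoRA-style fine-tuning built on Hugging Face `peft`. For orientation, here is a minimal sketch of the general QLoRA recipe (a 4-bit quantized, frozen base model plus trainable LoRA adapters); the model name, target modules, and hyperparameters below are illustrative assumptions, not taken from this repository.

```python
# Minimal QLoRA-style setup sketch (illustrative; not this repo's exact API).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "facebook/opt-350m"  # placeholder; any causal LM works

# Load the frozen base model in 4-bit NF4 precision.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)  # casts norms, enables input grads

# Attach small trainable LoRA adapters; only these receive gradient updates.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```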
Alternatives and similar repositories for peft_qlora
Users interested in peft_qlora are comparing it to the libraries listed below.
- Seminar on Large Language Models (COMP790-101 at UNC Chapel Hill, Fall 2022)☆310 · Updated 2 years ago
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference, and more☆208 · Updated last month
- Fast & Simple repository for pre-training and fine-tuning T5-style models☆1,005 · Updated 10 months ago
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging Face Transformers☆573 · Updated last year
- Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate nearest neighbors☆634 · Updated 2 years ago
- git extension for {collaborative, communal, continual} model development☆214 · Updated 8 months ago
- A prize for finding tasks that cause large language models to show inverse scaling☆613 · Updated last year
- Convolutions for Sequence Modeling☆891 · Updated last year
- A crude RLHF layer on top of nanoGPT with Gumbel-Softmax trick (see the Gumbel-Softmax sketch after this list)☆290 · Updated last year
- LLM finetuning in resource-constrained environments.☆50 · Updated last year
- Used for adaptive human-in-the-loop evaluation of language and embedding models.☆309 · Updated 2 years ago
- Task-based datasets, preprocessing, and evaluation for sequence models.☆583 · Updated 2 months ago
- Train very large language models in Jax.☆204 · Updated last year
- ☆256 · Updated last month
- Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch☆869 · Updated last year
- Package to compute Mauve, a similarity score between neural text and human text. Install with `pip install mauve-text` (usage sketch after this list).☆293 · Updated last year
- Language Modeling with the H3 State Space Model☆519 · Updated last year
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes☆239 · Updated 2 years ago
- ☆358 · Updated last year
- Emergent world representations: Exploring a sequence model trained on a synthetic task☆182 · Updated 2 years ago
- Tools for understanding how transformer predictions are built layer-by-layer☆505 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day☆256 · Updated last year
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" (see the (IA)³ sketch after this list)☆452 · Updated last year
- Tools to connect to and interact with the Mila cluster☆75 · Updated last week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax☆607 · Updated this week
- Simple next-token-prediction for RLHF☆228 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wikitext-103☆348 · Updated 11 months ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization)
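For the "crude RLHF layer on top of nanoGPT" entry above: the Gumbel-Softmax trick it mentions replaces hard token sampling with a differentiable relaxation, so a reward signal can backpropagate into the policy logits. A toy PyTorch sketch of the trick itself (my own illustration, not that repo's code):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 8, requires_grad=True)  # toy (batch, vocab) policy logits

# hard=True: one-hot "samples" in the forward pass, soft relaxation in the
# backward pass (straight-through estimator).
samples = F.gumbel_softmax(logits, tau=1.0, hard=True)

toy_reward = samples[:, 0].sum()  # pretend token 0 earns reward
toy_reward.backward()
print(logits.grad.abs().sum() > 0)  # tensor(True): gradients reach the logits
```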
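For the `mauve-text` entry above, a hedged usage sketch: `compute_mauve` and its argument names follow the package's documented API, while the example texts are placeholders.

```python
import mauve

human_texts = ["The cat sat on the mat.", "It rained all afternoon."]
model_texts = ["A cat sat on the mat.", "Rain fell through the afternoon."]

# Featurizes both text sets with a GPT-2 embedding model, clusters the
# features, and compares the two histograms along a divergence frontier.
out = mauve.compute_mauve(
    p_text=human_texts,
    q_text=model_texts,
    device_id=-1,         # -1 = CPU; pass a GPU index to speed up featurization
    max_text_length=256,
)
print(out.mauve)  # scalar in (0, 1]; higher means the distributions are closer
```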
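For the T-Few entry above: the underlying (IA)³ method trains only per-dimension rescaling vectors applied to frozen activations, which is why it is so cheap. A compact sketch of that idea (a paraphrase of the technique, not the repo's implementation):

```python
import torch
import torch.nn as nn

class IA3Linear(nn.Module):
    """Frozen linear layer whose output is rescaled by a learned vector."""
    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # base weights stay frozen
        self.scale = nn.Parameter(torch.ones(base.out_features))  # the (IA)^3 vector

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) * self.scale

layer = IA3Linear(nn.Linear(16, 32))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 32: one learned scale per output feature
```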