CarperAI / treasure_trove
☆22 · Updated last year
Alternatives and similar repositories for treasure_trove
Users who are interested in treasure_trove are comparing it to the libraries listed below.
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated 2 years ago
- ☆47 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated 2 years ago
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆30 · Updated 11 months ago
- Multi-Domain Expert Learning ☆67 · Updated last year
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training context extends the model's context limit ☆63 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Chat Markup Language conversation library ☆55 · Updated last year
- ☆61 · Updated last year
- ☆93 · Updated last year
- ☆49 · Updated last year
- QLoRA for Masked Language Modeling ☆22 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- ☆63 · Updated 11 months ago
- ☆23 · Updated 2 years ago
- Code for removing benchmark data from your training data to help combat data snooping. ☆25 · Updated 2 years ago
- Simplex Random Feature attention, in PyTorch ☆74 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given…