Alignment-Lab-AI / datagen
A pipeline for using API calls to agnostically convert unstructured data into structured training data.
☆30 · Updated 6 months ago
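The core idea — prompting an API-served model to turn unstructured text into structured training examples — can be sketched roughly as below. The prompt wording, the `extract_records` helper, and the `instruction`/`response` JSON schema are illustrative assumptions for this sketch, not datagen's actual interface; a real run would send `build_prompt(...)` to a model API and parse the reply the same way.

```python
import json

def build_prompt(raw_text: str) -> str:
    # Hypothetical prompt: ask the model to emit one JSON object per line
    # with "instruction" and "response" keys (an assumed schema, not
    # necessarily the one datagen uses).
    return (
        "Convert the following unstructured text into training examples.\n"
        'Emit one JSON object per line with keys "instruction" and "response".\n\n'
        f"TEXT:\n{raw_text}"
    )

def extract_records(model_reply: str) -> list[dict]:
    # Parse the model's reply defensively: keep only lines that are valid
    # JSON objects containing both expected keys, and skip everything else
    # (preamble, malformed lines, stray prose).
    records = []
    for line in model_reply.splitlines():
        line = line.strip()
        if not line.startswith("{"):
            continue
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue
        if isinstance(obj, dict) and {"instruction", "response"} <= obj.keys():
            records.append(obj)
    return records

# A canned reply stands in for a real API call in this sketch.
reply = (
    "Here are the examples:\n"
    '{"instruction": "What is datagen?", "response": "A data pipeline."}\n'
    "not json\n"
    '{"instruction": "Name one use.", "response": "Building training sets."}'
)
print(extract_records(reply))
```

The defensive line-by-line parse matters because model replies often mix prose with the requested structure; filtering to valid, complete objects keeps the resulting training set clean.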
Alternatives and similar repositories for datagen:
Users interested in datagen are comparing it to the repositories listed below.
- ☆22 · Updated last year
- ☆24 · Updated last year
- ☆48 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆46 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit. ☆63 · Updated last year
- ☆48 · Updated 5 months ago
- QLoRA for Masked Language Modeling. ☆22 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models. ☆69 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- This repository contains code for cleaning your training data of benchmark data to help combat data snooping. ☆25 · Updated last year
- Writing Blog Posts with Generative Feedback Loops! ☆47 · Updated last year
- Public reports detailing responses to sets of prompts by Large Language Models. ☆30 · Updated 3 months ago
- A sample pattern for running CI tests on Modal. ☆17 · Updated this week
- ☆40 · Updated 2 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts. ☆24 · Updated last year
- QLoRA with enhanced multi-GPU support. ☆37 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna. ☆39 · Updated 2 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs. ☆48 · Updated this week
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- ☆10 · Updated 5 months ago
- ☆20 · Updated last year
- ☆22 · Updated 11 months ago
- PyTorch implementation for MRL. ☆18 · Updated last year
- Supervised instruction finetuning for LLMs with the HF trainer and DeepSpeed. ☆34 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Training hybrid models for dummies. ☆20 · Updated 2 months ago
- ☆30 · Updated 9 months ago
- Creating Generative AI apps which work. ☆17 · Updated 9 months ago
- Training and inference notebooks for the RedPajama (OpenLLaMA) models. ☆18 · Updated last year
- ☆19 · Updated last year