CarperAI / squeakily
A library for squeakily cleaning and filtering language datasets.
☆45 · Updated last year
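To make the library's scope concrete, here is a minimal, hypothetical sketch of the clean-then-filter pattern such a library automates, written against plain Hugging Face `datasets`; the rule names (`normalize_whitespace`, `check_char_repetition`) and thresholds are illustrative assumptions, not squeakily's actual API.

```python
# Hypothetical sketch of dataset cleaning and filtering with Hugging Face
# `datasets`; the rule names below are illustrative, not squeakily's API.
from datasets import load_dataset

def normalize_whitespace(example):
    # Cleaner: collapse runs of whitespace in the text column.
    example["text"] = " ".join(example["text"].split())
    return example

def check_char_repetition(example, threshold=0.2):
    # Filter: drop documents dominated by a single repeated character.
    text = example["text"]
    if not text:
        return False
    most_common = max(text.count(c) for c in set(text))
    return most_common / len(text) < threshold

ds = load_dataset("wikitext", "wikitext-103-raw-v1", split="train[:1%]")
ds = ds.map(normalize_whitespace)      # apply cleaners first
ds = ds.filter(check_char_repetition)  # then drop low-quality rows
print(ds)
```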
Alternatives and similar repositories for squeakily:
Users interested in squeakily are comparing it to the libraries listed below.
- Code for cleaning benchmark data out of your training data to help combat data snooping (see the decontamination sketch after this list). ☆25 · Updated last year
- ☆22 · Updated last year
- ☆24 · Updated last year
- Experiments with generating open-source language model assistants. ☆97 · Updated last year
- Using short models to classify long texts. ☆21 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts. ☆24 · Updated 11 months ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit (see the RoPE illustration after this list). ☆63 · Updated last year
- A pipeline for using API calls to agnostically convert unstructured data into structured training data. ☆29 · Updated 4 months ago
- ☆32 · Updated last year
- ☆44 · Updated 2 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile. ☆115 · Updated last year
- Embedding Recycling for Language Models. ☆38 · Updated last year
- QLoRA with Enhanced Multi-GPU Support. ☆36 · Updated last year
- ☆37 · Updated last year
- Genalog is an open-source, cross-platform Python package allowing generation of synthetic document images with custom degradations and te… ☆42 · Updated last year
- Multi-Domain Expert Learning. ☆67 · Updated last year
- PyTorch implementation of MRL (Matryoshka Representation Learning). ☆18 · Updated 11 months ago
- HomebrewNLP in JAX flavour for maintainable TPU training. ☆48 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Code for the NeurIPS LLM Efficiency Challenge. ☆55 · Updated 10 months ago
- Index of URLs to PDF files all over the internet, plus scripts. ☆21 · Updated last year
- [WIP] A 🔥 interface for running code in the cloud. ☆86 · Updated last year
- ☆48 · Updated last year
- Utilities for Training Very Large Models. ☆57 · Updated 4 months ago
- Ranking of fine-tuned HF models as base models. ☆35 · Updated last year
- Supercharge Hugging Face Transformers with model parallelism. ☆76 · Updated 4 months ago
- ☆60 · Updated last year
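The benchmark-decontamination entry at the top of the list rests on removing training documents that overlap with evaluation data. Below is a minimal sketch of that idea using n-gram overlap on whitespace tokens; the 13-gram window (a common choice, following GPT-3's decontamination), the datasets, and the field names are assumptions, not that repo's exact code.

```python
# Hypothetical sketch of n-gram decontamination; datasets, field names,
# and the 13-gram window are assumptions, not the linked repo's code.
from datasets import load_dataset

def ngrams(text, n=13):
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

# Collect n-grams from the benchmark we want kept out of training data.
benchmark = load_dataset("gsm8k", "main", split="test")
contaminated = set()
for ex in benchmark:
    contaminated |= ngrams(ex["question"])

# Drop any training document sharing an n-gram with the benchmark.
train = load_dataset("wikitext", "wikitext-103-raw-v1", split="train[:1%]")
clean = train.filter(lambda ex: ngrams(ex["text"]).isdisjoint(contaminated))
```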
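Similarly, the RoPE context-extension entry rests on the fact that rotary embeddings rotate query/key pairs by angles proportional to absolute position, so positions past the pre-training length produce rotations the model has never seen; finetuning on longer sequences adapts the model to them. A short illustration of those angles follows; the interpolation scale shown is a separate named technique (position interpolation), not necessarily that repo's method.

```python
# Why RoPE limits context: rotation angles grow linearly with position,
# so positions past the pre-training length are unseen. The `scale`
# argument demonstrates position interpolation, a related technique.
import torch

def rope_angles(positions, dim=64, base=10000.0, scale=1.0):
    # scale < 1.0 compresses positions back into the pre-trained range,
    # e.g. scale = 2048 / 8192 when extending a 2k model to 8k.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return torch.outer(positions.float() * scale, inv_freq)

pos = torch.arange(8192)
plain = rope_angles(pos)                      # angles exceed the 2k-trained range
interp = rope_angles(pos, scale=2048 / 8192)  # compressed back into that range
```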