AlekseyKorshuk / chat-data-pipeline
Chat data cleaning, filtering and deduplication pipeline.
☆19 · Updated 2 years ago
Alternatives and similar repositories for chat-data-pipeline
Users interested in chat-data-pipeline are comparing it to the libraries listed below.
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER).… ☆121 · Updated 2 years ago
- Tools for content datamining and NLP at scale ☆43 · Updated last year
- A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs ☆116 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- OSLO: Open Source for Large-scale Optimization ☆175 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- O-GIA is an umbrella for research, infrastructure and projects ecosystem that should provide open source, reproducible datasets, models, … ☆90 · Updated 2 years ago
- [WIP] A 🔥 interface for running code in the cloud ☆85 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 2 months ago
- ☆13 · Updated 2 years ago
- Code base for internal reward models and PPO training ☆25 · Updated last year
- Tune MPTs ☆84 · Updated 2 years ago
- Command-line script for inferencing from models such as LLaMA, in a chat scenario, with LoRA adaptations ☆33 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model adapts the model context limit ☆63 · Updated 2 years ago
- LoRA fine-tuned Stable Diffusion Deployment ☆31 · Updated 2 years ago
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated last year
- Manage histories for LLM-powered applications ☆91 · Updated last year
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated last year
- ☆22 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated 10 months ago
- Fine-tune Mistral 7B to generate fashion style suggestions ☆34 · Updated last year
- **ARCHIVED** Filesystem interface to 🤗 Hub ☆58 · Updated 2 years ago
- ☆32 · Updated last year
- ☆156 · Updated 2 years ago
- Pixel Parsing. A reproduction of OCR-free end-to-end document understanding models with open data ☆21 · Updated last year