ConiferLabsWA / flan-ul2-dolly
☆34 · Updated 2 years ago
Alternatives and similar repositories for flan-ul2-dolly
Users interested in flan-ul2-dolly are comparing it to the repositories listed below.
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated last year
- ☆32 · Updated 2 years ago
- ☆23 · Updated last year
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all large language models ☆69 · Updated last year
- Training and inference notebooks for the RedPajama (OpenLlama) models ☆18 · Updated 2 years ago
- ☆47 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- ☆22 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes ☆82 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated last year
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆30 · Updated 9 months ago
- Source code for the paper "Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints" ☆28 · Updated 2 years ago
- Using short models to classify long texts ☆21 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- ☆44 · Updated 7 months ago
- One-stop shop for all things carp ☆59 · Updated 2 years ago
- A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs ☆114 · Updated 2 years ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- ☆94 · Updated 6 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training adapts the model's context limit ☆63 · Updated 2 years ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆64 · Updated last year
- Supervised instruction finetuning for LLMs with the HF Trainer and DeepSpeed ☆35 · Updated last year
- ☆18 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset ☆93 · Updated 2 years ago
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER)… ☆121 · Updated 2 years ago
- ☆37 · Updated 2 years ago
- Advanced Reasoning Benchmark Dataset for LLMs ☆46 · Updated last year