Watchfulio / dataset-generator
A new way to generate large quantities of high-quality synthetic data (on par with GPT-4) with better controllability, at a fraction of the cost of prompting LLMs directly.
☆21 · Updated last month
Related projects
Alternatives and complementary repositories for dataset-generator
- Zeus LLM Trainer is a rewrite of Stanford Alpaca, aiming to be the trainer for all Large Language Models · ☆69 · Updated last year
- Demonstration that fine-tuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit · ☆63 · Updated last year
- Public reports detailing responses to sets of prompts by Large Language Models · ☆25 · Updated last year
- Training hybrid models for dummies · ☆15 · Updated last week
- Latent Large Language Models · ☆16 · Updated 2 months ago
- Measuring and Controlling Persona Drift in Language Model Dialogs · ☆11 · Updated 8 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format · ☆25 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM · ☆42 · Updated 6 months ago
- GoldFinch and other hybrid transformer components · ☆39 · Updated 3 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts · ☆22 · Updated 7 months ago
- Implementation of https://arxiv.org/pdf/2312.09299 · ☆19 · Updated 4 months ago
- Plug-and-play implementation of "Certified Reasoning with Language Models" that elevates model reasoning by 40% · ☆15 · Updated last year
- Implementation of the paper "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" · ☆38 · Updated 2 weeks ago
- Official repo for the NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions" · ☆62 · Updated last year
- Code for cleaning benchmark data out of your training data to help combat data snooping · ☆25 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite · ☆33 · Updated 8 months ago
- A repository for research on medium-sized language models · ☆74 · Updated 5 months ago
- Generate high-quality textual or multi-modal datasets with agents · ☆17 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… · ☆34 · Updated last year
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning (https://arxiv.org/pdf/2410.01044) · ☆30 · Updated last month
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … · ☆36 · Updated last year