Watchfulio / dataset-generator
A new way to generate large quantities of high-quality synthetic data (on par with GPT-4), with better controllability, at a fraction of the cost of prompting LLMs directly.
☆23 · Updated last year
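The premise above is controllability: instead of free-form prompting, generation is driven by structured specifications. The sketch below is purely illustrative and does not use dataset-generator's actual API; the `make_spec` and `generate_examples` names are hypothetical, and a real pipeline would send each spec to an LLM.

```python
import random

# Hypothetical sketch only -- not dataset-generator's real API.
# Controllability comes from structured specs rather than free-form prompts.

def make_spec(topic: str, difficulty: str) -> dict:
    """Build one controllable generation spec (hypothetical helper)."""
    return {
        "topic": topic,
        "difficulty": difficulty,
        "instruction_template": f"Write a {difficulty} question about {topic}.",
    }

def generate_examples(topics, difficulties, n, seed=0):
    """Sample n specs; a real pipeline would pass each spec to an LLM."""
    rng = random.Random(seed)  # seeded for reproducible batches
    return [make_spec(rng.choice(topics), rng.choice(difficulties))
            for _ in range(n)]

batch = generate_examples(["sorting algorithms", "SQL joins"],
                          ["easy", "hard"], n=4)
for spec in batch:
    print(spec["instruction_template"])
```

Because the sampler is seeded, the same spec batch is reproduced on every run, which is one simple way to make synthetic-data generation auditable.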
Alternatives and similar repositories for dataset-generator
Users interested in dataset-generator are comparing it to the libraries listed below.
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- ☆56 · Updated last year
- Implementation of the paper "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆69 · Updated last year
- Demonstration that fine-tuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- Script for processing OpenAI's PRM800K process-supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated 2 years ago
- Official repo for the NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions" ☆66 · Updated 2 years ago
- Advanced Reasoning Benchmark Dataset for LLMs ☆47 · Updated 2 years ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆23 · Updated last year
- Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer (NeurIPS 2023) ☆46 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- Latent Large Language Models ☆19 · Updated last year
- ☆41 · Updated last year
- Simple GRPO scripts and configurations ☆59 · Updated last year
- Data-preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- Pre-training code for the CrystalCoder 7B LLM ☆57 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite ☆34 · Updated last year
- ☆22 · Updated 2 years ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 3 months ago
- Code repository for the c-BTM paper ☆108 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets ☆49 · Updated 2 years ago
- ☆17 · Updated 10 months ago
- Mixture-of-Experts (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations ☆12 · Updated last year
- ☆63 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆94 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆112 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated 2 years ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Based on the Tree of Thoughts paper ☆48 · Updated 2 years ago