Watchfulio / dataset-generator
A new way to generate large quantities of high-quality synthetic data (on par with GPT-4), with better controllability, at a fraction of the cost of prompting LLMs directly.
☆22 · Updated 7 months ago
Alternatives and similar repositories for dataset-generator:
Users who are interested in dataset-generator are comparing it to the libraries listed below.
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆43 · Updated last year
- ☆48 · Updated 6 months ago
- Implementation of the paper: "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆54 · Updated 5 months ago
- Training hybrid models for dummies. ☆21 · Updated 3 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 8 months ago
- ☆15 · Updated last month
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- Advanced Reasoning Benchmark Dataset for LLMs ☆46 · Updated last year
- Latent Large Language Models ☆18 · Updated 8 months ago
- A fast, local, and secure approach for training LLMs for coding tasks using GRPO with WebAssembly and interpreter feedback. ☆22 · Updated last month
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated last year
- A framework for pitting LLMs against each other in an evolving library of games ⚔ ☆32 · Updated 3 weeks ago
- ☆22 · Updated last year
- A repository for research on medium-sized language models. ☆76 · Updated 11 months ago
- ☆33 · Updated 10 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆31 · Updated this week
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆39 · Updated 3 months ago
- ☆27 · Updated last month
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- ☆18 · Updated 7 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆42 · Updated 5 months ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- Simple GRPO scripts and configurations. ☆58 · Updated 3 months ago
- Nexusflow function call, tool use, and agent benchmarks. ☆19 · Updated 4 months ago
- ☆63 · Updated 7 months ago
- Generate interleaved text and image content in a structured format you can directly pass to downstream APIs. ☆27 · Updated 6 months ago
- Minimum Description Length probing for neural network representations ☆19 · Updated 3 months ago