huggingface / cosmopedia
☆484 · Updated last month
Alternatives and similar repositories for cosmopedia:
Users interested in cosmopedia are comparing it to the libraries listed below.
- Official repository for ORPO ☆430 · Updated 7 months ago
- Generative Representational Instruction Tuning ☆584 · Updated 2 months ago
- Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality s… ☆565 · Updated last week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆447 · Updated 9 months ago
- An Open Source Toolkit For LLM Distillation ☆425 · Updated last week
- The official evaluation suite and dynamic data release for MixEval. ☆233 · Updated 2 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆247 · Updated 6 months ago
- awesome synthetic (text) datasets ☆253 · Updated 2 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆297 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆491 · Updated last week
- A bagel, with everything. ☆315 · Updated 9 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆970 · Updated this week
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆538 · Updated 10 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆426 · Updated 4 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆346 · Updated 4 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆785 · Updated 2 weeks ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆687 · Updated 3 months ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆221 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆197 · Updated 2 months ago
- ☆303 · Updated 7 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆377 · Updated 3 months ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆388 · Updated 8 months ago
- DSIR large-scale data selection framework for language model training ☆242 · Updated 9 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆324 · Updated last year
- ☆493 · Updated 4 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆382 · Updated 9 months ago
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets ☆200 · Updated 2 months ago
- A simple unified framework for evaluating LLMs ☆164 · Updated 3 weeks ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆524 · Updated last month
- Code for Quiet-STaR ☆698 · Updated 4 months ago