LLM360 / k2-data-prep
☆21, updated last year
Alternatives and similar repositories for k2-data-prep
Users interested in k2-data-prep are comparing it to the repositories listed below.
- Data preparation code for CrystalCoder 7B LLM (☆45, updated last year)
- ☆39, updated last year
- ☆67, updated 9 months ago
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer (☆45, updated last year)
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. (☆35, updated last year)
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models (☆22, updated last year)
- ☆55, updated last year
- ☆17, updated 8 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment (☆62, updated last year)
- A repository for research on medium-sized language models. (☆77, updated last year)
- Official Repository for Task-Circuit Quantization (☆24, updated 6 months ago)
- GPT-4 Level Conversational QA Trained In a Few Hours (☆66, updated last year)
- Data preparation code for Amber 7B LLM (☆94, updated last year)
- ☆53, updated last year
- Nexusflow function call, tool use, and agent benchmarks. (☆30, updated last year)
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna (☆59, updated 2 months ago)
- Verifiers for LLM Reinforcement Learning (☆80, updated 8 months ago)
- Multi-Granularity LLM Debugger [ICSE 2026] (☆94, updated 5 months ago)
- ☆51, updated last year
- ☆63, updated last year
- Finetune any model on HF in less than 30 seconds (☆56, updated 2 months ago)
- FuseAI Project (☆87, updated 11 months ago)
- ☆27, updated last year
- ☆52, updated last year
- ☆48, updated last year
- Pre-training code for CrystalCoder 7B LLM (☆55, updated last year)
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" (☆40, updated last year)
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions. (☆20, updated last year)
- The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" [EMNLP 2025] (☆33, updated 3 months ago)
- ☆40, updated last year