LLM360 / k2-data-prep
☆21 · Updated last year
Alternatives and similar repositories for k2-data-prep
Users interested in k2-data-prep are comparing it to the libraries listed below.
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆35 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- ☆67 · Updated 9 months ago
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆45 · Updated last year
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆23 · Updated last year
- ☆55 · Updated last year
- ☆39 · Updated last year
- A repository for research on medium-sized language models. ☆77 · Updated last year
- ☆17 · Updated 9 months ago
- Verifiers for LLM Reinforcement Learning ☆80 · Updated 9 months ago
- ☆52 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- ☆51 · Updated last year
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 3 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated last year
- Nexusflow function call, tool use, and agent benchmarks. ☆30 · Updated last year
- XmodelLM ☆38 · Updated last year
- ☆27 · Updated last year
- ☆39 · Updated 8 months ago
- This is the official repository for Inheritune. ☆120 · Updated 11 months ago
- ☆32 · Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs ☆73 · Updated last year
- Data preparation code for Amber 7B LLM ☆94 · Updated last year
- ☆63 · Updated last year
- Implementation of the Mamba SSM with hf_integration. ☆55 · Updated last year
- Experimental code for StructuredRAG: JSON Response Formatting with Large Language Models ☆114 · Updated 9 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca, aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- ☆54 · Updated this week
- Finetune any model on HF in less than 30 seconds ☆56 · Updated last week
- ☆48 · Updated last year