LLM360 / crystalcoder-data-prep
Data preparation code for CrystalCoder 7B LLM
☆45 · Updated last year
Alternatives and similar repositories for crystalcoder-data-prep
Users interested in crystalcoder-data-prep are comparing it to the libraries listed below.
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated last year
- Data preparation code for Amber 7B LLM ☆91 · Updated last year
- ☆54 · Updated 10 months ago
- Open Implementations of LLM Analyses ☆106 · Updated 11 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated 10 months ago
- ☆39 · Updated last year
- The official repository for Inheritune. ☆113 · Updated 7 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆34 · Updated last year
- A repository for research on medium-sized language models. ☆77 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆72 · Updated 4 months ago
- ☆51 · Updated last year
- ☆67 · Updated 5 months ago
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆80 · Updated last year
- FuseAI Project ☆87 · Updated 7 months ago
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆43 · Updated last year
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 9 months ago
- ☆48 · Updated last year
- Score LLM pretraining data with classifiers ☆55 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- Advanced Reasoning Benchmark Dataset for LLMs ☆47 · Updated last year
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 7 months ago
- ☆35 · Updated 2 years ago
- LLMs as Collaboratively Edited Knowledge Bases ☆45 · Updated last year
- Official repository for "BLEUBERI: BLEU is a surprisingly effective reward for instruction following" ☆25 · Updated 3 months ago
- ☆23 · Updated 2 weeks ago
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆29 · Updated last year
- A pipeline for LLM knowledge distillation ☆107 · Updated 5 months ago
- ☆50 · Updated 11 months ago