LLM360 / crystalcoder-data-prep
Data preparation code for CrystalCoder 7B LLM
☆44 · Updated last year
Alternatives and similar repositories for crystalcoder-data-prep
Users interested in crystalcoder-data-prep are comparing it to the libraries listed below.
- Pre-training code for CrystalCoder 7B LLM ☆54 · Updated last year
- Data preparation code for Amber 7B LLM ☆90 · Updated last year
- ☆34 · Updated 11 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆58 · Updated last year
- The official code repo and data hub for the top_nsigma (top-nσ) sampling strategy for LLMs; see the sketch after this list ☆25 · Updated 3 months ago
- ☆24 · Updated 8 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging ☆36 · Updated last year
- LLMs as Collaboratively Edited Knowledge Bases ☆45 · Updated last year
- ☆30 · Updated last month
- ☆49 · Updated 6 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna; a group-relative advantage sketch follows this list ☆53 · Updated 3 months ago
- EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… ☆26 · Updated 5 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆41 · Updated 6 months ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 6 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆78 · Updated last year
- ☆62 · Updated 10 months ago
- Simple GRPO scripts and configurations ☆58 · Updated 3 months ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆37 · Updated last year
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning (https://arxiv.org/pdf/2410.01044) ☆33 · Updated 7 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆33 · Updated last year
- Code for KaLM-Embedding models ☆77 · Updated 2 months ago
- ☆76 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Nexusflow function-calling, tool-use, and agent benchmarks ☆19 · Updated 5 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆37 · Updated last year
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832) ☆80 · Updated last year
- Open Implementations of LLM Analyses ☆103 · Updated 7 months ago
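
Two of the listed repos name techniques concrete enough to sketch. First, top-nσ sampling (the top_nsigma repo above): as described in its accompanying paper, it keeps only tokens whose logit lies within n standard deviations of the maximum logit and samples from the renormalized distribution. The snippet below is a minimal sketch of that rule, not the repo's actual code; the function name and the n=1.0 default are illustrative.

```python
import torch

def top_nsigma_filter(logits: torch.Tensor, n: float = 1.0) -> torch.Tensor:
    # Keep tokens whose logit is within n standard deviations of the max;
    # mask everything else to -inf so softmax assigns it zero probability.
    max_logit = logits.max(dim=-1, keepdim=True).values
    sigma = logits.std(dim=-1, keepdim=True)
    return logits.masked_fill(logits < max_logit - n * sigma, float("-inf"))

# Usage: sample the next token from the filtered distribution.
logits = torch.randn(1, 32000)                 # stand-in vocabulary logits
probs = torch.softmax(top_nsigma_filter(logits, n=1.0), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
```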
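Second, the GRPO-with-weighted-rewards repo above: GRPO scores each completion against the other completions sampled for the same prompt, normalizing rewards by the group mean and standard deviation. The sketch below shows that group-relative advantage; the two reward signals and their weights are hypothetical stand-ins for the "weighted reward functions" the description mentions.

```python
import torch

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    # Group-relative advantage: normalize each completion's reward by the
    # mean and std of its group (the completions sampled for one prompt).
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + 1e-8)

# Hypothetical weighted reward: one prompt, a group of 4 completions.
w_correct, w_format = 1.0, 0.2                       # illustrative weights
correctness = torch.tensor([[1.0, 0.0, 1.0, 0.0]])   # task reward per completion
format_ok   = torch.tensor([[1.0, 1.0, 0.0, 1.0]])   # formatting reward
print(grpo_advantages(w_correct * correctness + w_format * format_ok))
```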