LLM360 / crystalcoder-train
Pre-training code for CrystalCoder 7B LLM
☆55 · Updated last year
Alternatives and similar repositories for crystalcoder-train
Users interested in crystalcoder-train are comparing it to the repositories listed below.
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Data preparation code for Amber 7B LLM ☆93 · Updated last year
- Open Implementations of LLM Analyses ☆107 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆93 · Updated last year
- Advanced Reasoning Benchmark Dataset for LLMs ☆47 · Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs ☆72 · Updated last year
- Pre-training code for Amber 7B LLM ☆169 · Updated last year
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions" ☆66 · Updated 2 years ago
- ☆55 · Updated last year
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆81 · Updated last year
- ☆39 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- The official repository for Inheritune ☆115 · Updated 9 months ago
- ☆17 · Updated 7 months ago
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆44 · Updated last year
- Spherically merge PyTorch/HF-format language models with minimal feature loss ☆141 · Updated 2 years ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆110 · Updated 11 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- Code repository for the c-BTM paper ☆108 · Updated 2 years ago
- ☆129 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- A repository for research on medium sized language models ☆78 · Updated last year
- ☆78 · Updated last year
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆29 · Updated last year
- ☆80 · Updated 8 months ago
- ☆44 · Updated last year
- EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… ☆27 · Updated 11 months ago
- Verifiers for LLM Reinforcement Learning ☆80 · Updated 7 months ago
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated 2 years ago
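One technique named in the list above, spherical (SLERP) merging of model weights, interpolates along the great circle between two flattened weight tensors instead of the straight line, which tends to preserve weight magnitudes better than plain averaging. Below is a minimal NumPy sketch of the idea; the `slerp` function name and signature are illustrative, not that repository's actual API:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.

    t is the interpolation factor in [0, 1]; t=0 returns v0, t=1 returns v1.
    """
    # Angle between the two (normalized) weight vectors
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:
        # Nearly parallel vectors: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    so = np.sin(omega)
    return np.sin((1 - t) * omega) / so * v0 + np.sin(t * omega) / so * v1

# Toy example: merge two orthogonal "layers" halfway between the endpoints
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
merged = slerp(0.5, a, b)
```

In a real merge this would be applied per layer (or per tensor) to the flattened parameters of two checkpoints, then reshaped back.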