LLM360 / amber-train
Pre-training code for Amber 7B LLM
☆170 · Updated last year
Alternatives and similar repositories for amber-train
Users interested in amber-train are comparing it to the repositories listed below.
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated last year
- Data preparation code for Amber 7B LLM ☆94 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆202 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆278 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆316 · Updated 2 years ago
- EvolKit is a framework for automatically enhancing the complexity of instructions used for fine-tuning Large Language Models ☆245 · Updated last year
- Scaling Data-Constrained Language Models ☆343 · Updated 6 months ago
- ModuleFormer is a MoE-based architecture with two types of experts: stick-breaking attention heads and feedforward experts ☆226 · Updated 3 months ago
- Spherically merge PyTorch/HF-format language models with minimal feature loss ☆141 · Updated 2 years ago
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- Code repository for the c-BTM paper ☆108 · Updated 2 years ago
- DSIR large-scale data selection framework for language model training ☆266 · Updated last year
- ☆95 · Updated 2 years ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆152 · Updated last year
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆102 · Updated last year
- Repository for the paper "Shepherd: A Critic for Language Model Generation" ☆221 · Updated 2 years ago
- 🚢 Data Toolkit for Sailor Language Models ☆95 · Updated 10 months ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆202 · Updated 2 years ago
- Official repository for Inheritune ☆118 · Updated 10 months ago