epfLLM / Megatron-LLM
Distributed trainer for LLMs
☆557 · Updated 9 months ago
Alternatives and similar repositories for Megatron-LLM:
Users interested in Megatron-LLM are comparing it to the libraries listed below.
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆585 · Updated 11 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆700 · Updated 4 months ago
- Scalable toolkit for efficient model alignment ☆722 · Updated this week
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆687 · Updated 10 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,194 · Updated 4 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆451 · Updated 11 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs); see the DPO sketch after this list. ☆804 · Updated last week
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆541 · Updated 11 months ago
- Minimalistic large language model 3D-parallelism training ☆1,483 · Updated this week
- Official repository for ORPO ☆437 · Updated 8 months ago
- Official repository for LongChat and LongEval ☆519 · Updated 8 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,421 · Updated 10 months ago
- ☆496 · Updated 3 months ago
- Generative Representational Instruction Tuning ☆596 · Updated last month
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆534 · Updated 2 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆640 · Updated 8 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,449 · Updated 11 months ago
- All available datasets for Instruction Tuning of Large Language Models ☆242 · Updated last year
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆307 · Updated 4 months ago
- Code for fine-tuning Platypus family LLMs using LoRA ☆626 · Updated last year
- Codebase for Merging Language Models (ICML 2024) ☆795 · Updated 9 months ago
- Large Context Attention ☆684 · Updated 3 weeks ago
- Official PyTorch implementation of QA-LoRA ☆126 · Updated 11 months ago
- Official repository for ICLR 2025 paper "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient an… ☆630 · Updated last week
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆443 · Updated 10 months ago
- LOMO: LOw-Memory Optimization ☆980 · Updated 7 months ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning (see the noise-injection sketch after this list) ☆389 · Updated 9 months ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆614 · Updated 7 months ago
- A simple and effective LLM pruning approach. ☆712 · Updated 6 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆296 · Updated last year
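For the HALOs entry above: a minimal sketch of the DPO objective that family of libraries implements, assuming the summed per-response log-probabilities have already been computed. The function name and tensor layout are illustrative, not the library's actual API.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Each input: shape (batch,), the summed log-prob of a full response
    # under the trained policy or the frozen reference model.
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the policy to prefer the chosen response over the rejected one;
    # beta controls how far the policy may drift from the reference.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```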
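And for the NEFTune entry: the core trick is a single noise injection at the embedding layer during finetuning. This is a sketch under the paper's stated scaling (uniform noise bounded by alpha / sqrt(seq_len * dim)); the wrapper function here is hypothetical, not the repository's published code.

```python
import torch

def neftune_noise(embeddings: torch.Tensor, alpha: float = 5.0,
                  training: bool = True) -> torch.Tensor:
    # embeddings: (batch, seq_len, dim) output of the token embedding layer.
    if not training:
        return embeddings  # noise is applied only while finetuning
    _, seq_len, dim = embeddings.shape
    scale = alpha / (seq_len * dim) ** 0.5
    # Uniform(-scale, +scale) noise, as described in the NEFTune paper.
    noise = torch.empty_like(embeddings).uniform_(-scale, scale)
    return embeddings + noise
```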