ShinoharaHare / LLM-Training
A distributed training framework for large language models powered by Lightning.
☆22 · Updated last week
Alternatives and similar repositories for LLM-Training
Users interested in LLM-Training are comparing it to the repositories listed below.
- Official code for the ACL 2024 paper: Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New … ☆52 · Updated last year
- ☆14 · Updated 8 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆86 · Updated 8 months ago
- Unofficial implementation of AlpaGasus ☆92 · Updated last year
- [Kaggle-2nd] Lightweight yet Effective Chinese LLM. ☆51 · Updated last month
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible… ☆74 · Updated 3 weeks ago
- Repository for "TESS-2: A Large-Scale, Generalist Diffusion Language Model" ☆47 · Updated 5 months ago
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆99 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- [ACL 2024 Demo] SeaLLMs - Large Language Models for Southeast Asia ☆170 · Updated last year
- ☆47 · Updated 2 weeks ago
- A method that directly addresses the modality gap by aligning speech tokens with the corresponding text transcription during the tokenizat… ☆84 · Updated last month
- Code for the ICML 2025 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆41 · Updated last month
- We systematically studied the factors that influence how LLMs generate benchmarks. Using our code, you can generate high-quality QA datas… ☆19 · Updated 2 months ago
- Evaluation code for benchmarking VLMs on Traditional Chinese understanding ☆12 · Updated 3 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆166 · Updated last month
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆81 · Updated last year
- ☆17 · Updated last year
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆123 · Updated last year
- A collection of resources about Llama2 ☆44 · Updated 11 months ago
- ☆12 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 6 months ago
- Fine-tune Llama2 with a Traditional Chinese dataset ☆38 · Updated 2 years ago
- This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1… ☆132 · Updated last year
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆371 · Updated last month
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆225 · Updated 4 months ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆127 · Updated 4 months ago
- ☆71 · Updated 8 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆147 · Updated 10 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 10 months ago