ShinoharaHare / LLM-Training
A distributed training framework for large language models powered by Lightning.
☆24 Updated 6 months ago
Alternatives and similar repositories for LLM-Training
Users interested in LLM-Training are comparing it to the libraries listed below
- [Kaggle-2nd] Lightweight yet Effective Chinese LLM. ☆52 Updated 7 months ago
- Generative Fusion Decoding (GFD) is a novel framework for integrating Large Language Models (LLMs) into multi-modal text recognition syst… ☆87 Updated 6 months ago
- Official code for the ACL 2024 paper: Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New … (the chat-vector arithmetic is sketched after this list) ☆59 Updated last year
- Evaluation code for benchmarking VLMs in Traditional Chinese understanding ☆13 Updated last month
- Taiwanese Hokkien LLMs (台灣閩南語大型語言模型) ☆54 Updated last year
- A method that directly addresses the modality gap by aligning speech tokens with the corresponding text transcription during the tokenizat… ☆109 Updated 4 months ago
- Fine-tunes Llama 2 on a Traditional Chinese dataset ☆39 Updated 2 years ago
- Repository for "TESS-2: A Large-Scale, Generalist Diffusion Language Model" ☆54 Updated 11 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆97 Updated 2 months ago
- ☆51 Updated last month
- [ACL 2024 Demo] SeaLLMs - Large Language Models for Southeast Asia ☆173 Updated last year
- Code for the ICML 2025 paper "SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models" ☆23 Updated this week
- A Traditional-Chinese instruction-following model with datasets based on Alpaca. ☆137 Updated 2 years ago
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization ☆106 Updated 7 months ago
- A collection of resources about Llama 2 ☆44 Updated last year
- Due to the huge vocabulary size (151,936) of Qwen models, the Embedding and LM Head weights are excessively heavy. Therefore, this projec… ☆32 Updated 3 weeks ago
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind (the accept/reject loop is sketched after this list) ☆106 Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆249 Updated 10 months ago
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible… ☆95 Updated last week
- Evaluate your agent memory on real-world dialogues, not LLM-simulated dialogues. ☆36 Updated 6 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆176 Updated last year
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆132 Updated 10 months ago
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆188 Updated 2 months ago
- The official implementation of the paper "Memory Decoder: A Pretrained, Plug-and-Play Memory for Large Language Models" (NeurIPS 2025 Pos… ☆68 Updated 4 months ago
- ☆21 Updated last year
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆143 Updated 9 months ago
- ☆13 Updated last year
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆85 Updated 2 years ago
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆30 Updated this week
- Code for the ICML 2025 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆49 Updated 7 months ago
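
For the chat-vector entry above, the core idea reduces to weight arithmetic between checkpoints. Below is a minimal sketch in PyTorch with Hugging Face `transformers`, assuming all three checkpoints share the same architecture and parameter names; the Llama 2 model names are illustrative, and `my-org/llama-2-7b-zh-cp` is a hypothetical continually pretrained checkpoint, not something from the listed repo:

```python
# Hedged sketch of chat-vector weight arithmetic (not the paper's official code).
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")       # base pretrained LM
chat = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")  # its chat-tuned counterpart
target = AutoModelForCausalLM.from_pretrained("my-org/llama-2-7b-zh-cp")      # hypothetical continually pretrained LM

with torch.no_grad():
    base_sd = base.state_dict()
    chat_sd = chat.state_dict()
    # Chat vector = chat weights - base weights; adding it to the target
    # transfers instruction-following behaviour without further training.
    for name, param in target.state_dict().items():
        param.add_(chat_sd[name] - base_sd[name])

target.save_pretrained("llama-2-7b-zh-chat")  # illustrative output path
```

If the continually pretrained model extended the vocabulary, the embedding and LM-head rows would need special handling, which this sketch skips.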
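
Similarly, for the speculative-sampling entry, the paper's accept/reject rule can be shown in isolation. A toy sketch, assuming `p` and `q` are already-computed next-token distributions from the target and draft models; the function name and tensor shapes are mine, not from the listed repo:

```python
# Toy sketch of one speculative-sampling round (Chen et al., DeepMind).
import torch

def speculative_step(p: torch.Tensor, q: torch.Tensor, draft_tokens: torch.Tensor) -> list[int]:
    """One accept/reject round of speculative sampling.

    q: [K, V] draft-model distributions for the K proposed positions.
    p: [K+1, V] target-model distributions (one extra row for the bonus token).
    draft_tokens: [K] tokens sampled from q.
    """
    out: list[int] = []
    for i, x in enumerate(draft_tokens):
        # Accept draft token x with probability min(1, p(x) / q(x)).
        if torch.rand(()) < torch.clamp(p[i, x] / q[i, x], max=1.0):
            out.append(int(x))
        else:
            # On rejection, resample from the residual max(0, p - q), renormalized;
            # this correction keeps the output distributed exactly according to p.
            residual = torch.clamp(p[i] - q[i], min=0.0)
            out.append(int(torch.multinomial(residual / residual.sum(), 1)))
            return out  # the first rejection ends the round
    # All K drafts accepted: sample one free extra token from the target.
    out.append(int(torch.multinomial(p[len(draft_tokens)], 1)))
    return out
```

In the full decoding loop this round repeats, with the draft model re-proposing from the accepted prefix, so several tokens can be emitted per target-model forward pass.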