ShinoharaHare / LLM-Training
A distributed training framework for large language models powered by Lightning.
☆24 · Updated 6 months ago
Alternatives and similar repositories for LLM-Training
Users interested in LLM-Training are comparing it to the libraries listed below.
- [Kaggle-2nd] Lightweight yet effective Chinese LLM. ☆52 · Updated 7 months ago
- Official code for the ACL 2024 paper: Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New … (see the chat-vector sketch after this list) ☆59 · Updated last year
- Generative Fusion Decoding (GFD) is a novel framework for integrating Large Language Models (LLMs) into multi-modal text recognition syst… ☆87 · Updated 6 months ago
- Fine-tunes Llama 2 on a Traditional Chinese dataset. ☆39 · Updated 2 years ago
- A method that directly addresses the modality gap by aligning speech tokens with the corresponding text transcription during the tokenizat… ☆109 · Updated 4 months ago
- A Traditional-Chinese instruction-following model with datasets based on Alpaca. ☆137 · Updated 2 years ago
- A collection of resources about Llama2. ☆44 · Updated last year
- [ACL 2024 Demo] SeaLLMs - Large Language Models for Southeast Asia. ☆173 · Updated last year
- ☆13 · Updated last year
- Large language models for Taiwanese Hokkien (Taiwanese Hokkien LLMs). ☆54 · Updated last year
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆85 · Updated 2 years ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- ☆50 · Updated last month
- Repository for "TESS-2: A Large-Scale, Generalist Diffusion Language Model". ☆54 · Updated 11 months ago
- Evaluation code for benchmarking VLMs on Traditional Chinese understanding. ☆13 · Updated last month
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization. ☆106 · Updated 7 months ago
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible… ☆95 · Updated last week
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆188 · Updated 2 months ago
- ☆79 · Updated 8 months ago
- Code for the paper "Patch-Level Training for Large Language Models". ☆97 · Updated 2 months ago
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… ☆61 · Updated 5 months ago
- [EMNLP'25] Code for the paper "MT-R1-Zero: Advancing LLM-based Machine Translation via R1-Zero-like Reinforcement Learning". ☆66 · Updated 9 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens". ☆150 · Updated last year
- Implementation of speculative sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind (see the sketch after this list). ☆106 · Updated last year
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … (see the GQA sketch after this list) ☆189 · Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024). ☆249 · Updated 10 months ago
- MediaTek Research (聯發創新基地) is dedicated to research on foundation models. We put our research into models suited to Traditional Chinese users and, where licensing permits, release the models for academic research or industrial use. ☆255 · Updated 4 months ago
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT. ☆132 · Updated 10 months ago
- Efficient Infinite Context Transformers with Infini-attention: PyTorch implementation + QwenMoE implementation + training script + 1M cont… ☆86 · Updated last year
- ☆218 · Updated 2 months ago
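
A few of the techniques named above are compact enough to sketch. The Chat Vector paper builds on simple weight arithmetic: subtracting a base model's weights from its chat-tuned counterpart yields a "chat vector" that can be added to a model continually pre-trained on a new language. A minimal sketch, assuming Hugging Face `transformers` checkpoints with matching architectures (the target-model path is an illustrative placeholder, not the paper's exact setup):

```python
# Chat-vector sketch: chat_vector = chat - base, applied to a continually
# pre-trained target model. Checkpoint paths are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
chat = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
target = AutoModelForCausalLM.from_pretrained("path/to/continually-pretrained-model")

base_sd, chat_sd = base.state_dict(), chat.state_dict()
with torch.no_grad():
    for name, param in target.named_parameters():
        # Skip tensors that do not line up across checkpoints (e.g. resized embeddings).
        if name in base_sd and base_sd[name].shape == param.shape:
            param += chat_sd[name] - base_sd[name]  # add the chat vector

target.save_pretrained("chat-vector-merged")
```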
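The speculative-sampling entry follows the DeepMind paper's accept/reject scheme: a small draft model proposes a few tokens and the large target model verifies them all in one forward pass, recovering the target model's exact output distribution. A simplified sketch for batch size 1, assuming Hugging Face-style models that return `.logits` (KV caching and batching are omitted for clarity):

```python
# Sketch of one speculative-sampling round: the draft model proposes gamma
# tokens, the target model accepts/rejects them in a single forward pass.
import torch
import torch.nn.functional as F

@torch.no_grad()
def speculative_step(target, draft, input_ids, gamma=4):
    """Returns input_ids extended by 1..gamma+1 tokens (batch size 1 assumed)."""
    # 1) Draft gamma tokens autoregressively with the cheap model.
    draft_ids, q_probs = input_ids, []
    for _ in range(gamma):
        q = F.softmax(draft(draft_ids).logits[:, -1, :], dim=-1)
        q_probs.append(q)
        draft_ids = torch.cat([draft_ids, torch.multinomial(q, 1)], dim=-1)

    # 2) Score the whole drafted continuation with the target model at once.
    p_all = F.softmax(target(draft_ids).logits, dim=-1)
    n_prompt = input_ids.shape[1]

    # 3) Accept drafted token x with probability min(1, p(x) / q(x)).
    for i in range(gamma):
        tok = draft_ids[:, n_prompt + i]
        p, q = p_all[:, n_prompt + i - 1, :], q_probs[i]
        if torch.rand(1).item() < (p[0, tok] / q[0, tok]).clamp(max=1.0).item():
            continue  # accepted; check the next drafted token
        # Rejected: resample from the residual distribution norm(max(p - q, 0)).
        residual = (p - q).clamp(min=0)
        residual = residual / residual.sum(dim=-1, keepdim=True)
        new_tok = torch.multinomial(residual, 1)
        return torch.cat([draft_ids[:, : n_prompt + i], new_tok], dim=-1)

    # 4) All gamma tokens accepted: sample one bonus token from the target.
    return torch.cat([draft_ids, torch.multinomial(p_all[:, -1, :], 1)], dim=-1)
```

In the best case each round emits gamma + 1 tokens for a single target-model forward pass, which is where the decoding speedup comes from.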
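Grouped-query attention, the subject of the GQA entry, reduces the number of K/V heads so that groups of query heads share a single K/V head, shrinking the KV cache with little quality loss. A minimal sketch with illustrative dimensions (causal masking via PyTorch's built-in `scaled_dot_product_attention`; RoPE and KV caching omitted):

```python
# Sketch of grouped-query attention (GQA): n_q_heads query heads share
# n_kv_heads key/value heads. Dimensions are illustrative.
import torch
import torch.nn.functional as F
from torch import nn

class GroupedQueryAttention(nn.Module):
    def __init__(self, d_model=512, n_q_heads=8, n_kv_heads=2):
        super().__init__()
        assert n_q_heads % n_kv_heads == 0
        self.n_q, self.n_kv = n_q_heads, n_kv_heads
        self.head_dim = d_model // n_q_heads
        self.wq = nn.Linear(d_model, n_q_heads * self.head_dim, bias=False)
        self.wk = nn.Linear(d_model, n_kv_heads * self.head_dim, bias=False)
        self.wv = nn.Linear(d_model, n_kv_heads * self.head_dim, bias=False)
        self.wo = nn.Linear(n_q_heads * self.head_dim, d_model, bias=False)

    def forward(self, x):
        b, t, _ = x.shape
        q = self.wq(x).view(b, t, self.n_q, self.head_dim).transpose(1, 2)
        k = self.wk(x).view(b, t, self.n_kv, self.head_dim).transpose(1, 2)
        v = self.wv(x).view(b, t, self.n_kv, self.head_dim).transpose(1, 2)
        # Each group of n_q // n_kv query heads attends to the same K/V head.
        group = self.n_q // self.n_kv
        k = k.repeat_interleave(group, dim=1)
        v = v.repeat_interleave(group, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.wo(out.transpose(1, 2).reshape(b, t, -1))
```

For example, `GroupedQueryAttention()(torch.randn(2, 16, 512))` returns a `(2, 16, 512)` tensor; with 2 K/V heads for 8 query heads, the KV cache is 4x smaller than in full multi-head attention.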