ShinoharaHare / LLM-Training
A distributed training framework for large language models powered by Lightning.
☆22 · Updated 3 months ago
Alternatives and similar repositories for LLM-Training
Users interested in LLM-Training are comparing it to the libraries listed below.
- [Kaggle-2nd] Lightweight yet Effective Chinese LLM. ☆52 · Updated 4 months ago
- Official code for the ACL 2024 paper: Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New … ☆55 · Updated last year
- Generative Fusion Decoding (GFD) is a novel framework for integrating Large Language Models (LLMs) into multi-modal text recognition syst… ☆85 · Updated 3 months ago
- [ACL 2024 Demo] SeaLLMs - Large Language Models for Southeast Asia ☆173 · Updated last year
- A collection of resources about Llama2 ☆44 · Updated last year
- Evaluation code for benchmarking VLMs in Traditional Chinese understanding ☆13 · Updated 6 months ago
- ☆49 · Updated 2 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆89 · Updated 11 months ago
- ☆13 · Updated last year
- Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue (ACL 2024) ☆24 · Updated 2 weeks ago
- Repository for "TESS-2: A Large-Scale, Generalist Diffusion Language Model" ☆51 · Updated 8 months ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆181 · Updated last year
- Evaluate your agent memory on real-world dialogues, not LLM-simulated dialogues. ☆31 · Updated 3 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- ☆68 · Updated 5 months ago
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆125 · Updated 7 months ago
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆179 · Updated 7 months ago
- A method that directly addresses the modality gap by aligning speech tokens with the corresponding text transcription during the tokenizat… ☆96 · Updated last month
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆268 · Updated 9 months ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆126 · Updated last year
- [EMNLP'25] Code for the paper "MT-R1-Zero: Advancing LLM-based Machine Translation via R1-Zero-like Reinforcement Learning" ☆60 · Updated 6 months ago
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization ☆100 · Updated 5 months ago
- A Traditional-Chinese instruction-following model with datasets based on Alpaca. ☆137 · Updated 2 years ago
- ☆18 · Updated 10 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆162 · Updated 6 months ago
- A project for tri-modal LLM benchmarking and instruction tuning. ☆48 · Updated 7 months ago
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆104 · Updated last year
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆152 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆180 · Updated 4 months ago
- [ICML 2025 Oral] Mixture of Lookup Experts ☆53 · Updated 5 months ago