ShinoharaHare / LLM-Training
A distributed training framework for large language models powered by Lightning.
☆22 Updated last month
Alternatives and similar repositories for LLM-Training
Users interested in LLM-Training are comparing it to the libraries listed below.
- [Kaggle-2nd] Lightweight yet Effective Chinese LLM. ☆52 Updated 3 months ago
- [ACL 2024 Demo] SeaLLMs - Large Language Models for Southeast Asia ☆172 Updated last year
- Code for paper "Patch-Level Training for Large Language Models" ☆87 Updated 10 months ago
- Official code for the ACL 2024 paper: Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New … ☆54 Updated last year
- Generative Fusion Decoding (GFD) is a novel framework for integrating Large Language Models (LLMs) into multi-modal text recognition syst… ☆84 Updated last month
- ☆48 Updated last month
- A method that directly addresses the modality gap by aligning speech tokens with the corresponding text transcription during the tokenizat… ☆89 Updated 3 weeks ago
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆269 Updated 7 months ago
- Repository for "TESS-2: A Large-Scale, Generalist Diffusion Language Model" ☆50 Updated 7 months ago
- ☆62 Updated 4 months ago
- ☆18 Updated 9 months ago
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to 2048k tokens. ☆259 Updated last year
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆100 Updated last year
- Evaluation code for benchmarking VLMs in Traditional Chinese understanding ☆13 Updated 4 months ago
- A collection of resources about Llama2 ☆44 Updated last year
- Fine-tune Llama 2 on a Traditional Chinese dataset ☆39 Updated 2 years ago
- Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue (ACL 2024) ☆24 Updated last year
- A Traditional-Chinese instruction-following model with datasets based on Alpaca. ☆137 Updated 2 years ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆175 Updated 3 months ago
- [EMNLP'25] Code for paper "MT-R1-Zero: Advancing LLM-based Machine Translation via R1-Zero-like Reinforcement Learning" ☆57 Updated 5 months ago
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆83 Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆71 Updated last year
- Official GitHub repository for paper "SAKURA: On the Multi-hop Reasoning of Large Audio-Language Models Based on Speech and Audio Informa… ☆17 Updated last month
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆100 Updated 5 months ago
- Enable Next-sentence Prediction for Large Language Models with Faster Speed, Higher Accuracy and Longer Context ☆36 Updated last year
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆176 Updated 5 months ago
- Unofficial implementation of AlpaGasus ☆92 Updated 2 years ago
- ☆25 Updated 4 months ago
- Official release of the StyleTalk dataset. ☆69 Updated last year
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… ☆59 Updated last month