ShinoharaHare / LLM-Training
A distributed training framework for large language models powered by Lightning.
☆22Updated last month
Alternatives and similar repositories for LLM-Training
Users interested in LLM-Training are comparing it to the repositories listed below.
- Official code for the ACL 2024 paper: Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New …☆54Updated last year
- [Kaggle-2nd] Lightweight yet Effective Chinese LLM.☆51Updated 2 months ago
- ☆48Updated 2 weeks ago
- Fine-tune Llama 2 with a Traditional Chinese dataset☆39Updated 2 years ago
- ☆15Updated 8 months ago
- Generative Fusion Decoding (GFD) is a novel framework for integrating Large Language Models (LLMs) into multi-modal text recognition syst…☆84Updated last month
- A method that directly addresses the modality gap by aligning speech tokens with the corresponding text transcription during the tokenizat…☆88Updated this week
- Code for paper "Patch-Level Training for Large Language Models"☆86Updated 9 months ago
- [ACL 2024 Demo] SeaLLMs - Large Language Models for Southeast Asia☆171Updated last year
- Repository for "TESS-2: A Large-Scale, Generalist Diffusion Language Model"☆48Updated 6 months ago
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible…☆77Updated this week
- ☆12Updated last year
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind☆99Updated last year
- A Traditional-Chinese instruction-following model with datasets based on Alpaca.☆137Updated 2 years ago
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge.☆82Updated last year
- Evaluation code for benchmarking VLMs on Traditional Chinese understanding☆13Updated 4 months ago
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed".☆174Updated 5 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning☆173Updated 2 months ago
- We systematically studied the factors that influence LLM-generated benchmarks. Using our code, you can generate high-quality QA datas…☆19Updated 3 months ago
- Nano repo for RL training of LLMs☆63Updated last week
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊☆267Updated 7 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens"☆149Updated last year
- Unofficial implementation of AlpaGasus☆92Updated last year
- ☆110Updated last year
- A collection of resources about Llama 2☆44Updated 11 months ago
- [Preprint] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification.☆392Updated last week
- Taiwanese Hokkien LLMs (台灣閩南語大型語言模型)☆45Updated last year
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691)☆124Updated last year
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to 2048k tokens.☆244Updated last year
- [ACL 2025, Main Conference, Oral] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process☆30Updated last year
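Several of the listed repositories build on weight arithmetic between checkpoints. As a rough illustration of the chat-vector idea from the first entry (subtract a base model's weights from its chat-tuned variant, then add the resulting delta to another base model of the same architecture), here is a minimal sketch; the dictionaries stand in for real model state dicts, and all names (`chat_vector`, `apply_chat_vector`, `layer.w`) are hypothetical, not the repository's actual API.

```python
# Minimal sketch of chat-vector weight arithmetic, assuming all three
# models share the same architecture and parameter names.
# Scalars stand in for full weight tensors (hypothetical toy example).

def chat_vector(chat_weights, base_weights):
    """Per-parameter delta between a chat model and its base."""
    return {k: chat_weights[k] - base_weights[k] for k in base_weights}

def apply_chat_vector(target_weights, delta, scale=1.0):
    """Add the (optionally scaled) chat vector to a target base model."""
    return {k: target_weights[k] + scale * delta[k] for k in target_weights}

# Toy "state dicts" with a single parameter.
base   = {"layer.w": 0.0}  # original base model
chat   = {"layer.w": 1.0}  # chat-tuned variant of the base
target = {"layer.w": 5.0}  # a different base model to transfer chat ability to

delta  = chat_vector(chat, base)           # {"layer.w": 1.0}
merged = apply_chat_vector(target, delta)  # {"layer.w": 6.0}
```

In practice the same elementwise operations would run over every tensor in the models' state dicts; the `scale` factor lets the chat behavior be blended in partially.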