MoFHeka / LLaMA-Megatron
A LLaMA1/LLaMA2 Megatron implementation.
☆28 · Updated last year
Alternatives and similar repositories for LLaMA-Megatron:
Users interested in LLaMA-Megatron are comparing it to the libraries listed below.
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆93 · Updated last year
- ☆84 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆39 · Updated 11 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆85 · Updated 4 months ago
- [ICLR 2025] PEARL: parallel speculative decoding with adaptive draft length ☆47 · Updated this week
- Finetuning LLaMA with RLHF (Reinforcement Learning with Human Feedback) based on DeepSpeed Chat ☆112 · Updated last year
- A flexible and efficient training framework for large-scale alignment tasks ☆311 · Updated 2 weeks ago
- Repository of LV-Eval Benchmark ☆58 · Updated 6 months ago
- NTK scaled version of ALiBi position encoding in Transformer. ☆67 · Updated last year
- code for Scaling Laws of RoPE-based Extrapolation ☆70 · Updated last year
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆61 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆59 · Updated last year
- A prototype repo for hybrid pipeline-parallel and distributed-data-parallel training, with comments on core code snippets. Feel free to… ☆55 · Updated last year
- Code implementation of Dynamic NTK-ALiBi for Baichuan: longer-context inference without fine-tuning ☆47 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆153 · Updated 8 months ago
- Counting-Stars (★) ☆79 · Updated 6 months ago
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆74 · Updated 3 months ago
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continued pre-training improves … ☆33 · Updated 2 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆143 · Updated 5 months ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆56 · Updated 10 months ago
- ☆116 · Updated this week
- An easy-to-use package for implementing SmoothQuant for LLMs ☆93 · Updated 9 months ago
- ☆96 · Updated 5 months ago
- ☆58 · Updated 3 months ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆65 · Updated last year
- ☆44 · Updated 8 months ago
- How to train an LLM tokenizer ☆141 · Updated last year
- ☆14 · Updated last year
- Multi-Candidate Speculative Decoding ☆34 · Updated 10 months ago