MoFHeka / LLaMA-Megatron
A LLaMA1/LLaMA2 Megatron implementation.
☆28 · Updated last year
Alternatives and similar repositories for LLaMA-Megatron:
Users interested in LLaMA-Megatron are comparing it to the libraries listed below.
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆92 · Updated 11 months ago
- ☆84 · Updated last year
- Repository of LV-Eval Benchmark ☆58 · Updated 5 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆84 · Updated 3 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆38 · Updated 10 months ago
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continued pre-training improves … ☆31 · Updated last month
- Counting-Stars (★) ☆78 · Updated 5 months ago
- ☆73 · Updated 6 months ago
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆109 · Updated last year
- A prototype repo for hybrid training of pipeline parallel and distributed data parallel with comments on core code snippets. Feel free to… ☆53 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆70 · Updated last year
- ☆55 · Updated 2 months ago
- A flexible and efficient training framework for large-scale alignment tasks ☆281 · Updated this week
- An Experiment on Dynamic NTK Scaling RoPE ☆62 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆154 · Updated 7 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆306 · Updated 4 months ago
- ☆45 · Updated 7 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆138 · Updated 4 months ago
- [ICLR 2025] PEARL: parallel speculative decoding with adaptive draft length ☆32 · Updated 5 months ago
- ☆106 · Updated last year
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ☆153 · Updated 8 months ago
- ☆128 · Updated 9 months ago
- Transformer-related optimization, including BERT and GPT ☆39 · Updated last year
- ☆217 · Updated 8 months ago
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆73 · Updated 2 months ago
- ☆94 · Updated 4 months ago
- NTK-scaled version of the ALiBi position encoding in Transformers. ☆67 · Updated last year
- Distributed IO-aware Attention algorithm ☆18 · Updated 5 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆47 · Updated 5 months ago