hhnqqq / GemmaLongText
☆16 · Updated last year
Alternatives and similar repositories for GemmaLongText
Users interested in GemmaLongText are comparing it to the libraries listed below.
- Longitudinal Evaluation of LLMs via Data Compression ☆33 · Updated last year
- Due to the huge vocabulary size (151,936) of Qwen models, the Embedding and LM Head weights are excessively heavy. Therefore, this projec… ☆32 · Updated 2 weeks ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated 2 years ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 · Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆138 · Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆47 · Updated last year
- ☆125 · Updated last year
- qwen-nsa ☆87 · Updated 3 months ago
- A prototype repo for hybrid training of pipeline parallel and distributed data parallel with comments on core code snippets. Feel free to… ☆57 · Updated 2 years ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆126 · Updated last year
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆120 · Updated 8 months ago
- Code for paper "Patch-Level Training for Large Language Models" ☆96 · Updated 2 months ago
- ☆66 · Updated last year
- ☆15 · Updated 2 years ago
- The code for LaRA Benchmark ☆46 · Updated 7 months ago
- Low-bit optimizers for PyTorch ☆137 · Updated 2 years ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆258 · Updated last year
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆54 · Updated last year
- ☆87 · Updated 5 months ago
- ☆109 · Updated 6 months ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆98 · Updated last year
- Patches for Hugging Face Transformers to save memory ☆33 · Updated 7 months ago
- Unveiling Super Experts in Mixture-of-Experts Large Language Models ☆35 · Updated 3 months ago
- ☆36 · Updated last year
- ☆81 · Updated last month
- ☆39 · Updated 6 months ago
- ☆127 · Updated 7 months ago
- ☆51 · Updated last year