hhnqqq / GemmaLongText
☆16 · Updated last year
Alternatives and similar repositories for GemmaLongText
Users interested in GemmaLongText are comparing it to the repositories listed below.
- Repository of LV-Eval Benchmark ☆70 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- Longitudinal Evaluation of LLMs via Data Compression ☆33 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆45 · Updated last year
- ☆49 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆257 · Updated 10 months ago
- Due to the huge vocabulary size (151,936) of Qwen models, the Embedding and LM Head weights are excessively heavy. Therefore, this projec… ☆28 · Updated last year
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 · Updated 2 years ago
- ☆118 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated 2 years ago
- A prototype repo for hybrid training of pipeline parallel and distributed data parallel with comments on core code snippets. Feel free to… ☆57 · Updated 2 years ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… ☆68 · Updated 2 months ago
- [ICML'25] Official code of paper "Fast Large Language Model Collaborative Decoding via Speculation" ☆28 · Updated 4 months ago
- Train LLMs (bloom, llama, baichuan2-7b, chatglm3-6b) with DeepSpeed pipeline mode. Faster than zero/zero++/fsdp. ☆98 · Updated last year
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆124 · Updated 9 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data, with all details released. ☆218 · Updated 3 months ago
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆64 · Updated last year
- The code for the LaRA Benchmark ☆44 · Updated 5 months ago
- qwen-nsa ☆83 · Updated 2 weeks ago
- Unveiling Super Experts in Mixture-of-Experts Large Language Models ☆29 · Updated last month
- Scaling Preference Data Curation via Human-AI Synergy