ZhuiyiTechnology / GAU-alpha
A Transformer model based on the Gated Attention Unit (preview version)
☆98 Updated 2 years ago
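GAU-alpha builds on the Gated Attention Unit introduced in "Transformer Quality in Linear Time" (Hua et al., 2022). As a rough orientation only, a single-head GAU layer can be sketched in numpy as follows; the weight names and toy shapes here are illustrative assumptions, not the repo's actual API:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gau(x, Wu, Wv, Wz, Wo, gq, bq, gk, bk):
    """One Gated Attention Unit applied to a single sequence x of shape (n, d)."""
    n = x.shape[0]
    u = relu(x @ Wu)               # gating branch, (n, e)
    v = relu(x @ Wv)               # value branch, (n, e)
    z = relu(x @ Wz)               # shared low-dimensional features, (n, s)
    q = z * gq + bq                # queries and keys are cheap per-dimension
    k = z * gk + bk                # affine transforms of the shared z
    attn = relu(q @ k.T / n) ** 2  # squared-ReLU attention scores, (n, n)
    return (u * (attn @ v)) @ Wo   # gate the attended values, project to (n, d)

# Toy shapes: sequence length 8, model dim 16, expanded dim 32, shared dim 4
rng = np.random.default_rng(0)
n, d, e, s = 8, 16, 32, 4
x = rng.standard_normal((n, d))
out = gau(x,
          Wu=rng.standard_normal((d, e)), Wv=rng.standard_normal((d, e)),
          Wz=rng.standard_normal((d, s)), Wo=rng.standard_normal((e, d)),
          gq=np.ones(s), bq=np.zeros(s), gk=np.ones(s), bk=np.zeros(s))
print(out.shape)  # (8, 16)
```

The key design point is that GAU replaces the separate attention and feed-forward sublayers with one gated block, using a single shared projection `z` for queries and keys and a squared-ReLU in place of softmax.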
Alternatives and similar repositories for GAU-alpha
Users interested in GAU-alpha are comparing it to the repositories listed below
- FLASHQuad_pytorch ☆67 Updated 3 years ago
- An upgraded version of RoFormer ☆152 Updated 2 years ago
- TencentLLMEval is a comprehensive and extensive benchmark for artificial evaluation of large models that includes task trees, standards, … ☆38 Updated 2 months ago
- A Tight-fisted Optimizer ☆48 Updated 2 years ago
- Lion and Adam optimization comparison ☆61 Updated 2 years ago
- A simple trial of Ladder Side-Tuning on CLUE ☆21 Updated 2 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆123 Updated last year
- A paper list of pre-trained language models (PLMs). ☆80 Updated 3 years ago
- Simple experiments with the R-Drop method on Chinese tasks ☆91 Updated 3 years ago
- RoFormer V1 & V2 in PyTorch ☆498 Updated 3 years ago
- Implementations of several positional encoding schemes for Transformers ☆44 Updated 3 years ago
- NTK-scaled version of ALiBi position encoding in Transformer. ☆68 Updated last year
- SuperCLUE-Math6: exploring a new generation of Chinese-native multi-turn, multi-step mathematical reasoning datasets ☆56 Updated last year
- Arena Contest 3: example code and baseline implementation for the large-scale pre-training tuning competition ☆38 Updated 2 years ago
- A concise, easy-to-use TinyBert: a pre-trained language model built by knowledge distillation from BERT ☆265 Updated 4 years ago
- Must-read papers on improving efficiency for pre-trained language models. ☆103 Updated 2 years ago
- ☆53 Updated 3 years ago
- Finetune CPM-2 ☆82 Updated 2 years ago
- ICLR 2023 - Tailoring Language Generation Models under Total Variation Distance ☆21 Updated 2 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆195 Updated 2 years ago
- PaddlePaddle Trustworthy AI ☆186 Updated 2 years ago
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆206 Updated last year
- Truly “Deep Learning for Humans” ☆141 Updated 3 years ago
- Shuffling hundreds of GB of files in Python ☆33 Updated 3 years ago
- Simple experiments with Pattern-Exploiting Training on Chinese ☆171 Updated 4 years ago
- Chinese instruction-tuning datasets ☆131 Updated last year
- Simple experiments with the P-tuning method on Chinese ☆139 Updated 4 years ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆56 Updated last year
- Apply the Circular to the Pretraining Model ☆37 Updated 3 years ago
- Adversarial Training for NLP in Keras ☆46 Updated 5 years ago