ZhuiyiTechnology / roformer
Rotary Transformer
☆1,079 · Mar 21, 2022 · Updated 3 years ago
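RoFormer's core idea is rotary position embedding (RoPE): queries and keys are rotated by a position-dependent angle before the dot product, so attention scores depend only on relative positions. Below is a minimal NumPy sketch of that idea, not the repository's own code; the split-half pairing and the 10000^(-2i/d) frequency schedule are assumptions of this illustration.

```python
# Minimal sketch of rotary position embedding (RoPE).
# Illustrative only; conventions (split-half pairing, base=10000) are assumptions.
import numpy as np

def rotary_embed(x, base=10000.0):
    """Apply rotary position embedding to x of shape (seq_len, dim), dim even."""
    seq_len, dim = x.shape
    half = dim // 2
    # Per-pair rotation frequency: theta_i = base^(-2i / dim)
    inv_freq = base ** (-np.arange(half) * 2.0 / dim)          # (half,)
    angles = np.arange(seq_len)[:, None] * inv_freq[None, :]   # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # Rotate each (x1_i, x2_i) pair by its position-dependent angle.
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

# Rotating queries and keys makes the score depend on relative position:
# <R_m q, R_n k> = <q, R_{n-m} k>.
q = rotary_embed(np.random.randn(8, 64))
k = rotary_embed(np.random.randn(8, 64))
scores = q @ k.T
```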
Alternatives and similar repositories for roformer
Users interested in roformer are comparing it to the libraries listed below.
- RoFormer V1 & V2 pytorch · ☆519 · May 18, 2022 · Updated 3 years ago
- Fast and memory-efficient exact attention · ☆22,231 · Updated this week
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch · ☆804 · Jan 30, 2026 · Updated 2 weeks ago
- Rectified Rotary Position Embeddings · ☆388 · May 20, 2024 · Updated last year
- Ongoing research training transformer models at scale · ☆15,162 · Updated this week
- [EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821 · ☆3,643 · Oct 16, 2024 · Updated last year
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities · ☆22,021 · Jan 23, 2026 · Updated 3 weeks ago
- A BERT for retrieval and generation · ☆860 · Feb 26, 2021 · Updated 4 years ago
- An upgraded version of SimBERT (SimBERTv2)! · ☆444 · Mar 21, 2022 · Updated 3 years ago
- Keras implementation of transformers for humans · ☆5,421 · Nov 11, 2024 · Updated last year
- Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab. · ☆3,156 · Jan 22, 2024 · Updated 2 years ago
- A Transformer model based on the Gated Attention Unit (preview version) · ☆98 · Feb 24, 2023 · Updated 2 years ago
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) · ☆8,989 · Feb 6, 2026 · Updated last week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. · ☆20,619 · Updated this week
- GLM (General Language Model) · ☆3,416 · Nov 3, 2023 · Updated 2 years ago
- Example models using DeepSpeed · ☆6,785 · Updated this week
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" · ☆6,491 · Jan 14, 2026 · Updated 3 weeks ago
- Transformer related optimization, including BERT, GPT · ☆6,392 · Mar 27, 2024 · Updated last year
- A python library for highly configurable transformers - easing model architecture search and experimentation. · ☆49 · Nov 30, 2021 · Updated 4 years ago
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" · ☆13,233 · Dec 17, 2024 · Updated last year
- An upgraded version of RoFormer · ☆154 · Aug 11, 2022 · Updated 3 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing · ☆49 · Jan 27, 2022 · Updated 4 years ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… · ☆9,491 · Feb 6, 2026 · Updated last week
- LightSeq: A High Performance Library for Sequence Processing and Generation · ☆3,304 · May 16, 2023 · Updated 2 years ago
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… · ☆14,351 · Updated this week
- Train transformer language models with reinforcement learning. · ☆17,360 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. · ☆10,336 · Feb 5, 2026 · Updated last week
- Chinese BERT with words as the basic unit · ☆474 · Nov 18, 2021 · Updated 4 years ago
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) · ☆7,677 · Jul 25, 2023 · Updated 2 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. · ☆41,578 · Updated this week
- PyTorch extensions for high performance and large scale training. · ☆3,397 · Apr 26, 2025 · Updated 9 months ago
- An implementation of Performer, a linear attention-based transformer, in Pytorch · ☆1,172 · Feb 2, 2022 · Updated 4 years ago
- A PyTorch Extension: Tools for easy mixed precision and distributed training in Pytorch · ☆8,910 · Jan 26, 2026 · Updated 2 weeks ago
- GlobalPointer: a unified treatment of nested and non-nested NER · ☆259 · May 2, 2021 · Updated 4 years ago
- Keras implementation of Finite Scalar Quantization · ☆84 · Oct 31, 2023 · Updated 2 years ago
- Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo · ☆3,106 · May 9, 2024 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 · ☆2,224 · Aug 14, 2025 · Updated 5 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs · ☆10,835 · Jun 10, 2024 · Updated last year
- Foundation Architecture for (M)LLMs · ☆3,130 · Apr 11, 2024 · Updated last year