Rotary Transformer
☆1,104 · Mar 21, 2022 · Updated 4 years ago
Alternatives and similar repositories for roformer
Users interested in roformer are comparing it to the libraries listed below.
- RoFormer V1 & V2 pytorch ☆524 · May 18, 2022 · Updated 3 years ago
- Fast and memory-efficient exact attention ☆23,344 · Updated this week
- Upgraded version of RoFormer ☆155 · Aug 11, 2022 · Updated 3 years ago
- Upgraded version of SimBERT (SimBERTv2)! ☆443 · Mar 21, 2022 · Updated 4 years ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch (see the RoPE sketch after this list) ☆806 · Jan 30, 2026 · Updated 2 months ago
- Rectified Rotary Position Embeddings ☆391 · May 20, 2024 · Updated last year
- A BERT for retrieval and generation ☆860 · Feb 26, 2021 · Updated 5 years ago
- Transformer model based on the Gated Attention Unit (preview version) ☆97 · Feb 24, 2023 · Updated 3 years ago
- Ongoing research training transformer models at scale ☆16,073 · Updated this week
- Keras implementation of transformers for humans ☆5,420 · Nov 11, 2024 · Updated last year
- [EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821 ☆3,649 · Oct 16, 2024 · Updated last year
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,086 · Jan 23, 2026 · Updated 2 months ago
- Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab ☆3,157 · Jan 22, 2024 · Updated 2 years ago
- GLM (General Language Model) ☆3,477 · Nov 3, 2023 · Updated 2 years ago
- Chinese BERT with words as the basic unit ☆477 · Nov 18, 2021 · Updated 4 years ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning ☆20,929 · Apr 10, 2026 · Updated last week
- An easy-to-use, scalable and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… ☆9,340 · Updated this week
- Example models using DeepSpeed ☆6,818 · Mar 30, 2026 · Updated 2 weeks ago
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,503 · Jan 14, 2026 · Updated 3 months ago
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆13,435 · Dec 17, 2024 · Updated last year
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆49 · Jan 27, 2022 · Updated 4 years ago
- Transformer-related optimization, including BERT and GPT ☆6,412 · Mar 27, 2024 · Updated 2 years ago
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,300 · May 16, 2023 · Updated 2 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation ☆48 · Nov 30, 2021 · Updated 4 years ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,608 · Updated this week
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) ☆7,662 · Jul 25, 2023 · Updated 2 years ago
- Train transformer language models with reinforcement learning ☆18,054 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective ☆42,141 · Updated this week
- Global Pointer: unified handling of nested and non-nested NER ☆261 · May 2, 2021 · Updated 4 years ago
- Hackable and optimized Transformers building blocks, supporting a composable construction ☆10,417 · Mar 30, 2026 · Updated 2 weeks ago
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,471 · Mar 30, 2026 · Updated 2 weeks ago
- Simple vector whitening to improve sentence embedding quality ☆487 · Jun 17, 2021 · Updated 4 years ago
- Open Language Pre-trained Model Zoo ☆1,006 · Nov 18, 2021 · Updated 4 years ago
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆2,244 · Aug 14, 2025 · Updated 8 months ago
- Keras implementation of Finite Scalar Quantization ☆85 · Oct 31, 2023 · Updated 2 years ago
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,177 · Feb 2, 2022 · Updated 4 years ago
- Pre-Training with Whole Word Masking for Chinese BERT (Chinese BERT-wwm model series) ☆10,197 · Jul 15, 2025 · Updated 9 months ago
- Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo ☆3,104 · May 9, 2024 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,870 · Jun 10, 2024 · Updated last year
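Several entries above (RoFormer V1 & V2, the rotary-embedding implementations, Rectified Rotary Position Embeddings) revolve around the same mechanism. For orientation, here is a minimal PyTorch sketch of rotary position embeddings (RoPE), not any particular repository's API: `apply_rope` is a hypothetical helper name, and the half-split channel pairing used below is one common implementation convention (the original paper interleaves adjacent channel pairs).

```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate channel pairs of x (shape: seq_len x dim) by position-dependent angles.

    Each channel pair is treated as a 2D point and rotated by m * theta_i,
    where m is the token position and theta_i = base^(-2i/dim). The dot
    product of two rotated vectors then depends only on their relative
    position, which is the key property from the RoFormer paper.
    """
    seq_len, dim = x.shape
    half = dim // 2
    theta = base ** (-torch.arange(half, dtype=torch.float32) * 2 / dim)   # (half,)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * theta   # (seq_len, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]        # half-split pairing convention (assumption)
    return torch.cat((x1 * cos - x2 * sin,   # standard 2D rotation of each pair
                      x1 * sin + x2 * cos), dim=-1)

# RoPE is applied to queries and keys only; values stay un-rotated.
q = apply_rope(torch.randn(128, 64))
k = apply_rope(torch.randn(128, 64))
attn_scores = (q @ k.T) / 64 ** 0.5
```

Because each pair is rotated by m·θᵢ, the score between a query at position m and a key at position n depends only on the offset n − m, which is why rotary embeddings encode relative position without any learned position table.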