Code for the ALiBi method for transformer language models (ICLR 2022)
☆555 · Updated Oct 30, 2023
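For context when comparing the repositories below: ALiBi drops learned positional embeddings and instead adds a static, head-specific linear penalty to the attention logits that grows with query-key distance. The following is a minimal, self-contained PyTorch sketch of that bias, not code taken from this repository; the function names and the power-of-two head-count assumption are illustrative only.

```python
import torch

def alibi_slopes(num_heads: int) -> torch.Tensor:
    # One slope per head, a geometric sequence starting at 2^(-8/num_heads).
    # Assumes num_heads is a power of two (the simple case described in the paper).
    start = 2 ** (-8.0 / num_heads)
    return torch.tensor([start ** (i + 1) for i in range(num_heads)])

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Relative offsets j - i: 0 on the diagonal, increasingly negative for keys
    # further to the left of the query. Future positions are clamped to 0 here and
    # assumed to be removed later by the usual causal mask.
    pos = torch.arange(seq_len)
    offsets = (pos[None, :] - pos[:, None]).clamp(max=0).float()   # (seq_len, seq_len)
    return alibi_slopes(num_heads)[:, None, None] * offsets        # (num_heads, seq_len, seq_len)

# Usage sketch: add the bias to the scaled attention logits before softmax, e.g.
#   scores = q @ k.transpose(-2, -1) / head_dim ** 0.5 + alibi_bias(num_heads, seq_len)
```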
Alternatives and similar repositories for attention_with_linear_biases
Users that are interested in attention_with_linear_biases are comparing it to the libraries listed below.
- PyTorch implementation of Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation ☆32 · Updated Dec 29, 2021
- ☆20 · Updated Oct 25, 2022
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,690 · Updated Apr 17, 2024
- Sequence modeling with Mega. ☆303 · Updated Jan 28, 2023
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated Apr 17, 2024
- Transformers at any scale ☆42 · Updated Jan 18, 2024
- Fast and memory-efficient exact attention ☆23,185 · Updated this week
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆139 · Updated Apr 30, 2024
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,694 · Updated Aug 14, 2024
- Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch ☆879 · Updated Oct 30, 2023
- Large Context Attention ☆770 · Updated Oct 13, 2025
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · Updated May 20, 2024
- Ring attention implementation with flash attention ☆1,003 · Updated Sep 10, 2025
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆664 · Updated Jun 1, 2024
- Accessible large language models via k-bit quantization for PyTorch. ☆8,107 · Updated this week
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Updated Apr 7, 2025
- Minimalistic large language model 3D-parallelism training ☆2,632 · Updated Apr 2, 2026
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆804 · Updated Jan 30, 2026
- Foundation Architecture for (M)LLMs ☆3,133 · Updated Apr 11, 2024
- OSLO: Open Source for Large-scale Optimization ☆175 · Updated Sep 9, 2023
- PyTorch extensions for high performance and large scale training. ☆3,404 · Updated Apr 26, 2025
- ☆22 · Updated Jul 27, 2023
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆789 · Updated Apr 24, 2023
- Long Range Arena for Benchmarking Efficient Transformers ☆788 · Updated Dec 16, 2023
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated Jul 26, 2021
- RoFormer V1 & V2 pytorch ☆522 · Updated May 18, 2022
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,596 · Updated Apr 2, 2026
- Language Modeling with the H3 State Space Model ☆522 · Updated Sep 29, 2023
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,065 · Updated Mar 7, 2024
- An example of fine-tuning kogpt with oslo. ☆23 · Updated Aug 26, 2022
- Structured state space sequence models ☆2,875 · Updated Jul 17, 2024
- ☆29 · Updated May 4, 2024
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,936 · Updated Dec 7, 2024
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,834 · Updated Jun 17, 2025
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,077 · Updated Jan 23, 2026
- Transformer related optimization, including BERT, GPT ☆6,410 · Updated Mar 27, 2024
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,812 · Updated Mar 27, 2026
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated Mar 12, 2024
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆450 · Updated Oct 16, 2024