Exploring finetuning public checkpoints on filtered 8K sequences from the Pile
☆116 · Mar 22, 2023 · Updated 3 years ago
Alternatives and similar repositories for Long-context-transformers
Users interested in Long-context-transformers are comparing it to the libraries listed below.
- My explorations into editing the knowledge and memories of an attention network ☆35 · Dec 8, 2022 · Updated 3 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Oct 9, 2022 · Updated 3 years ago
- Multi-Domain Expert Learning ☆67 · Jan 23, 2024 · Updated 2 years ago
- Automatically take good care of your preemptible TPUs ☆37 · May 15, 2023 · Updated 2 years ago
- ☆32 · Apr 9, 2026 · Updated 3 weeks ago
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick the top-k ☆47 · Jul 16, 2023 · Updated 2 years ago
- Recursive Bayesian Networks ☆11 · May 11, 2025 · Updated 11 months ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Mar 18, 2023 · Updated 3 years ago
- ☆30 · Oct 3, 2022 · Updated 3 years ago
- ☆44 · Mar 29, 2023 · Updated 3 years ago
- ☆20 · Jul 12, 2023 · Updated 2 years ago
- ☆162 · Sep 15, 2023 · Updated 2 years ago
- ☆13 · Feb 7, 2023 · Updated 3 years ago
- The dataset and code for PeerSum at EMNLP'23 ☆16 · Oct 20, 2025 · Updated 6 months ago
- Anh - LAION's multilingual assistant datasets and models ☆28 · Apr 5, 2023 · Updated 3 years ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆231 · Sep 6, 2024 · Updated last year
- Generate visual podcasts about novels using open source models ☆27 · Feb 15, 2023 · Updated 3 years ago
- ☆72 · May 22, 2023 · Updated 2 years ago
- Crawl & visualize ICLR papers and reviews ☆18 · Nov 5, 2022 · Updated 3 years ago
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆227 · Mar 25, 2026 · Updated last month
- A CLIP conditioned Decision Transformer ☆22 · Jul 14, 2021 · Updated 4 years ago
- Chrome extension for OA sites like arXiv and OpenReview: 1. navigate from a PDF back to its abstract page; 2. rename the PDF with the paper title ☆18 · Oct 12, 2023 · Updated 2 years ago
- ☆18 · Mar 10, 2023 · Updated 3 years ago
- An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols ☆16 · Aug 3, 2021 · Updated 4 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆102 · Feb 25, 2023 · Updated 3 years ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆78 · Sep 4, 2023 · Updated 2 years ago
- Implementation of the paper "Sentence Bottleneck Autoencoders from Transformer Language Models" ☆17 · Mar 14, 2022 · Updated 4 years ago
- ☆44 · Dec 28, 2022 · Updated 3 years ago
- BigKnow2022: Bringing Language Models Up to Speed ☆16 · Mar 27, 2023 · Updated 3 years ago
- ☆11 · Oct 11, 2023 · Updated 2 years ago
- Solution for the Foursquare - Location Matching competition ☆14 · Jul 8, 2022 · Updated 3 years ago
- ☆18 · May 28, 2021 · Updated 4 years ago
- Implementation of Zorro, Masked Multimodal Transformer, in Pytorch ☆98 · Oct 20, 2023 · Updated 2 years ago
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning", in Pytorch and Zeta ☆13 · Nov 11, 2024 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Jun 21, 2023 · Updated 2 years ago
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆99 · Dec 31, 2021 · Updated 4 years ago
- Minimal implementation of multiple PEFT methods for LLaMA fine-tuning ☆13 · May 7, 2023 · Updated 2 years ago
- JAX implementation of ViT-VQGAN ☆82 · Sep 21, 2022 · Updated 3 years ago
- ☆14 · Nov 20, 2022 · Updated 3 years ago