Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch
☆654 · Dec 27, 2024 · Updated last year
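For context, MEGABYTE models a long byte sequence by splitting it into fixed-size patches: a large global transformer attends across patch representations, while a small local transformer autoregressively decodes the bytes inside each patch. Below is a minimal, library-free sketch of just the patching step; the function name, padding byte, and patch size are illustrative and are not the repository's actual API.

```python
# Sketch of MEGABYTE-style byte patching (illustrative only; the real
# repo operates on torch tensors with learned byte/patch embeddings).

PAD = 0  # hypothetical padding byte value

def to_patches(seq, patch_size=4):
    """Right-pad `seq` to a multiple of `patch_size`, then split it into
    fixed-size patches. A global model would attend across patches,
    while a local model decodes the bytes within each patch."""
    remainder = len(seq) % patch_size
    if remainder:
        seq = seq + [PAD] * (patch_size - remainder)
    return [seq[i:i + patch_size] for i in range(0, len(seq), patch_size)]

patches = to_patches(list(b"million-byte"), patch_size=4)
print(patches)  # three patches of four byte values each
```

The point of the decomposition is cost: self-attention over patches plus self-attention within patches is far cheaper than full attention over every byte of a million-byte sequence.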
Alternatives and similar repositories for MEGABYTE-pytorch
Users interested in MEGABYTE-pytorch are comparing it to the libraries listed below.
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆226 · Mar 25, 2026 · Updated 2 weeks ago
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch ☆422 · Jan 6, 2025 · Updated last year
- My own attempt at a long context genomics model, leveraging recent advances in long context attention modeling (Flash Attention + other h… ☆54 · Jul 2, 2023 · Updated 2 years ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆231 · Sep 6, 2024 · Updated last year
- Implementation of MambaByte from "MambaByte: Token-free Selective State Space Model" in Pytorch and Zeta ☆125 · Mar 20, 2026 · Updated 2 weeks ago
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,458 · Mar 30, 2026 · Updated last week
- Implementation of GateLoop Transformer in Pytorch and Jax ☆92 · Jun 18, 2024 · Updated last year
- Convolutions for Sequence Modeling ☆911 · Jun 13, 2024 · Updated last year
- My attempts at applying SoundStream design on learned tokenization of text and then applying hierarchical attention to text generation ☆90 · Oct 11, 2024 · Updated last year
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,812 · Mar 27, 2026 · Updated last week
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,158 · Jan 11, 2024 · Updated 2 years ago
- The official implementation of "Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training" ☆991 · Jan 30, 2024 · Updated 2 years ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆122 · Oct 17, 2024 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,065 · Mar 7, 2024 · Updated 2 years ago
- Fast and memory-efficient exact attention ☆23,185 · Updated this week
- Implementation of SoundStorm, Efficient Parallel Audio Generation from Google DeepMind, in Pytorch ☆1,547 · Apr 24, 2025 · Updated 11 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,327 · Mar 6, 2025 · Updated last year
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆252 · Jan 23, 2024 · Updated 2 years ago
- Official implementation of ACL 2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … ☆14 · Aug 25, 2023 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,865 · Jun 10, 2024 · Updated last year
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in Pytorch ☆104 · Oct 10, 2023 · Updated 2 years ago
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(W), in Pytorch ☆2,183 · Nov 27, 2024 · Updated last year
- Beyond Language Models: Byte Models are Digital World Simulators ☆335 · Jun 6, 2024 · Updated last year
- Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch ☆2,618 · Jan 12, 2025 · Updated last year
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆34 · Aug 14, 2024 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆251 · Jun 6, 2025 · Updated 10 months ago
- An Open-source Streaming High-fidelity Neural Audio Codec ☆501 · Mar 4, 2025 · Updated last year
- ImageBind: One Embedding Space to Bind Them All ☆9,012 · Nov 21, 2025 · Updated 4 months ago
- Foundation Architecture for (M)LLMs ☆3,133 · Apr 11, 2024 · Updated last year
- The repository for the code of the UltraFastBERT paper ☆518 · Mar 24, 2024 · Updated 2 years ago
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,208 · Jul 11, 2024 · Updated last year
- Sequence modeling with Mega ☆303 · Jan 28, 2023 · Updated 3 years ago
- Efficient Transformers with Dynamic Token Pooling ☆68 · May 20, 2023 · Updated 2 years ago
- The RedPajama-Data repository contains code for preparing large datasets for training large language models ☆4,936 · Dec 7, 2024 · Updated last year
- Mamba SSM architecture ☆17,902 · Updated this week
- Implementation of the training framework proposed in Self-Rewarding Language Model, from Meta AI ☆1,407 · Apr 11, 2024 · Updated last year
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆207 · Aug 26, 2023 · Updated 2 years ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆561 · Dec 28, 2024 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆343 · Feb 23, 2025 · Updated last year