OpenEvaByte / evabyte
EvaByte: Efficient Byte-level Language Models at Scale
☆115 · Updated 9 months ago
Alternatives and similar repositories for evabyte
Users interested in evabyte are comparing it to the libraries listed below.
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆175 · Updated last year
- ☆123 · Updated 11 months ago
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated last year
- [TMLR 2026] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models ☆121 · Updated 11 months ago
- ☆91 · Updated last year
- A repository for research on medium-sized language models. ☆77 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆252 · Updated last year
- RWKV-7: Surpassing GPT ☆104 · Updated last year
- Repository for the paper Stream of Search: Learning to Search in Language ☆152 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Simple & Scalable Pretraining for Neural Architecture Research ☆307 · Updated 2 months ago
- ☆74 · Updated last year
- ☆112 · Updated last year
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆235 · Updated 6 months ago
- ☆54 · Updated last year
- Esoteric Language Models ☆110 · Updated 2 months ago
- ☆82 · Updated last year
- Code repository for the c-BTM paper ☆108 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆205 · Updated last year
- Official repo for Learning to Reason for Long-Form Story Generation ☆74 · Updated 9 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆140 · Updated 5 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- ☆56 · Updated last year
- Universal Reasoning Model ☆122 · Updated 3 weeks ago
- ☆41 · Updated last year
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆125 · Updated 6 months ago
- ☆59 · Updated 2 months ago
- accompanying material for sleep-time compute paper ☆119 · Updated 9 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆247 · Updated 8 months ago