OpenEvaByte / evabyte
EvaByte: Efficient Byte-level Language Models at Scale
☆114 · Updated 8 months ago
Alternatives and similar repositories for evabyte
Users interested in evabyte are comparing it to the repositories listed below.
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆174 · Updated 11 months ago
- ☆124 · Updated 10 months ago
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated 11 months ago
- ☆91 · Updated last year
- This is the official repository for Inheritune. ☆117 · Updated 10 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆73 · Updated 8 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆306 · Updated last month
- A repository for research on medium sized language models. ☆77 · Updated last year
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆234 · Updated 5 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- ☆112 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆62 · Updated last year
- Repository for the paper Stream of Search: Learning to Search in Language ☆152 · Updated 11 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 11 months ago
- ☆54 · Updated last year
- RWKV-7: Surpassing GPT ☆103 · Updated last year
- ☆55 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆109 · Updated 10 months ago
- ☆26 · Updated 11 months ago
- Universal Reasoning Model ☆99 · Updated 2 weeks ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆181 · Updated 6 months ago
- Storing long contexts in tiny caches with self-study ☆228 · Updated last month
- ☆81 · Updated last year
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆137 · Updated 4 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- Replicating O1 inference-time scaling laws ☆91 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆190 · Updated 10 months ago
- Token Omission Via Attention ☆128 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆104 · Updated 7 months ago