HazyResearch / cartridges
Storing long contexts in tiny caches with self-study
☆226 · Updated 2 weeks ago
Alternatives and similar repositories for cartridges
Users interested in cartridges are comparing it to the libraries listed below.
- Simple & Scalable Pretraining for Neural Architecture Research ☆305 · Updated 2 weeks ago
- rl from zero pretrain, can it be done? yes. ☆282 · Updated 2 months ago
- MoE training for Me and You and maybe other people ☆239 · Updated last week
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆174 · Updated 11 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆111 · Updated 8 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆73 · Updated 8 months ago
- Long context evaluation for large language models ☆224 · Updated 9 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆190 · Updated 9 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 10 months ago
- Curated collection of community environments ☆195 · Updated last week
- ☆115 · Updated 2 weeks ago
- ☆136 · Updated 9 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆336 · Updated this week
- OpenTinker is an RL-as-a-Service infrastructure for foundation models ☆229 · Updated this week
- Super basic implementation (gist-like) of RLMs with REPL environments. ☆286 · Updated 2 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆108 · Updated 9 months ago
- ☆234 · Updated 5 months ago
- code for training & evaluating Contextual Document Embedding models ☆201 · Updated 7 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 9 months ago
- NSA Triton Kernels written with GPT5 and Opus 4.1 ☆69 · Updated 4 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆63 · Updated 7 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆99 · Updated 5 months ago
- Understand and test language model architectures on synthetic tasks. ☆246 · Updated 2 months ago
- Memory optimized Mixture of Experts ☆72 · Updated 4 months ago
- ☆68 · Updated 7 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 8 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 2 months ago
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated 11 months ago
- smolLM with Entropix sampler on pytorch ☆149 · Updated last year
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆84 · Updated 9 months ago