jerber / lang-jepa
☆118 · Updated 8 months ago
Alternatives and similar repositories for lang-jepa
Users interested in lang-jepa are comparing it to the libraries listed below.
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 7 months ago
- Open source interpretability artefacts for R1. ☆157 · Updated 4 months ago
- Plotting (entropy, varentropy) for small LMs ☆98 · Updated 3 months ago
- smolLM with Entropix sampler on pytorch ☆150 · Updated 9 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 5 months ago
- Train your own SOTA deductive reasoning model ☆104 · Updated 5 months ago
- smol models are fun too ☆92 · Updated 9 months ago
- ☆138 · Updated 4 months ago
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆228 · Updated last month
- ☆98 · Updated 2 weeks ago
- rl from zero pretrain, can it be done? yes. ☆250 · Updated last week
- Repository for the paper Stream of Search: Learning to Search in Language ☆150 · Updated 6 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆324 · Updated 9 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆191 · Updated last year
- Simple Transformer in Jax ☆139 · Updated last year
- Exploring Applications of GRPO ☆246 · Updated last month
- ☆89 · Updated 7 months ago
- look how they massacred my boy ☆63 · Updated 10 months ago
- Draw more samples ☆193 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆107 · Updated 4 months ago
- ☆101 · Updated last month
- ☆130 · Updated 5 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆448 · Updated 10 months ago
- ☆177 · Updated last week
- DeMo: Decoupled Momentum Optimization ☆190 · Updated 8 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆318 · Updated 10 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆177 · Updated 5 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆152 · Updated last month
- Long context evaluation for large language models ☆220 · Updated 5 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆245 · Updated last week