druidowm / OccamLLM
☆14 · Updated last year
Alternatives and similar repositories for OccamLLM
Users interested in OccamLLM are comparing it to the libraries listed below.
- Train your own SOTA deductive reasoning model ☆107 · Updated 10 months ago
- ☆136 · Updated 10 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆73 · Updated 9 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆175 · Updated last year
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆109 · Updated 10 months ago
- train entropix like a champ! ☆20 · Updated last year
- Open source interpretability artefacts for R1. ☆167 · Updated 8 months ago
- ☆105 · Updated last year
- Entropy Based Sampling and Parallel CoT Decoding ☆17 · Updated last year
- Collection of LLM completions for reasoning-gym task datasets ☆30 · Updated 6 months ago
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆234 · Updated 6 months ago
- ☆116 · Updated last week
- ☆151 · Updated 4 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆190 · Updated 10 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆88 · Updated 10 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆152 · Updated 11 months ago
- Curated collection of community environments ☆204 · Updated last week
- ☆123 · Updated 10 months ago
- look how they massacred my boy ☆63 · Updated last year
- A 7B parameter model for mathematical reasoning ☆41 · Updated 11 months ago
- Storing long contexts in tiny caches with self-study ☆231 · Updated last month
- Compiling useful links, papers, benchmarks, ideas, etc. ☆46 · Updated 10 months ago
- Plotting (entropy, varentropy) for small LMs ☆99 · Updated 7 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆65 · Updated 8 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆112 · Updated 9 months ago
- ☆68 · Updated 7 months ago
- rl from zero pretrain, can it be done? yes. ☆286 · Updated 3 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆114 · Updated 8 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆126 · Updated 3 months ago
- smolLM with Entropix sampler on pytorch ☆149 · Updated last year