druidowm / OccamLLM · Links
☆13 · Updated 11 months ago
Alternatives and similar repositories for OccamLLM
Users interested in OccamLLM are comparing it to the libraries listed below.
- ☆135 · Updated 6 months ago
- look how they massacred my boy · ☆63 · Updated 11 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) · ☆105 · Updated 7 months ago
- train entropix like a champ! · ☆20 · Updated 11 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. · ☆172 · Updated 8 months ago
- Train your own SOTA deductive reasoning model · ☆107 · Updated 7 months ago
- Plotting (entropy, varentropy) for small LMs · ☆98 · Updated 4 months ago
- A 7B parameter model for mathematical reasoning · ☆40 · Updated 7 months ago
- ☆101 · Updated 9 months ago
- ☆103 · Updated 3 weeks ago
- Simple Transformer in Jax · ☆139 · Updated last year
- Official repo for Learning to Reason for Long-Form Story Generation · ☆72 · Updated 5 months ago
- Entropy Based Sampling and Parallel CoT Decoding · ☆17 · Updated last year
- NSA Triton Kernels written with GPT5 and Opus 4.1 · ☆65 · Updated last month
- ☆123 · Updated 7 months ago
- ☆124 · Updated 9 months ago
- Storing long contexts in tiny caches with self-study · ☆194 · Updated 3 weeks ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. · ☆184 · Updated 7 months ago
- Repository for the paper Stream of Search: Learning to Search in Language · ☆151 · Updated 8 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a 62% absolute increase in evaluation accuracy. · ☆53 · Updated 5 months ago
- Modded vLLM to run pipeline parallelism over public networks · ☆39 · Updated 4 months ago
- Code for the NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' · ☆230 · Updated 2 months ago
- Sparse autoencoders for Contra text embedding models · ☆25 · Updated last year
- Open source interpretability artefacts for R1. · ☆160 · Updated 5 months ago
- ☆21 · Updated 9 months ago
- rl from zero pretrain, can it be done? yes. · ☆275 · Updated last week
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? · ☆77 · Updated 6 months ago
- EvaByte: Efficient Byte-level Language Models at Scale · ☆109 · Updated 5 months ago
- ☆40 · Updated last year
- An introduction to LLM Sampling · ☆79 · Updated 9 months ago