magicproduct / hash-hop
Long context evaluation for large language models
☆224 · Updated 7 months ago
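For context, hash-hop evaluates long-context recall by asking a model to follow chains of random hash assignments scattered through a long prompt. The sketch below is a minimal, illustrative generator for that style of multi-hop task; it is not the repository's actual code, and every name and parameter here is an assumption chosen for the example.

```python
import random
import string

def random_hash(length: int = 8) -> str:
    """Generate a random hex-like token standing in for a 'hash'."""
    return "".join(random.choices(string.hexdigits.lower(), k=length))

def make_hashhop_prompt(num_pairs: int = 100, hops: int = 3):
    """Build one multi-hop hash-chain example (illustrative only).

    Returns (prompt, start, answer_chain): the prompt lists shuffled
    `a = b` assignments; the expected completion is the chain of hashes
    reached by following `hops` assignments from `start`.
    """
    # One chain of linked hashes, plus unrelated distractor pairs.
    chain = [random_hash() for _ in range(hops + 1)]
    pairs = list(zip(chain[:-1], chain[1:]))
    while len(pairs) < num_pairs:
        pairs.append((random_hash(), random_hash()))
    random.shuffle(pairs)

    prompt = "\n".join(f"{a} = {b}" for a, b in pairs)
    prompt += f"\n\nComplete the chain starting from {chain[0]}:"
    return prompt, chain[0], chain[1:]

if __name__ == "__main__":
    prompt, start, answer = make_hashhop_prompt(num_pairs=20, hops=3)
    print(prompt)
    print("expected hops:", " -> ".join(answer))
```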
Alternatives and similar repositories for hash-hop
Users interested in hash-hop are comparing it to the libraries listed below.
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 9 months ago
- Experiments on speculative sampling with Llama models ☆125 · Updated 2 years ago
- ☆135 · Updated 7 months ago
- ☆105 · Updated this week
- ☆142 · Updated last month
- Storing long contexts in tiny caches with self-study ☆201 · Updated last week
- Draw more samples ☆194 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 8 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆187 · Updated 7 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆110 · Updated 6 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆151 · Updated 8 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 10 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆72 · Updated 6 months ago
- Just a bunch of benchmark logs for different LLMs ☆118 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆201 · Updated last year
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆181 · Updated 2 weeks ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆248 · Updated 8 months ago
- Code repository for the c-BTM paper ☆107 · Updated 2 years ago
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆233 · Updated 3 months ago
- A puzzle to learn about prompting ☆135 · Updated 2 years ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆277 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆241 · Updated 4 months ago
- rl from zero pretrain, can it be done? yes. ☆277 · Updated 3 weeks ago
- Training-Ready RL Environments + Evals ☆132 · Updated this week
- Understand and test language model architectures on synthetic tasks. ☆233 · Updated last month
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆450 · Updated last year
- ☆122 · Updated 8 months ago
- smolLM with Entropix sampler on pytorch ☆150 · Updated 11 months ago
- Commit0: Library Generation from Scratch ☆169 · Updated 5 months ago
- smol models are fun too ☆93 · Updated 11 months ago