magicproduct / hash-hop
Long context evaluation for large language models
☆225 · Updated 10 months ago
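For context, a minimal sketch of the kind of multi-hop hash-chain prompt a long-context eval like hash-hop tests; the random-string generator, prompt wording, and single-query format here are assumptions for illustration, not the repo's actual harness:

```python
import random
import string

def rand_hash(n: int = 16) -> str:
    # Random alphanumeric string standing in for an incompressible hash.
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))

def make_hashhop_prompt(n_chains: int = 100, hops: int = 3):
    # Build many hash chains, emit their pairwise assignments in shuffled
    # order (so the model must attend across the whole context), then ask
    # for the final hash reachable from one chain's starting hash.
    lines, chains = [], []
    for _ in range(n_chains):
        chain = [rand_hash() for _ in range(hops + 1)]
        chains.append(chain)
        lines.extend(f"{a} = {b}" for a, b in zip(chain, chain[1:]))
    random.shuffle(lines)
    start, *_, answer = random.choice(chains)
    prompt = "\n".join(lines) + f"\nFinal hash reachable from {start}: "
    return prompt, answer
```

Scoring would then be exact-match against `answer`; chain count and hop depth control how much context the model must actually hold.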
Alternatives and similar repositories for hash-hop
Users interested in hash-hop are comparing it to the libraries listed below.
- Storing long contexts in tiny caches with self-study ☆229 · Updated last month
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆174 · Updated 11 months ago
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- rl from zero pretrain, can it be done? yes. ☆286 · Updated 3 months ago
- ☆116 · Updated this week
- ☆135 · Updated 9 months ago
- MoE training for Me and You and maybe other people ☆319 · Updated last week
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 11 months ago
- Extract full next-token probabilities via language model APIs ☆248 · Updated last year
- Official repo for Learning to Reason for Long-Form Story Generation ☆73 · Updated 8 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆370 · Updated last year (a minimal sketch of this mechanism follows the list)
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 11 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆113 · Updated 8 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 10 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 10 months ago
- ☆131 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆198 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆203 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Updated 3 months ago
- ☆151 · Updated 4 months ago
- Understand and test language model architectures on synthetic tasks. ☆249 · Updated this week
- Code for NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆234 · Updated 5 months ago
- ☆128 · Updated 2 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆136 · Updated last year
- Repository for the paper Stream of Search: Learning to Search in Language ☆152 · Updated 11 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆278 · Updated last year
- smolLM with Entropix sampler on pytorch ☆149 · Updated last year
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆126 · Updated 3 months ago
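As flagged in the memory-layers entry above, here is a minimal PyTorch sketch of a sparsely activated key-value memory layer. It assumes a flat learned key table with top-k retrieval; the linked work scales this idea up (e.g. with product-key factorization), so treat this as an illustration of the mechanism, not that repo's implementation:

```python
import torch
import torch.nn as nn

class MemoryLayer(nn.Module):
    """Sparse key-value memory: each token scores a learned key table,
    reads only its top-k value vectors, and returns their weighted sum.
    Capacity (n_keys stored values) grows without a matching growth in
    the number of values actually read per token (topk)."""

    def __init__(self, d_model: int, n_keys: int = 4096, topk: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(n_keys, d_model) / d_model**0.5)
        self.values = nn.Embedding(n_keys, d_model)
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        scores = x @ self.keys.T                 # (batch, seq, n_keys)
        # NOTE: flat scoring is O(n_keys * d_model) per token; product-key
        # factorization keeps scoring cheap so that only the topk value
        # reads scale with memory size.
        w, idx = scores.topk(self.topk, dim=-1)  # pick k memory slots
        w = torch.softmax(w, dim=-1)
        v = self.values(idx)                     # (batch, seq, topk, d_model)
        return (w.unsqueeze(-1) * v).sum(dim=-2)
```

Shape check: `MemoryLayer(512)(torch.randn(2, 8, 512))` returns a `(2, 8, 512)` tensor, so the layer drops into a residual stream wherever a feed-forward block would go.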