magicproduct / hash-hop
Long context evaluation for large language models
☆217 Updated 3 months ago
Alternatives and similar repositories for hash-hop
Users interested in hash-hop are comparing it to the libraries listed below:
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆126 Updated 6 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆239 Updated 4 months ago
- prime-rl is a codebase for decentralized async RL training at scale ☆341 Updated this week
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 Updated 5 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆102 Updated 2 months ago
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 Updated last year
- Experiments on speculative sampling with Llama models ☆128 Updated 2 years ago
- Understand and test language model architectures on synthetic tasks. ☆217 Updated 2 weeks ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆191 Updated 10 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 Updated 10 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆140 Updated 4 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆190 Updated last year
- PyTorch implementation of models from the Zamba2 series. ☆182 Updated 5 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆303 Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆339 Updated 6 months ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆172 Updated 3 weeks ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆203 Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆235 Updated 2 weeks ago
- A puzzle to learn about prompting ☆128 Updated 2 years ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆148 Updated 4 months ago
- ☆180 Updated 2 months ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆394 Updated 7 months ago
- Normalized Transformer (nGPT) ☆184 Updated 7 months ago
- Experiments for efforts to train a new and improved t5 ☆77 Updated last year
- ☆127 Updated 3 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆277 Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆120 Updated 6 months ago
- Extract full next-token probabilities via language model APIs ☆247 Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆222 Updated last year
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆218 Updated 6 months ago