eqimp/hogwild_llm
Official PyTorch implementation of *Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache*
☆140 · Updated 5 months ago
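The core idea is that several inference workers decode in parallel while sharing one attention (KV) cache, so each worker can attend to tokens the others have already produced. Below is a minimal, model-free toy sketch of that shared-cache pattern; `SharedCache`, `step`, and the round-robin loop are illustrative stand-ins and not the repo's actual API, and the round-robin loop merely simulates what would be truly concurrent decoding in the real implementation.

```python
# Toy illustration of the concurrent-cache idea: several workers generate
# into one shared sequence, so each worker "sees" the others' partial output.
# All names here are hypothetical; see eqimp/hogwild_llm for the real
# PyTorch implementation.

from dataclasses import dataclass, field

@dataclass
class SharedCache:
    """Stands in for a shared KV cache: a growing list of (worker_id, token)."""
    entries: list = field(default_factory=list)

    def append(self, worker_id: int, token: str) -> None:
        self.entries.append((worker_id, token))

    def view(self) -> str:
        # Every worker conditions on the full interleaved history.
        return " ".join(f"[w{w}]{t}" for w, t in self.entries)

def step(worker_id: int, cache: SharedCache) -> str:
    # A real worker would run one LLM decoding step over cache.view();
    # here we emit a placeholder token that depends on the shared state.
    return f"tok{len(cache.entries)}"

cache = SharedCache()
for round_idx in range(3):
    for worker_id in (0, 1):  # round-robin stands in for true parallelism
        cache.append(worker_id, step(worker_id, cache))

print(cache.view())
# -> [w0]tok0 [w1]tok1 [w0]tok2 [w1]tok3 [w0]tok4 [w1]tok5
```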
Alternatives and similar repositories for hogwild_llm
Users interested in hogwild_llm are comparing it to the repositories listed below.
- OpenCoconut implements a latent reasoning paradigm where thoughts are generated before decoding. ☆175 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆115 · Updated 9 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆228 · Updated 3 months ago
- Training teachers with reinforcement learning to teach LLMs how to reason for test-time scaling. ☆358 · Updated 7 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆307 · Updated 2 months ago
- [ICLR 2026] Tina: Tiny Reasoning Models via LoRA ☆319 · Updated 4 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (see the memory-layer sketch after this list). Conceptually, spars… ☆371 · Updated last year
- Esoteric Language Models ☆111 · Updated this week
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆90 · Updated 10 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 11 months ago
- Accompanying material for the sleep-time compute paper ☆119 · Updated 9 months ago
- Memory-optimized Mixture of Experts ☆73 · Updated 6 months ago
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" ☆292 · Updated 2 months ago
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆220 · Updated 8 months ago
- RL from zero pretrain: can it be done? Yes. ☆286 · Updated 4 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆287 · Updated this week
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆251 · Updated last year
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆261 · Updated last week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆273 · Updated this week
- All information and news with respect to the Falcon-H1 series ☆108 · Updated 4 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆65 · Updated 9 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 (see the early-exit sketch after this list) ☆356 · Updated 2 weeks ago
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated last year
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆148 · Updated 3 months ago
- OpenTinker is an RL-as-a-Service infrastructure for foundation models ☆625 · Updated last week
- ☆84 · Updated 2 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 11 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆236 · Updated 11 months ago
- ☆208 · Updated last year
- [Preprint] RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments ☆177 · Updated 3 weeks ago
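Two of the entries above describe their mechanism concretely enough to sketch. First, the memory-layer idea: a trainable key-value table queried per token, so parameter count grows with the table while the readout stays sparse. The sketch below assumes a dense key table with a top-k readout; every name and shape is illustrative, not the referenced repo's implementation, and real memory layers typically use product-key decompositions so the lookup itself stays cheap (this toy scores all slots densely).

```python
# A minimal sketch of a memory layer: a trainable key-value table with a
# sparse top-k readout. Illustrative only; real implementations use
# product-key lookups so FLOPs grow far slower than parameter count.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    def __init__(self, d_model: int, num_slots: int, top_k: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.values = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model). Score every slot, keep only the top-k.
        scores = x @ self.keys.T                      # (batch, seq, num_slots)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)          # (batch, seq, top_k)
        picked = self.values[idx]                     # (batch, seq, top_k, d_model)
        return x + (weights.unsqueeze(-1) * picked).sum(dim=-2)

layer = MemoryLayer(d_model=64, num_slots=1024)
out = layer(torch.randn(2, 8, 64))
print(out.shape)  # torch.Size([2, 8, 64])
```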
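Second, the early-exit pattern behind LayerSkip-style decoding: run the layer stack incrementally and stop as soon as an intermediate next-token distribution clears a confidence threshold, instead of always running the full depth. The threshold, shared head, and layer modules below are assumptions for illustration; the actual LayerSkip method additionally verifies early-exit drafts with self-speculative decoding, which this sketch omits.

```python
# A rough sketch of early-exit decoding: stop at the first layer whose
# intermediate prediction is confident enough. Illustrative only.

import torch
import torch.nn as nn

d_model, vocab, num_layers = 64, 100, 6
layers = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
    for _ in range(num_layers)
)
lm_head = nn.Linear(d_model, vocab)  # shared head reused at every exit point

def early_exit_logits(h: torch.Tensor, threshold: float = 0.9):
    for depth, layer in enumerate(layers, start=1):
        h = layer(h)
        probs = lm_head(h[:, -1]).softmax(dim=-1)  # next-token distribution
        if probs.max().item() >= threshold:        # confident -> stop early
            return probs, depth
    return probs, num_layers                       # fell through: full depth

h = torch.randn(1, 8, d_model)
probs, depth = early_exit_logits(h)
print(f"exited at layer {depth} of {num_layers}")
```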