allenai / bolmo-core
Code for Bolmo: Byteifying the Next Generation of Language Models
☆112 · Updated 2 weeks ago
Alternatives and similar repositories for bolmo-core
Users who are interested in bolmo-core are comparing it to the libraries listed below.
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆250 · Updated this week
- The code repository of the paper: Competition and Attraction Improve Model Fusion ☆169 · Updated 4 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆306 · Updated last month
- Data recipes and robust infrastructure for training AI agents ☆75 · Updated this week
- EvaByte: Efficient Byte-level Language Models at Scale ☆114 · Updated 8 months ago
- Official Project Page for Deep Delta Learning (https://huggingface.co/papers/2601.00417) ☆282 · Updated this week
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆123 · Updated 5 months ago
- Train, tune, and infer Bamba model ☆137 · Updated 7 months ago
- The official repo for "Parallel-R1: Towards Parallel Thinking via Reinforcement Learning" ☆250 · Updated last month
- ☆93 · Updated 2 months ago
- All information and news with respect to Falcon-H1 series ☆95 · Updated 3 months ago
- Official JAX implementation of End-to-End Test-Time Training for Long Context ☆214 · Updated last week
- ☆62 · Updated 6 months ago
- WeDLM: The fastest diffusion language model with standard causal attention and native KV cache compatibility, delivering real speedups ov… ☆480 · Updated last week
- Accompanying material for sleep-time compute paper ☆118 · Updated 8 months ago
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated 11 months ago
- ToolOrchestra is an end-to-end RL training framework for orchestrating tools and agentic workflows. ☆450 · Updated 2 weeks ago
- Developer Asset Hub for NVIDIA Nemotron: a one-stop resource for training recipes, usage cookbooks, and full end-to-end reference exampl… ☆314 · Updated this week
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆104 · Updated 7 months ago
- Code for paper "The Markovian Thinker: Architecture-Agnostic Linear Scaling of Reasoning" ☆329 · Updated last month
- Pivotal Token Search ☆142 · Updated 3 weeks ago
- Ring-V2 is a reasoning MoE LLM provided and open-sourced by InclusionAI. ☆87 · Updated 2 months ago
- ☆151 · Updated 3 weeks ago
- Training teachers with reinforcement learning to make LLMs learn how to reason for test-time scaling ☆355 · Updated 6 months ago
- RLP: Reinforcement as a Pretraining Objective ☆222 · Updated 3 months ago
- Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B ☆559 · Updated last month
- ☆106 · Updated 6 months ago
- Large multi-modal models (L3M) pre-training. ☆224 · Updated 3 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆85 · Updated 9 months ago
- Open-source release accompanying Gao et al. 2025 ☆486 · Updated 3 weeks ago