tiiuae / Falcon-H1
All information and news about the Falcon-H1 series
☆106 · Updated 3 months ago
Alternatives and similar repositories for Falcon-H1
Users interested in Falcon-H1 are comparing it to the libraries listed below.
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆140 · Updated 5 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆307 · Updated last month
- EvaByte: Efficient Byte-level Language Models at Scale ☆115 · Updated 9 months ago
- Accompanying material for the sleep-time compute paper ☆119 · Updated 9 months ago
- [ICLR 2026] Official PyTorch Implementation of RLP: Reinforcement as a Pretraining Objective ☆226 · Updated this week
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆260 · Updated last week
- ☆93 · Updated this week
- Universal Reasoning Model ☆121 · Updated 2 weeks ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆88 · Updated 10 months ago
- Train, tune, and run inference with the Bamba model ☆138 · Updated 7 months ago
- Data recipes and robust infrastructure for training AI agents ☆84 · Updated this week
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆175 · Updated last year
- [ACL 2025] How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training ☆45 · Updated 6 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆127 · Updated 3 months ago
- The code repository of the paper: Competition and Attraction Improve Model Fusion ☆168 · Updated 5 months ago
- Code for Bolmo: Byteifying the Next Generation of Language Models ☆115 · Updated last month
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated last year
- Training teacher models with reinforcement learning to teach LLMs how to reason for test-time scaling. ☆358 · Updated 7 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆227 · Updated 2 months ago
- Memory-optimized Mixture of Experts ☆72 · Updated 6 months ago
- Esoteric Language Models ☆109 · Updated 2 months ago
- GRadient-INformed MoE ☆264 · Updated last year
- [EMNLP 2025] The official implementation for the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆102 · Updated 4 months ago
- Lightweight toolkit for training and fine-tuning 1.58-bit language models ☆109 · Updated 8 months ago
- Ring-V2 is a reasoning MoE LLM provided and open-sourced by InclusionAI. ☆89 · Updated 3 months ago
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" ☆288 · Updated 2 months ago
- Code for ExploreTom ☆90 · Updated 7 months ago
- ☆29 · Updated 2 months ago
- Lego for GRPO ☆30 · Updated 8 months ago
- The official repo for "Parallel-R1: Towards Parallel Thinking via Reinforcement Learning" ☆251 · Updated 2 months ago