facebookresearch / blt
Code for BLT research paper
☆1,513 · Updated last week
Alternatives and similar repositories for blt:
Users interested in blt are comparing it to the libraries listed below.
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,076 · Updated 3 months ago
- Large Concept Models: Language modeling in a sentence representation space ☆2,098 · Updated 2 months ago
- Recipes to scale inference-time compute of open models ☆1,058 · Updated 2 months ago
- A Self-adaptation Framework 🐙 that adapts LLMs for unseen tasks in real time! ☆1,043 · Updated 2 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆865 · Updated 2 months ago
- Official PyTorch implementation for "Large Language Diffusion Models" ☆1,520 · Updated 2 weeks ago
- Minimalistic large language model 3D-parallelism training ☆1,808 · Updated this week
- Pretraining code for a large-scale depth-recurrent language model ☆745 · Updated last week
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆991 · Updated last month
- Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch ☆1,296 · Updated last week
- A bibliography and survey of the papers surrounding o1 ☆1,187 · Updated 5 months ago
- Muon optimizer: >30% sample-efficiency gain with <3% wallclock overhead ☆577 · Updated last month
- 🚀 Efficient implementations of state-of-the-art linear attention models in Torch and Triton ☆2,305 · Updated this week
- Bringing BERT into modernity via both architecture changes and scaling ☆1,329 · Updated last month
- Muon is Scalable for LLM Training ☆1,029 · Updated 3 weeks ago
- SONAR, a new multilingual and multimodal fixed-size sentence embedding space, with a full suite of speech and text encoders and decoders ☆732 · Updated 3 weeks ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,438 · Updated last week
- ☆1,017 · Updated 4 months ago
- [ICLR 2025 Spotlight 🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆551 · Updated 2 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆716 · Updated last month
- NanoGPT (124M) in 3 minutes ☆2,501 · Updated this week
- Implementing DeepSeek R1's GRPO algorithm from scratch ☆1,184 · Updated last week
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ☆1,171 · Updated 2 weeks ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR ☆1,990 · Updated 8 months ago
- Understanding R1-Zero-Like Training: A Critical Perspective ☆882 · Updated last week
- Large Reasoning Models ☆802 · Updated 4 months ago
- Dream 7B, a large diffusion language model ☆572 · Updated 2 weeks ago
- Helpful tools and examples for working with flex-attention ☆726 · Updated 2 weeks ago
- AllenAI's post-training codebase ☆2,913 · Updated this week
- System 2 Reasoning Link Collection ☆826 · Updated last month