microsoft / ArchScale
Simple & Scalable Pretraining for Neural Architecture Research
☆306 · Updated last month
Alternatives and similar repositories for ArchScale
Users interested in ArchScale are comparing it to the repositories listed below.
- EvaByte: Efficient Byte-level Language Models at Scale · ☆114 · Updated 8 months ago
- Memory-optimized Mixture of Experts · ☆72 · Updated 5 months ago
- Storing long contexts in tiny caches with self-study · ☆231 · Updated last month
- Open-source release accompanying Gao et al. 2025 · ☆490 · Updated last month
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research · ☆277 · Updated this week
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) · ☆109 · Updated 10 months ago
- OpenCoconut implements a latent reasoning paradigm in which thoughts are generated before decoding · ☆175 · Updated last year
- Official PyTorch implementation of Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache · ☆139 · Updated 5 months ago
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… · ☆256 · Updated last week
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" · ☆249 · Updated 11 months ago
- PyTorch implementation of models from the Zamba2 series · ☆186 · Updated 11 months ago
- Dion optimizer algorithm · ☆416 · Updated 2 weeks ago
- Reverse Engineering Gemma 3n: Google's New Edge-Optimized Language Model · ☆259 · Updated 7 months ago
- All information and news about the Falcon-H1 series · ☆102 · Updated 3 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (see the sketch after this list) · ☆370 · Updated last year
- PyTorch-native post-training at scale · ☆595 · Updated this week
- Code for training & evaluating Contextual Document Embedding models · ☆202 · Updated 8 months ago
- 👷 Build compute kernels · ☆213 · Updated this week
- GRadient-INformed MoE · ☆264 · Updated last year
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? · ☆88 · Updated 10 months ago
- ☆224 · Updated last month
- MoE training for Me and You and maybe other people · ☆319 · Updated 2 weeks ago
- ArcticTraining is a framework designed to simplify and accelerate post-training for large language models (LLMs) · ☆269 · Updated this week
- ☆206 · Updated last year
- An extension of the nanoGPT repository for training small MoE models · ☆226 · Updated 10 months ago
- RL from zero pretrain: can it be done? Yes. · ☆286 · Updated 3 months ago
- Curated collection of community environments · ☆204 · Updated last week
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference · ☆331 · Updated 2 months ago
- Source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" · ☆283 · Updated last month
- Load compute kernels from the Hub · ☆376 · Updated this week
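
The memory-layers entry above ends mid-sentence, but the mechanism it names is concrete enough to sketch. Below is a minimal, illustrative PyTorch version of a trainable key-value memory layer, not that repo's actual code: a learned key table is scored against each token, only the top-k matching values are retrieved, and the value table holds the extra parameters. The class name `MemoryLayer` and all sizes are assumptions for illustration; real implementations typically use product-key decomposition so that scoring does not scale with table size, whereas this naive sketch scores every key for clarity.

```python
# Minimal sketch of a trainable key-value memory layer (illustrative only).
# Extra capacity lives in the value table; per token, only `topk` values are
# actually read, keeping the added compute small relative to the added
# parameters. Naive dense key scoring is used here for clarity; product-key
# memories avoid scoring all keys.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    def __init__(self, d_model: int, num_keys: int = 4096, topk: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_keys, d_model) * 0.02)
        self.values = nn.Embedding(num_keys, d_model)  # the "extra parameters"
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        scores = x @ self.keys.t()               # (batch, seq, num_keys)
        w, idx = scores.topk(self.topk, dim=-1)  # sparse: keep only top-k keys
        w = F.softmax(w, dim=-1)                 # normalize the selected scores
        v = self.values(idx)                     # (batch, seq, topk, d_model)
        return x + (w.unsqueeze(-1) * v).sum(dim=-2)  # residual + weighted values

layer = MemoryLayer(d_model=64, num_keys=1024)
out = layer(torch.randn(2, 16, 64))  # -> shape (2, 16, 64)
```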